We regularly talk about the benefits that application modernisation and cloud native architectures bring to customers (and rightly so), but there are also common pitfalls in the migration and application refactoring process. Over the last couple of years of working on customer projects, I have occasionally run into these “gotchas”. Ultimately they served as lessons and led to better implementations, but that doesn’t mean they have to remain out there for you to stumble over.
In this blog post, I will identify and describe some of these pitfalls so that you don’t have to run into them yourself.
Assume that there will be latency
As a best-practice guideline, cloud native applications should be built on the assumption that the underlying infrastructure can be unreliable and that latency can play a big part in how the application behaves. In my experience, however, this assumption is not always factored into the application’s design.
The distributed nature of data centres is great for resilience and availability, but it can also introduce latency between two endpoints, depending on their geography. Think about the difference in latency from Cape Town to a data centre in Ireland versus one in Johannesburg. Your application should account for the possibility of longer waits for responses, especially if you use data centres from multiple providers or multiple locations of the same provider.
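To make this concrete, here is a minimal Python sketch of one common mitigation: setting an explicit timeout and retrying with exponential backoff, rather than assuming the remote endpoint answers instantly. The endpoint URL, timeout and retry counts are illustrative assumptions, not values from any particular provider.

```python
import time
import urllib.error
import urllib.request


def fetch_with_backoff(url: str, attempts: int = 4, timeout: float = 5.0) -> bytes:
    """Fetch a URL while tolerating a slow or flaky link.

    An explicit timeout stops the caller hanging forever, and exponential
    backoff gives a distant or congested endpoint time to respond.
    """
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            if attempt == attempts - 1:
                raise  # Out of retries; let the caller decide what to do.
            wait = 2 ** attempt  # 1s, 2s, 4s between attempts
            print(f"Request failed ({exc}); retrying in {wait}s")
            time.sleep(wait)


# Usage (hypothetical endpoint in a far-away region):
# status = fetch_with_backoff("https://eu-west.example.com/api/status")
```

The exact numbers matter less than the principle: every remote call should have a timeout and a plan for what happens when the response is slow.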
Don’t hard-code dependencies
Keeping persistent connections to your application’s dependencies, such as a storage layer, is the traditional approach, but it is no longer the most efficient one. Cloud native architecture assumes that underpinning dependencies aren’t always available, or at least not perfectly reliable. Instead, cloud native applications connect when they need to, without holding up the rest of the application.
Consider the following example: a traditional application starting up might need to connect to a shared storage location to retrieve data. If that storage location isn’t immediately available, the application might fail to start or crash. With a cloud native approach, if the connection to the storage doesn’t succeed, the specific service requesting the storage keeps trying while the rest of the services that make up the application start normally. When the service eventually establishes a connection, it returns the result to the rest of the application, avoiding an application-wide outage.
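A minimal sketch of that pattern in Python might look like the following. The `StorageClient` class, its `_open` placeholder and the DSN are all hypothetical stand-ins for a real storage driver; the point is that the connection attempt runs in a background thread with retries, so startup of the rest of the application is never blocked.

```python
import threading
import time


class StorageClient:
    """Connects to a storage backend lazily, so a slow or unavailable
    dependency doesn't block the rest of the application from starting."""

    def __init__(self, dsn: str, retry_interval: float = 5.0):
        self.dsn = dsn
        self.retry_interval = retry_interval
        self._connection = None
        self._lock = threading.Lock()
        # Keep trying in the background instead of failing at startup.
        threading.Thread(target=self._connect_loop, daemon=True).start()

    def _connect_loop(self):
        while self._connection is None:
            try:
                conn = self._open(self.dsn)
                with self._lock:
                    self._connection = conn
            except ConnectionError:
                time.sleep(self.retry_interval)  # retry, don't crash

    def _open(self, dsn):
        # Placeholder: swap in your real database or object-store client here.
        raise ConnectionError("storage not reachable yet")

    def is_ready(self) -> bool:
        with self._lock:
            return self._connection is not None


# The rest of the application starts immediately and checks readiness later:
storage = StorageClient("s3://example-bucket")
print("App started; storage ready?", storage.is_ready())
```

Callers that need the storage can check `is_ready()` (or queue their requests) instead of assuming the connection existed at startup.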
Assume a usage-based model for cloud costs
A decade ago, it was common to provision dedicated servers to host the different components of your application. Each server had its own resources, such as memory and storage, to perform its functions. If there was exceptionally high traffic to your application, the only way to provide additional capacity was to buy more memory and storage or commission an entirely new server, a solid strategy for handling the peak. But what happens when traffic returns to normal? You still have the additional memory and storage, but it’s just sitting there idle.
What you can do today is lean into dynamic scaling, one of the key strengths of cloud native applications. When you build your application with cloud native best practices, its resources can scale up and down dynamically, so that you pay only for what you use. Traffic surge? No problem: your application scales out automatically to accommodate all the users. When the surge is over, it scales back in, and you end up paying for what you used instead of for idle memory in a conventional server. The pitfall arises when applications aren’t built to take advantage of the elasticity the cloud provides. Think about scaling down: if your application doesn’t need an upper threshold of resources when there isn’t much going on, why should you be paying for it?
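To see how the cost models differ, here is a small back-of-the-envelope calculation in Python. All the prices, hours and instance counts are made-up, illustrative numbers; substitute your own provider’s rates and your own traffic profile.

```python
# Hypothetical, illustrative prices -- check your provider's actual rates.
FIXED_SERVER_MONTHLY = 400.00       # always-on server sized for peak load
ON_DEMAND_PER_INSTANCE_HOUR = 0.10  # usage-based instance price

HOURS_PER_MONTH = 730
PEAK_HOURS = 60          # hours per month at peak traffic
PEAK_INSTANCES = 8       # instances needed during a surge
BASELINE_INSTANCES = 2   # instances needed the rest of the time

# Fixed provisioning: you pay for peak capacity all month long.
fixed_cost = FIXED_SERVER_MONTHLY

# Usage-based scaling: you pay peak prices only during the actual surge.
elastic_cost = ON_DEMAND_PER_INSTANCE_HOUR * (
    PEAK_HOURS * PEAK_INSTANCES
    + (HOURS_PER_MONTH - PEAK_HOURS) * BASELINE_INSTANCES
)

print(f"Fixed provisioning:  ${fixed_cost:.2f}/month")
print(f"Usage-based scaling: ${elastic_cost:.2f}/month")
```

Under these assumptions the elastic setup works out to roughly $182 a month against $400 for the always-on server, because you only pay for peak capacity during the hours you actually need it.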
Architect your data sources to be processed in smaller chunks
Data doesn’t take a back seat when you adopt cloud native architecture, and the way it is structured plays an important role in how your application services consume it. A common pitfall is continuing to consume data the same way as before the cloud native shift. This can lead to inefficiencies, such as reading entire data sets into memory, which takes more processing time and ties up valuable memory that could be used for other purposes. And as mentioned above, usage-based cloud cost models mean this inefficiency can also take a bite out of your budget.
To avoid this, consider breaking data into smaller chunks that can be processed independently, ideally in parallel (parallelisation). This allows your application to process data more efficiently, improving response times from services and providing faster service to customers. In other words, by changing the way your application consumes data from its sources, you can improve performance and deliver a better user experience.
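As a rough illustration, here is a Python sketch that streams a large file in fixed-size chunks and processes those chunks in parallel, instead of reading the whole data set into memory. The file name, chunk size and the body of `process_chunk` are placeholders for your own workload.

```python
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 64 * 1024  # 64 KiB per chunk; tune this to your workload


def read_in_chunks(path: str, chunk_size: int = CHUNK_SIZE):
    """Yield a large file one chunk at a time instead of loading it all."""
    with open(path, "rb") as handle:
        while chunk := handle.read(chunk_size):
            yield chunk


def process_chunk(chunk: bytes) -> int:
    # Stand-in for real work, e.g. parsing or transforming records.
    return len(chunk)


def process_file(path: str) -> int:
    # Chunks are independent, so they can be processed in parallel
    # without ever holding the full data set in memory.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(process_chunk, read_in_chunks(path)))


if __name__ == "__main__":
    total = process_file("large_dataset.bin")  # hypothetical input file
    print(f"Processed {total} bytes")
```

The same idea applies to database queries and API calls: page through results in bounded batches rather than pulling everything in one request.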
Even when your application uses cloud native architecture, there are still simple pitfalls that can hamper its performance and its ability to serve customers. By factoring in latency, not hard-coding dependencies, understanding your cloud provider’s usage-based cost model and standardising on smaller data chunks, you can ensure that your application takes full advantage of the cloud. Keep these pitfalls in mind when you plan your move to the cloud, and you won’t stumble over them.
If you run into any problems of your own and would like some assistance, get in touch with us.