Building an application in a cloud native way means that you derive the full benefit the cloud provides, even if your application is running in your on-premises data centre. In the first part of this two-part blog post, I will expand on some of the benefits that your application, infrastructure, business and users will experience when you architect cloud natively.
Keep in mind that these benefits only apply if the following assumptions hold:
- Every component in the environment is generated through Infrastructure-as-code and/or config-as-code;
- Applications are built to be immutable;
- Applications are designed to run within a pay-as-you-use cloud-pricing environment and so embrace elasticity (scale on demand).
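These assumptions can be made concrete with a small sketch. The toy reconcile step below (all service names, images and fields are invented) shows the desired-state idea behind infrastructure- and config-as-code, including replacing rather than patching an immutable deployment:

```python
# Illustrative sketch only: a toy "desired state" declaration and a
# reconcile step, mimicking how infrastructure-as-code tooling converges
# an environment on its declared configuration. Names are hypothetical.

desired_state = {
    "web": {"image": "shop-frontend:1.4.2", "replicas": 3},
    "api": {"image": "shop-api:2.0.1", "replicas": 2},
}

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        current = actual.get(name)
        if current is None:
            actions.append(("create", name, spec["replicas"]))
        elif current["image"] != spec["image"]:
            # Immutable deployment: replace the instance, never patch it.
            actions.append(("replace", name, spec["replicas"]))
        elif current["replicas"] != spec["replicas"]:
            actions.append(("scale", name, spec["replicas"]))
    return actions

actual_state = {"web": {"image": "shop-frontend:1.4.1", "replicas": 3}}
print(reconcile(desired_state, actual_state))
```

Real tools such as Terraform or the Kubernetes controllers follow the same pattern at scale: you declare the end state, and the tooling works out the actions.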
Your application can run wherever you want it to
Cloud native applications are made up of microservices that each run in containers, which can easily be moved between container orchestrators like Kubernetes, Red Hat OpenShift or VMware Tanzu. In turn, the orchestrators can run on top of any public cloud platform, across multiple cloud platforms in a hybrid or multi-cloud configuration, or in your own data centre. It all depends on your applications’ needs and your own preferences.
When the application misbehaves, you will no longer need to troubleshoot differences between environments, such as a developer’s laptop and the test environment. The container behaves the same in every environment because the configuration and infrastructure are defined as code at deployment time.
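To sketch why this holds, assume a deployment configuration declared as code (the image name and settings below are made up). The container launch is derived from the configuration alone, so every environment that applies it starts an identical container:

```python
# Illustrative sketch: because the runtime configuration is declared as
# code, the container launch is identical on a laptop, a CI runner and
# production. The image name and settings below are invented.

deploy_config = {
    "image": "shop-api:2.0.1",
    "env": {"DB_HOST": "db.internal", "LOG_LEVEL": "info"},
    "port": 8080,
}

def run_command(config: dict) -> str:
    """Build a docker-style run command purely from the declared config."""
    env_flags = " ".join(f"-e {k}={v}" for k, v in config["env"].items())
    return (f"docker run {env_flags} "
            f"-p {config['port']}:{config['port']} {config['image']}")

# Same config in, same command out -- on any host.
print(run_command(deploy_config))
```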
Pull existing services directly from cloud vendors instead of creating your own
When your application needs a specific component to function, you can leverage existing vendor-built services available in your cloud environment. Say, for example, your application requires a storage layer. Instead of spinning up and configuring your own, you can simply consume a storage service from the cloud provider’s catalogue. That means less work on your side, and your application will be using services that are vendor-approved and supported.
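A minimal sketch of the idea, using a hypothetical `ManagedObjectStore` class as a stand-in for a provider’s real storage SDK:

```python
# Illustrative sketch: consuming a vendor-managed storage service through
# a thin client instead of running your own storage layer. The
# `ManagedObjectStore` class is hypothetical; in practice this role is
# played by the cloud provider's storage SDK.

class ManagedObjectStore:
    """Stand-in for a provider-managed object storage service."""

    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

store = ManagedObjectStore()  # no servers to provision, patch or back up
store.put("invoices/2024-01.pdf", b"%PDF-...")
print(store.get("invoices/2024-01.pdf"))
```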
Scaling up your application doesn’t necessarily mean more hardware costs
Traditionally, applications hosted in a data centre required more physical hardware and rack space to scale. With a hybrid model and cloud native architecture, the application can scale out onto cloud computing resources at much lower cost, and with far shorter lead times, than provisioning physical hardware. Dynamic scaling also means that when you don’t use the additional resources, you don’t pay for them.
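A back-of-the-envelope comparison illustrates the point. The hourly rate and demand profile below are invented; the shape of the saving is what matters:

```python
# Illustrative arithmetic only: comparing a fixed fleet sized for peak
# load with pay-as-you-use autoscaling over one day. The price and the
# instance counts are invented for the example.

HOURLY_RATE = 0.10  # cost per instance-hour (hypothetical)

# Instances needed for each hour of the day: quiet overnight, midday peak.
demand = [2] * 8 + [6] * 4 + [10] * 4 + [6] * 4 + [2] * 4

# Fixed provisioning must carry the peak all day; autoscaling pays
# only for the instance-hours actually consumed.
peak_provisioned_cost = max(demand) * len(demand) * HOURLY_RATE
autoscaled_cost = sum(demand) * HOURLY_RATE

print(f"fixed fleet sized for peak: ${peak_provisioned_cost:.2f}/day")
print(f"scale on demand:            ${autoscaled_cost:.2f}/day")
```

With these made-up numbers the autoscaled fleet costs less than half as much, and the gap grows the spikier the demand curve is.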
Fault tolerance and self-healing
Cloud native applications are built to be fault-tolerant, meaning they assume the underlying infrastructure is unreliable. This minimises the impact of failures so that the entire application doesn’t fall over when one component becomes unresponsive. You can also run more than one instance of an application at a time, so that it remains available to customers while tolerating outages.
Containers (and therefore containerised applications) can also self-heal. If a component becomes unresponsive or suffers an issue, a replacement can be started and the old one stopped, without someone having to fix the problem before bringing the application back up.
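The self-healing behaviour can be sketched as a simple control loop; instance names and the health flag below are invented for illustration:

```python
# Illustrative sketch of self-healing: a control loop that replaces
# unresponsive instances instead of waiting for a human to repair them.
# Instance names and the "healthy" flag are invented for the example.

def heal(instances: list) -> list:
    """Keep healthy instances; start a replacement for each unhealthy one."""
    healed = []
    for inst in instances:
        if inst["healthy"]:
            healed.append(inst)
        else:
            # Start a fresh instance and discard the faulty one;
            # no operator intervention is required.
            healed.append({"id": f"new-{inst['id']}", "healthy": True})
    return healed

fleet = [{"id": "web-1", "healthy": True}, {"id": "web-2", "healthy": False}]
print([inst["id"] for inst in heal(fleet)])  # fleet stays at full strength
```

An orchestrator such as Kubernetes runs this kind of loop continuously, comparing the running instances against the declared desired count and replacing any that fail their health checks.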
To summarise: your containerised applications and services can run wherever you need them to, with homogenised environments across development, testing and production. They can heal themselves and are built for failure, so less time is spent fixing and restoring your application. You can also make full use of the cloud, both for scaling resources on demand and for consuming pre-built cloud services to complete your application.
In part 2, I will look at more benefits of architecting your application in a cloud native way.