What is cloud native?
Let’s start with a definition. According to the Cloud Native Computing Foundation (CNCF), ‘cloud native’ can be defined as “technologies that empower organisations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds”. Essentially, it describes technology that is purpose-built to take full advantage of the cloud’s scalability and reliability, and is “resilient, manageable and observable”.
Even though the word “cloud” features heavily in the explanation, it doesn’t mean the application has to operate exclusively in the cloud. Cloud native applications can also run in your own data centre or server room: the term simply refers to how the application is built (to make use of the cloud’s advantages) and doesn’t pre-determine where it should run (the cloud).
How did it start?
While virtualization and microservices have been around for decades, they didn’t really become popular until 2015, when businesses were adopting Docker for containerisation because of its ability to easily run computing workloads in the cloud. Google open-sourced its container orchestration tool Kubernetes around that same time, and it soon became the tool of choice for everyone using microservices. Fast forward to today and there are various flavours of Kubernetes available, both community and enterprise options.
How does it work?
As this piece has explained, cloud native means being able to run and scale an application in a modern, dynamic environment. For most applications today, that is simply not possible because they are monolithic in nature: the entire application comes from a single code base, with all of its features bundled into one app and one set of code. Such an application needs to know which server it is on, where its database is, where it sends its outputs and which sources it expects inputs from. Taking an application like that out of a data centre and placing it in the cloud doesn’t really work as expected. It can be made to work, but it isn’t pretty, it costs a lot of money and it won’t get the full benefit of the cloud.
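To make that concrete, here is a minimal sketch (in Go, with made-up names such as DB_URL and PORT) of the kind of change this implies: rather than hard-coding where its server and database live, a cloud-ready service reads those details from its environment, so the platform can decide where it runs.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// getenv returns the value of an environment variable, or a fallback
// if it is not set. The variable names used below are illustrative,
// not a standard.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	// A monolith often hard-codes these; a cloud-ready service takes
	// them from the environment, so the platform decides where it runs.
	dbURL := getenv("DB_URL", "postgres://localhost:5432/app")
	port := getenv("PORT", "8080")

	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	log.Printf("starting on :%s, database at %s", port, dbURL)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```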
This is not true of every monolithic application, but the ideal is to move toward microservices. In a microservices architecture, each important component of the application has its own code base. Take Netflix, for example: one service handles profiles, another handles user accounts, another handles billing, another lists television shows and movies, and so on. The end result is thousands of these services, all communicating with each other through APIs (Application Programming Interfaces). Each service takes a defined input and produces an output, so if the accounts service needs to run a payment, it sends the user code and the amount to the payment service. The payment service receives the request, checks the banking details with the user data service, processes the payment and sends a completed or failed status back to the accounts service. It also means a small team can be dedicated to a single service, ensuring it functions properly.
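As a rough sketch of that request/response contract, the hypothetical payment service below accepts a user code and an amount over HTTP and returns a completed or failed status. The endpoint, field names and port are illustrative, not taken from any real system.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// PaymentRequest is the input the accounts service would send.
// The field names here are purely illustrative.
type PaymentRequest struct {
	UserCode string  `json:"user_code"`
	Amount   float64 `json:"amount"`
}

// PaymentResponse is the output sent back to the caller.
type PaymentResponse struct {
	Status string `json:"status"` // "completed" or "failed"
}

func handlePayment(w http.ResponseWriter, r *http.Request) {
	var req PaymentRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	// In a real system this is where the service would check banking
	// details with a user data service and call a payment provider.
	// Here we simply accept any positive amount.
	status := "completed"
	if req.Amount <= 0 {
		status = "failed"
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(PaymentResponse{Status: status})
}

func main() {
	http.HandleFunc("/payments", handlePayment)
	log.Fatal(http.ListenAndServe(":8081", nil))
}
```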
Now, moving a set of services like this to the cloud is fairly simple: they usually hold no state (so they can be killed and restarted at will) and they don’t manage their own storage, so it doesn’t matter where they are started.
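The sketch below illustrates what “no state” looks like in practice: the service keeps nothing in memory between requests and shuts down cleanly when the platform signals it, so any instance can be killed and a replacement started somewhere else. The handler and port are, again, purely illustrative.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()

	// A stateless handler: everything it needs arrives with the request,
	// so it does not matter which instance (or which machine) serves it.
	mux.HandleFunc("/greet", func(w http.ResponseWriter, r *http.Request) {
		name := r.URL.Query().Get("name")
		if name == "" {
			name = "world"
		}
		fmt.Fprintf(w, "hello, %s\n", name)
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	// Stop cleanly on SIGTERM, which is how an orchestrator such as
	// Kubernetes asks a container to shut down before replacing it.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	<-stop
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	srv.Shutdown(ctx)
}
```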
Where is it going?
The latest cloud native survey by the CNCF suggests that 96% of organisations are either evaluating Kubernetes, experimenting with it, or have already implemented it. Over 5.6 million developers worldwide are using Kubernetes, representing 31% of current backend developers. The survey also suggests that cloud native computing will continue to grow, with enterprises even adopting less mature cloud native projects to solve complicated problems.
In future posts, we will discuss application modernisation in more detail and explain how businesses are growing and thriving with this new paradigm.