Sounding like the next enterprise technology buzzwords, the terms ‘containerisation’ and ‘Kubernetes’ feature in many of today’s business meetings about technology platforms. Although containerisation has been around for a while, Kubernetes itself has only been on the scene for just over half a decade. Before diving into those concepts, we first need a brief history of cloud native to understand how we got to where we are today.
The past
When computing first became critical to businesses, they started out with mainframes: huge, expensive servers engineered for near-100% uptime. Their computing power was incredible and has continued to grow over the past 70+ years. In the 80s and 90s, a move to commodity hardware took place, away from bulky proprietary mainframe servers.
Commodity servers are based on the principle that lots of relatively cheap, standardised servers running in parallel perform the same tasks as a mainframe. If one failed, it was quick and easy to replace. The model achieved resilience through redundancy at scale, rather than the mainframe approach of a small number of servers with high redundancy built in. As a result, data centres started growing exponentially to make space for all the servers.
The other problem that appeared with commodity hardware is that it lacked the features needed to isolate individual applications. This meant that either multiple applications would share a single server, or each application would require a dedicated server. Another solution was needed, which is where enterprise x86 server virtualisation comes in. The ability to run multiple virtual servers on a single physical server changed the game yet again, with companies like VMware dominating the market and growing at a remarkable pace. Each virtual server behaved like a regular computer: attached to a network with its own network interface(s) and its own storage block, and running its own operating system along with anti-virus, application libraries, application code and any other supporting software.
The problem soon became clear: the resources – disk space, memory and licences – consumed by running multiple instances of the same operating system across all of these virtual computers, together with all the installed components, added up and were essentially wasted.
This is where containers shine
These problems are what containers are designed to solve. Instead of providing isolation between workloads by virtualising an entire server, containerisation isolates the application code and its supporting libraries from other workloads while sharing the host operating system.
Let’s look at containers from two viewpoints to understand them further: a development angle and an infrastructure & operations angle.
Infrastructure & Operations
From an infrastructure & operations perspective, IT teams had their hands full managing all of these virtual servers, each with a host of components to maintain: an operating system that needed patching and upgrading, anti-virus, and security tools that needed to be managed (usually agents installed to monitor the server and the application). Beyond that, there were also the application’s individual components needed to make it function. Usually, a developer would finish their code and send it to IT. IT would copy it to the development server and start testing to see if it was fit for the production environment. Was the development server the same as the developer’s laptop? Chances are it was not, which meant the code would need fixing for the differences between the two environments. Next, it moved to User Acceptance Testing (UAT), System Integration Testing (SIT) or Quality Assurance (QA) – all of which also had to be identical to the servers before them. Finally, it would reach the production environment, which again needed to be identical to the previous environments to function correctly. The IT team spent enormous amounts of time fixing, debugging and keeping track of each server, and making sure there was a repeatable process to do it all over again. Containers solved this problem by enabling code to run the same regardless of the environment – on a developer’s laptop, a server or in the cloud. The team needed only to ensure that the container could reach its network and storage locations, making it far easier to manage than before.
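To make that concrete, here is a minimal sketch of what such a container looks like to Kubernetes. The application name, registry and image here are hypothetical; the point is that the same manifest, referencing the same image, can be applied unchanged to a local cluster on a laptop, a server in a data centre or a managed cloud cluster.

```yaml
# Minimal Pod manifest. The image reference is hypothetical; the
# image is built once and the same manifest then runs unchanged
# on a laptop, a server or in the cloud.
apiVersion: v1
kind: Pod
metadata:
  name: invoice-api
spec:
  containers:
    - name: invoice-api
      image: registry.example.com/finance/invoice-api:1.4.2  # built once, run anywhere
      ports:
        - containerPort: 8080
```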
Development
Looking at it from a development perspective, the idea with containers and cloud native is that an application is broken down and refactored into microservices. This means that instead of one single, monolithic code base, each part of the application operates on its own, fetching the data it needs to perform its programmed function. An example of this is Uber: its application consists of hundreds of services, each with its own function. One service maps out the customer’s area, the next looks for drivers, the next compares drivers for the best fit for the trip, one maps the route, one calculates the cost of the route, and so on. Running each service in its own virtual machine would be nearly impossible, but running each of these services in its own container is a completely different story. The containers run on a container orchestration engine (like Kubernetes), which handles load, network traffic, ingress, egress, storage and more on the server platform.
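As an illustration (the service name, image and ports are invented for this sketch), one such microservice might be described to Kubernetes with a Deployment, which keeps a set number of copies of the container running, and a Service, which gives the other microservices a stable network address to call it on:

```yaml
# Hypothetical 'pricing' microservice. The Deployment runs three
# copies of the container; the Service gives the rest of the
# application a stable address (pricing:80) to reach it on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pricing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pricing
  template:
    metadata:
      labels:
        app: pricing
    spec:
      containers:
        - name: pricing
          image: registry.example.com/rides/pricing:2.0.1  # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: pricing
spec:
  selector:
    app: pricing
  ports:
    - port: 80
      targetPort: 8080
```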
That doesn’t mean monolithic applications can’t run in a container. In fact, many companies follow that route, as it still provides some of the cloud native benefits without the time-consuming task of completely rewriting an application.
The next point to consider is how all of these containers are going to be managed. The progression started with a handful of mainframes, then dozens of servers, then hundreds of virtual machines, and now thousands of containers.
Enter Kubernetes
As discussed in a previous blog post, Kubernetes has its roots in an internal Google project called “Borg”; Google drew on its lessons when it open-sourced Kubernetes in 2014. There have been many other container orchestration engines, such as Cattle and Mesos, but in the end Kubernetes won the race and is now the standard for managing containerised applications.
Kubernetes (also known as K8s) is an open-source system for automating the deployment, scaling and management of containerised applications. The whole idea is that Kubernetes can run on a laptop, on a server or in the cloud, which means a service or application deployed in a container can move between those environments without problems. Kubernetes has some amazing features that make it a powerful orchestration tool, including:
- Horizontal scaling – additional containers are created within seconds to keep up with demand, so a website or application can absorb spikes in user traffic. Additional compute nodes can also be added (or removed) as workload demand changes.
- Automated rollouts and rollbacks – new features of an application can be tested on, say, 20% of users, slowly rolling out to more as they prove safe. If a problem appears, the container can simply be rolled back to the previous version that worked, with minimal interruption.
- Self-healing – failing containers are restarted, containers are replaced or rescheduled when nodes die, and containers that stop responding are killed. It all happens automatically (the sketch after this list shows scaling, rollout and self-healing configuration together).
- Multi-architecture – Kubernetes can manage clusters of both x86 and Arm-based machines.
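To give a feel for how these features are expressed, here is a hedged sketch in which the application name, image and thresholds are all illustrative: the Deployment’s strategy controls how new versions are rolled out (and can be reverted with kubectl rollout undo), the liveness probe drives self-healing, and the HorizontalPodAutoscaler adds or removes container replicas based on CPU load.

```yaml
# Illustrative only: rolling updates, self-healing and horizontal
# scaling, configured declaratively for a hypothetical 'web' app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace pods gradually during a rollout;
      maxSurge: 1         # revert with 'kubectl rollout undo deployment/web'
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/shop/web:3.2.0  # hypothetical image
          ports:
            - containerPort: 8080
          livenessProbe:            # self-healing: restart the container
            httpGet:                # if this endpoint stops responding
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 4
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```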
Kubernetes is what allows companies like Netflix, Booking.com and Uber to handle customers at a scale of millions, and gives them the ability to release new versions and code daily.
What about serverless? ‘Serverless’ does not literally mean “without a server” – there are still servers involved, often managed by a container platform such as Kubernetes. Simply put, it means that a cloud provider makes resources available on demand when needed, instead of allocating them permanently. It is an event-driven model and will be discussed in more detail in a later blog post.
In future posts, application modernisation will be discussed, along with why it is so important, using real-world examples from actual teams. This will show how businesses that adopt containers, DevOps and cloud native are moving ahead of their competitors at an exponential rate.