Tag: application modernisation

Watch: LSD, VMware and Axiz talk Tanzu at the Prison Break Market


Posted on January 6, 2023 by Charl Barkhuizen
INSIGHTS

Charl Barkhuizen, Marketing Plug-in

I'm the marketing plug-in and resident golden retriever at LSD Open. You can find me making a lot of noise about how cool Cloud Native is or catch me at a Tech & Tie-dye Meetup event!

What are Kubernetes and Containerisation?


Posted on October 6, 2022 by Deon Stroebel
INSIGHTS

Sounding like the next enterprise technology buzzwords, the terms ‘containerisation’ and ‘Kubernetes’ feature in many of today’s business meetings about technology platforms. Although containerisation has been around for a while, Kubernetes itself has only been on the scene for just over half a decade. Before looking at those concepts, we first need to look at the history of cloud native to understand how we got to where we are today.

The past

In the past, when computing became critical to businesses, they started out with mainframe servers: huge, expensive machines built for almost 100% uptime. Their computing power was incredible, and over the past 70+ years it has only grown. In the 80s and 90s, a move took place from bulky proprietary mainframe servers to commodity hardware.

Commodity servers are based on the principle that lots of relatively cheap, standardised servers running in parallel can perform the same tasks as a mainframe. If one fails, it is quick and easy to replace. The model works by having redundancy at scale, whereas the mainframe approach relied on a small number of servers with high redundancy built in. This meant that data centres started growing exponentially to make space for all the servers.

The other problem that appeared with commodity hardware is that it lacked the features needed to isolate individual applications. Multiple applications either had to share a single server or each needed a dedicated server. Another solution was required, which is where enterprise x86 server virtualisation comes in. The ability to run multiple virtual servers on a single physical server changed the game yet again, with companies like VMware dominating the market and growing at an unimaginable scale. Each virtual server behaves just like a regular computer: it is attached to a network, has its own storage block and network interface(s), and runs its own operating system with anti-virus, application libraries, application code and any other supporting software installed on it.

The problem soon became clear: the resources (disk space, memory and licences) used to run multiple instances of the same operating system, together with all of the installed components, across all of these virtual computers added up and were essentially wasted.

This is where containers shine

These are the problems containers were designed to solve. Instead of providing isolation between workloads by virtualising an entire server, containerisation isolates just the application code and its supporting libraries from other workloads.

Let’s create two viewpoints to understand containers even further: a development angle and an infrastructure & operations angle.

Diagram of application servers as they progressed from physical servers to virtual machines to containers

Infrastructure & Operations

From an infrastructure & operations perspective, IT teams had their hands full managing all of these virtual servers, each with a host of components to maintain: an operating system that needed patching and upgrading, anti-virus, and security tools to manage (usually agents installed to monitor the server and the application). Beyond that, there were also the application’s individual components needed to make it function.

Typically, a developer would finish their code and send it to IT. IT would copy it to the development server and start testing to see whether it was fit for the production environment. Was the development server identical to the developer’s laptop? Probably not, which meant the code would need fixing to account for the differences between the two environments. Next, it moved through User Acceptance Testing (UAT), System Integration Testing (SIT) or Quality Assurance (QA), all of which also had to be identical to the servers before them. Finally, it reached the production environment, which again needed to be identical to the previous environments to function correctly. The IT team spent enormous amounts of time fixing, debugging and keeping track of each server, and making sure there was a repeatable process to do it all over again.

Containers solved this problem by enabling code to run the same regardless of the environment, whether on a developer’s laptop, a server or in the cloud. The team needed only to ensure that the container could reach its network and storage locations, making it far easier to manage compared to the previous problem.
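The “runs the same everywhere” property comes from packaging the application together with its libraries in a container image. As a minimal sketch (the base image, file names and start command here are hypothetical, not from a specific project), a container image definition might look like this:

```dockerfile
# Base image pins the OS and runtime libraries the application depends on
FROM python:3.11-slim

# Copy the application code and its dependency list into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# The same start command runs identically on a laptop, a server or in the cloud
CMD ["python", "app.py"]
```

Building this once produces an image that behaves the same wherever a container runtime is available, which removes the environment-drift problem the IT team used to chase from development through UAT to production.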

Development

Looking at it from a development perspective, the idea with containers and cloud native is that an application is broken down and refactored into microservices. Instead of one single, monolithic code base, each part of the application operates on its own, consuming the data it needs and producing its programmed output. An example of this is Uber: its application consists of hundreds of services, each with its own function. One service maps out the customer’s area, the next looks for drivers, the next compares drivers to find the best fit for the trip, one maps the route, one calculates the cost of the route, and so on. Running each service in its own virtual machine would be nearly impossible, but running each of these services in its own container is a completely different story. The containers run on a container orchestration engine (like Kubernetes), which handles the load, network traffic, ingress, egress, storage and more on the server platform.

That doesn’t mean monolithic applications can’t run in a container. In fact, many companies follow that route as it still provides some of the cloud native benefits, without having to undertake the time-consuming task of completely rewriting an application.

The next point to consider is how all of these containers are going to be managed. The progression started with a handful of mainframes, then a dozen servers, to hundreds of virtual machines, to thousands of containers.

Enter Kubernetes

As discussed in a previous blog post, Kubernetes grew out of an internal Google project called “Borg”, which inspired the open-source Kubernetes project released in 2014. There have been many other container orchestration engines, like Cattle, Mesos and more, but in the end Kubernetes won the race, and it is now the standard for managing containerised applications.

Kubernetes (also known as K8s) is an open-source system for automating the deployment, scaling, and management of containerised applications. The whole idea is that Kubernetes can run on a laptop, on a server or in the cloud, which means that it can deploy a service or application in a container and move it to the next environment without experiencing a problem. Kubernetes has some amazing features that make it a powerful orchestration tool, including:

  • Horizontal scaling – new containers are created in seconds to keep up with demand, so a website or application will not go down because of user traffic, and additional compute nodes can be added (or removed) according to workload demand.
  • Automated rollouts and rollbacks – new features of an application can be tested on, say, 20% of users and slowly rolled out to more as they prove safe. If a problem appears, the container can simply be rolled back to the previous version that worked, with minimal interruption.
  • Self-healing – Kubernetes restarts failing containers, replaces or reschedules containers when nodes die, and kills containers that stop responding. It all happens automatically.
  • Multi-architecture – Kubernetes can manage both x86- and ARM-based clusters.
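These features are expressed declaratively: you describe the desired state, and Kubernetes makes the cluster match it. As an illustrative sketch (the application name, labels and image below are hypothetical), a Deployment manifest covering horizontal scaling, rolling updates and self-healing might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical application name
spec:
  replicas: 3                # horizontal scaling: run three identical containers
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate      # automated rollout: replace containers gradually
    rollingUpdate:
      maxUnavailable: 1      # keep most replicas serving traffic while updating
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # hypothetical container image
          livenessProbe:               # self-healing: restart on failed checks
            httpGet:
              path: /healthz
              port: 8080
```

If a new version misbehaves, `kubectl rollout undo deployment/web` returns the Deployment to its previous working revision.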

Kubernetes is what allows companies like Netflix, Booking.com and Uber to handle customers at a scale of millions, and it gives them the ability to release new versions and code daily.

What about serverless? ‘Serverless’ does not literally mean “without a server”, and it is often built on top of Kubernetes. Simply put, it means that a cloud provider makes resources available on demand when needed, instead of allocating them permanently. It is an event-driven model and will be discussed in more detail in a later blog post.

In future posts, application modernisation will be discussed, and why it is so important, using real-world examples from actual teams. These will show how businesses that adopt containers, DevOps and cloud native are moving ahead of their competitors at an exponential rate.

Deon Stroebel

As Head of Solutions for LSD, I build modernisation strategies with my clients, ensuring we deliver solutions that meet their needs and accelerate them towards a modern application future. With seven years of experience in Kubernetes and cloud native, I understand the business impact, limitations and stumbling blocks faced by organisations. I love all things tech, especially cloud native and Kubernetes, and I really enjoy helping customers realise business benefits and outcomes through great technology. I spent a year living in Portugal, where I learned the value of clear communication and how to present difficult topics clearly. This has helped me articulate complex problems to clients so they can see why I am able to help them move their organisations forward through innovation and transformation. My industry knowledge is focused on cloud native and Kubernetes, with vendors such as VMware Tanzu, Red Hat OpenShift, Rancher, AWS, Elastic and Kafka.
