Tag: Deon Stroebel

LSD x Red Hat: Scaling to Meet Customer Demand and Security with Red Hat OpenShift Platform Plus

Posted on December 1, 2022 by Charl Barkhuizen
INSIGHTS

Charl Barkhuizen, Marketing Plug-in

I'm the marketing plug-in and resident golden retriever at LSD Open. You can find me making a lot of noise about how cool Cloud Native is or catch me at a Tech & Tie-dye Meetup event!

What is best for my business – doing Kubernetes ourselves or a managed service?

Posted on November 25, 2022 by Deon Stroebel
INSIGHTS

Kubernetes and Cloud Native are two of the hottest topics in IT at the moment. From the CIO to the development teams, people are looking at Kubernetes and application modernisation to accelerate business innovation and reduce time to market for new products and features. But how do these platforms get managed? Do you need to do it in-house using your own talent, or would the better option be to find a managed service provider to do it for you?

Cloud Native is a much broader topic than just Kubernetes, and both are extremely complex new technologies. Let’s look at Kubernetes first as the foundation and work our way to cloud native.

So many options to choose from

We’ve already covered what Kubernetes is and where it comes from in an earlier blog post. Companies are modernising their applications to run in a cloud native way so they can provide digital services to their customers wherever they are (I won’t go into much detail on the modernisation process, but you can read up on containers and microservices in an article published here by my colleague Andrew McIver). These modernised applications are refactored to run in containers, which are typically orchestrated with Kubernetes, on infrastructure that can be on-premises, in the cloud, or both. Kubernetes is a popular choice as the orchestrator because of what it enables, yes – but also because it is an open source project with many different distributions catering to the different preferences of the organisations that use it.

Vendors have built their own Kubernetes solutions, with Red Hat OpenShift, VMware Tanzu and SUSE Rancher being some of the top distributions adopted globally. Public cloud vendors have also come up with their own flavours of Kubernetes: EKS from AWS, AKS from Azure, and GKE or Anthos from Google. Then there are the open source projects, like OpenShift Community Edition (OKD), Rancher and Kubernetes itself, on which all of these editions are based.

Once your organisation has decided that containers and Kubernetes are right for you, how do you get started, and what is best for the business? Do you build and manage Kubernetes yourself, or do you partner with a service provider to manage it for you? I think we can all agree that there is no one answer to suit everyone: not only each company, but each business unit in a large organisation, will have different requirements. In general, though, we can look at the pros and cons and hopefully help you make your decision.

I speak to many companies every day about this topic, and I can completely understand why this is an important consideration. Organisations want to make sure they empower their employees, reduce costs and reduce the reliance on outsourcing.

Start with the Business Goals

Firstly, we need to understand the business strategy, objectives and goals. We want to answer a few questions, which will really help us determine the best course of action:

  1. What is the urgency? Do we need this platform immediately, or can we take 12 to 18 months? (Unfortunately, that is typically how long it takes even with a high-performing Linux team.)
  2. What kind of skills do we have internally? And what is their availability like with everything else on their plates?
  3. Are we looking into public cloud? What is our timeframe for including public cloud in our infrastructure? And are we going with a hybrid or multi-cloud configuration?
  4. Do we have good automation, DevSecOps, Infrastructure-as-Code patterns and principles in our organisation?

Once you have answers to these questions, you can start to make sense of what the best options are. I won’t go into detail for each of them, but the important considerations are around urgency and skill.

Urgency and skill

If you are looking to move fast, getting a skilled Kubernetes-focused company to assist in the deployment and management of the platform makes a lot of sense. It removes the burden and worry from the organisation and gives your team the time to learn essential cloud native skills.

There is a major skills shortage at the moment, and it is very difficult to find people with not only the skills but also the experience to manage Kubernetes. Building a cluster can usually be done by following vendor documentation, but managing and supporting Kubernetes is complex, and it is one of the things we get contacted about most by companies that aren’t LSD customers yet.

Doing it yourself

Let’s look at the benefits of the DIY approach:

  • Your team will grow and learn a new technology
  • You do not need to rely on outside skills to ensure business continuity
  • It might be more cost-effective to use existing skills
  • Your employees get opportunities for internal growth

As for drawbacks of the DIY approach, let’s consider the following:

  • Upskilling people can be a slow process
  • As mentioned above, there is a big skills gap: finding people is difficult, and keeping them is even harder
  • When something goes wrong, your team is responsible for fixing it, and may not yet be able to

Using a Managed Service

A managed service that is done correctly should give your business the following:

  • A platform deployed faster and to the vendor’s standards
  • All the skills needed for the job: not just a certification, but actual experience building these platforms (too many companies get paid to learn by their clients)
  • 24/7 support, because even if you have some skills in-house, expecting people to work around the clock is not sustainable, and they will leave
  • Removal of the key-person dependency that so many companies are plagued by
  • Better economics: once you count the cost of time, certifications, losing talent and rehiring, the managed service route often works out cheaper
  • Internal resources freed up to focus on what they do best, especially developers
  • A platform built to the vendor’s open standards, in such a way that your own team can take it over once they have the skills

The drawbacks of using a managed service are:

  • Your business relies on external skills
  • The perceived threat that your team will be replaced can be difficult to navigate and alleviate

One of the things I feel very strongly about is that we do not want to replace people. We want to grow and empower people. The goal with a managed service should never be to replace staff; it should be to give the business the best chance of success and to get up and running fast, while giving staff the time to learn and grow. It essentially becomes another tool in their toolbox.

I also want to add that services like EKS, AKS and GKE from the hyperscalers still require a lot of management and support. The management they provide is not enough, which is why I include managing those nodes in a managed service too.

Hopefully, you now have a better understanding of how a managed service weighs up against doing it yourself. There are benefits and drawbacks to both approaches, and how well each one performs depends on your unique scenario.

Deon Stroebel

As Head of Solutions for LSD, I build modernisation strategies with my clients, ensuring we deliver solutions that meet their needs and accelerate them towards a modern application future. With seven years of experience in Kubernetes and Cloud Native, I understand the business impact, limitations and stumbling blocks organisations face. I love all things tech, especially cloud native and Kubernetes, and really enjoy helping customers realise business benefits and outcomes through great technology. I spent a year living in Portugal, where I really learned about clear communication and presenting difficult topics clearly; this has helped me articulate complex problems to clients and show them how innovation and transformation can move their organisations forward. My industry knowledge is focused on Cloud Native and Kubernetes, with vendors such as VMware Tanzu, Red Hat OpenShift, Rancher, AWS, Elastic and Kafka.

LSD x Red Hat: Helping Businesses to Innovate at Speed

Posted on October 21, 2022 by Charl Barkhuizen
INSIGHTS


What are Kubernetes and Containerisation?

Posted on October 6, 2022 by Deon Stroebel
INSIGHTS

Sounding like the next enterprise technology buzzwords, the terms ‘containerisation’ and ‘Kubernetes’ feature in many of today’s business meetings about technology platforms. Although containerisation has been around for a while, Kubernetes itself has only been on the scene for just over half a decade. Before looking at those concepts, we first need to look at the history of cloud native to understand how we got to where we are today.

The past

When computing first became critical to businesses, they started out with mainframes: huge, expensive servers designed for almost 100% uptime. Their computing power was incredible, and over the past 70+ years it has only grown. In the 80s and 90s, a move took place from bulky proprietary mainframes to commodity hardware.

Commodity computing is based on the principle that lots of relatively cheap, standardised servers running in parallel can perform the same tasks as a mainframe. If one failed, it was quick and easy to replace. The model worked by having redundancy at scale, rather than the mainframe’s small number of servers with high redundancy built in. It also meant that data centres started growing exponentially to make space for all the servers.

The other problem with commodity hardware was that it lacked the features needed to isolate individual applications: either multiple applications shared a single server, or each application needed a server of its own. Another solution was required, which is where enterprise x86 server virtualisation comes in. The ability to run multiple virtual servers on a single physical server changed the game yet again, with companies like VMware dominating the market and growing at an unimaginable scale. Each virtual server behaved like a regular computer: attached to the network, with its own storage, network interface(s), operating system, anti-virus, application libraries, application code and any other supporting software.

The problem soon became clear: the disk space, memory and licences consumed by running multiple instances of the same operating system across all of these virtual machines, together with all the duplicated installed components, added up and were essentially wasted.

This is where containers shine

These problems are what containers are designed to solve. Instead of providing isolation between workloads by virtualising an entire server, containerisation isolates just the application code and its supporting libraries from other workloads.

Let’s look at containers from two viewpoints to understand them further: an infrastructure & operations angle and a development angle.

[Diagram: application servers progressing from physical servers to virtual machines to containers]

Infrastructure & Operations

From an infrastructure & operations perspective, IT teams had their hands full managing all of these virtual servers, each with a host of components to maintain: an operating system that needed patching and upgrading, anti-virus, security tools to manage (usually agents installed to monitor the server and the application), plus the application’s own components. Typically, a developer would finish their code and send it to IT, who would copy it to the development server and start testing whether it was fit for production. Was the development server identical to the developer’s laptop? Chances are it was not, which meant the code would need fixing for the differences between the two environments. Next it moved to User Acceptance Testing (UAT), System Integration Testing (SIT) or Quality Assurance (QA), all of which also had to be identical to the servers before them. Finally, it reached the production environment, which again needed to be identical to the previous environments to function correctly. The IT team spent enormous amounts of time fixing, debugging and keeping track of each server, and making sure there was a repeatable process to do it all over again.

Containers solved this problem by enabling code to run the same regardless of the environment – on a developer’s laptop, a server or in the cloud. The team needed only to ensure that the container could reach its network and storage locations, which is far easier to manage.

Development

Looking at it from a development perspective, the idea with containers and cloud native is that an application is broken down and refactored into microservices. Instead of one single, monolithic code base, each part of the application operates on its own, taking in the data it needs and outputting the result of its programmed function. Uber is a good example: its application consists of hundreds of services, each with its own job. One service maps out the customer’s area, the next looks for drivers, the next compares drivers to find the best fit for the trip, another maps the route, another calculates the cost of the route, and so on. Running each service in its own virtual machine would be near impossible, but running each of them in its own container is a completely different story. The containers run on a container orchestration engine (like Kubernetes), which handles the load, network traffic, ingress, egress, storage and more on the server platform.
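To make that decomposition concrete, below is a minimal Python sketch with purely hypothetical service names and stand-in logic, not Uber’s actual design. Each function represents an independently deployed, containerised service; plain function calls stand in for the network requests an orchestrator would route between them.

```python
# Hypothetical sketch: a trip request composed from single-purpose services.
# Each function stands in for a separately containerised service.

def map_area(customer_location: str) -> dict:
    """Area-mapping service: resolves the customer's pickup zone."""
    return {"zone": f"zone-for-{customer_location}"}

def find_drivers(zone: dict) -> list[str]:
    """Driver-lookup service: lists drivers available in a zone."""
    return [f"driver-1@{zone['zone']}", f"driver-2@{zone['zone']}"]

def pick_best_driver(drivers: list[str]) -> str:
    """Matching service: compares candidates for the best fit."""
    return drivers[0]  # stand-in for a real scoring algorithm

def price_route(zone: dict) -> float:
    """Pricing service: calculates the cost for the mapped route."""
    return 57.50  # stand-in for a real fare calculation

def request_trip(customer_location: str) -> dict:
    """Composes the services, as an API gateway would."""
    zone = map_area(customer_location)
    driver = pick_best_driver(find_drivers(zone))
    return {"driver": driver, "fare": price_route(zone)}

print(request_trip("main-street-42"))
```

Each of these pieces can then be scaled, updated and restarted independently, which is exactly the property the orchestrator exploits.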

That doesn’t mean monolithic applications can’t run in a container. In fact, many companies follow that route as it still provides some of the cloud native benefits, without having to undertake the time-consuming task of completely rewriting an application.

The next point to consider is how all of these containers are going to be managed. The progression went from a handful of mainframes to dozens of servers, then hundreds of virtual machines, and now thousands of containers.

Enter Kubernetes

As discussed in a previous blog post, Kubernetes comes from a project at Google called “Borg”, which was renamed and open-sourced as “Kubernetes” in 2014. There have been many other container orchestration engines, like Cattle, Mesos and more, but in the end Kubernetes won the race, and it is now the standard for managing containerised applications.

Kubernetes (also known as K8s) is an open-source system for automating the deployment, scaling, and management of containerised applications. The whole idea is that Kubernetes can run on a laptop, on a server or in the cloud, which means that it can deploy a service or application in a container and move it to the next environment without experiencing a problem. Kubernetes has some amazing features that make it a powerful orchestration tool, including:

  • Horizontal scaling – containers are created in seconds to keep up with demand, so a website or application will never go down because of user traffic, and additional compute nodes can be added or removed according to workload demand.
  • Automated rollouts and rollbacks – new features of an application can be tested on, say, 20% of users, slowly rolling out to more as they prove safe. If a problem appears, the change can simply be rolled back to the previous working version with minimal interruption.
  • Self-healing – failing containers are restarted, containers are replaced or rescheduled when nodes die, and containers that stop responding are killed. It all happens automatically (see the sketch after this list).
  • Multi-architecture – Kubernetes can manage both x86- and ARM-based clusters.
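To give a feel for how self-healing and horizontal scaling work, here is a toy Python sketch of the control-loop idea behind them; the names and numbers are illustrative, not Kubernetes’ actual code. The scaling rule mirrors the documented Horizontal Pod Autoscaler formula: desired = ceil(current × currentMetric / targetMetric).

```python
# Toy reconciliation loop: a controller repeatedly compares desired state
# with observed state and corrects the difference. Illustrative only.
import math

def reconcile(desired_replicas: int, running: list[str]) -> list[str]:
    """One pass of the control loop: converge running pods on the desired count."""
    healthy = [pod for pod in running if "(failed)" not in pod]  # drop failed pods
    n = 0
    while len(healthy) < desired_replicas:   # self-healing: replace failed/missing pods
        healthy.append(f"pod-replacement-{n}")
        n += 1
    return healthy[:desired_replicas]        # scale down any excess

def desired_from_load(current_replicas: int, current_cpu: float, target_cpu: float) -> int:
    """Horizontal Pod Autoscaler rule: desired = ceil(current * metric / target)."""
    return math.ceil(current_replicas * current_cpu / target_cpu)

pods = reconcile(desired_replicas=3, running=["pod-0", "pod-1 (failed)", "pod-2"])
print(pods)  # the failed pod has been replaced automatically
print(desired_from_load(current_replicas=3, current_cpu=90.0, target_cpu=60.0))  # -> 5
```

In a real cluster, this loop runs continuously against the live state of the platform, which is what makes the behaviour automatic rather than operator-driven.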

Kubernetes is what allows companies like Netflix, Booking.com and Uber to handle customers at the scale of millions, and it gives them the ability to release new versions and code daily.

What about serverless? ‘Serverless’ does not literally mean “without a server”, and under the hood it is often still based on Kubernetes. Simply put, it means that a cloud provider makes resources available on demand when needed, instead of allocating them permanently. It is an event-driven model and will be discussed in more detail in a later blog post.

In future posts, we will discuss application modernisation and why it is so important, using real-world examples from actual teams. These will show how businesses that adopt containers, DevOps and cloud native are moving ahead of their competitors at an exponential rate.


What is Cloud Native?

Posted on September 20, 2022 by Deon Stroebel
INSIGHTS

What is cloud native?

Let’s start with a definition. According to the Cloud Native Computing Foundation (CNCF), ‘cloud native’ can be defined as “technologies that empower organisations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds”. Essentially, it is technology purposefully built to take full advantage of the cloud’s scalability and reliability, while being “resilient, manageable and observable”.

Even though the word “cloud” features heavily in the explanation, it doesn’t mean the application has to operate exclusively in the cloud. Cloud native applications can also run in your own data centre or server room: the term refers to how the application is built (to make use of the cloud’s advantages), not where it must run.

How did it start?

While containerisation and microservices have been around for decades, they didn’t really become popular until 2015, when businesses pounced on Docker because of how easily it let computing workloads run in the cloud. Google open-sourced its container orchestration tool Kubernetes around the same time, and it soon became the tool of choice for everyone using microservices. Fast forward to today and there are many flavours of Kubernetes available, as both community and enterprise options.

How does it work?

As this piece has explained, cloud native means having the ability to run and scale an application in a modern, dynamic environment. For most applications today this is just not possible, because they are monolithic in nature: the entire application comes from a single code base, with all its features bundled into one app and one set of code. Such applications need to know what server they are on, where their database is, where they send their outputs and which sources they expect inputs from. Taking an application like that out of a data centre and placing it in the cloud doesn’t really work as expected. It can be made to work, but it’s not pretty, it costs a lot of money, and it won’t get the full benefit of the cloud.

This is not true for all monolithic applications, but the ideal is to move toward microservices, where each important component of the application has its own code base. Take Netflix, for example: one service handles profiles, the next handles user accounts, the next billing, the next lists television shows and movies, and so on. The end result is thousands of these services, all communicating with each other through an API (Application Programming Interface). Each service takes a defined input and produces an output, so if the accounts service needs to run a payment, it sends the user code and the amount to the payment service. The payment service receives the request, checks the banking details with the user data service, processes the payment, and sends a success or failure status back to the accounts service. It also means a small team can be dedicated to a single service, ensuring it functions properly.
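As a minimal sketch of that payment flow, the Python below uses hypothetical service names, with in-process function calls standing in for the real API requests between independently deployed services.

```python
# Hypothetical sketch of the accounts -> payment -> user-data flow described above.

def user_data_service(user_code: str) -> dict:
    """Owns user records; returns the banking details for a user."""
    return {"user": user_code, "bank_account": "000-111-222"}

def payment_service(user_code: str, amount: float) -> str:
    """Receives a payment request, checks banking details, processes payment."""
    details = user_data_service(user_code)      # an API call in a real system
    if details.get("bank_account"):
        return f"payment-success:{amount:.2f}"  # stand-in for a payment gateway call
    return "payment-failed"

def accounts_service(user_code: str, amount: float) -> str:
    """Needs a payment run: sends the user code and amount to payments."""
    status = payment_service(user_code, amount)  # an API call in a real system
    return f"account updated: {status}"

print(accounts_service("user-42", amount=199.00))
```

The narrow input/output contract between services is what lets each one be owned, deployed and scaled by its own small team.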

Moving a set of services like this to the cloud is fairly simple, as they usually hold no state (so they can be killed and restarted at will) and no storage of their own, so it doesn’t matter where they run.

Where is it going?

The latest cloud native survey by the Cloud Native Computing Foundation (CNCF) suggests that 96% of organisations are either evaluating Kubernetes, experimenting with it, or have already implemented it. Over 5.6 million developers worldwide are using Kubernetes, representing 31% of current backend developers. The survey also suggests that cloud native computing will continue to grow, with enterprises adopting even less mature cloud native projects to solve complicated problems.

In future posts, application modernisation will be discussed in more detail to explain how businesses are really growing and thriving with this new paradigm.


Press release: LSD expands its Managed Kubernetes Platform with SUSE Platinum partner status

Posted on July 14, 2022 by Charl Barkhuizen
INSIGHTS

14 July – Johannesburg

LSD today announced that it is augmenting its Managed Kubernetes Platform with SUSE Rancher offering by achieving SUSE Platinum partner and Managed Service Provider (MSP) status. LSD is currently the only partner in Sub-Saharan Africa to achieve Platinum status. These achievements are a key part of LSD’s strategy of delivering certified expert services to customers, a portfolio that now features SUSE Rancher, SUSE NeuVector and Harvester. The partnership level means that LSD can offer certified managed services to its customers, including enterprise-grade container security through SUSE’s recent acquisition of NeuVector.

“LSD is a long-standing partner with SUSE. They have proven their commitment through the high level of technical certification they hold,” says Ton Musters, SVP Channel & Cloud EMEA, APJ, GC for SUSE.

“LSD has been working with Rancher for many years and manages the primary estates of retailers, banks and a telco. Their management of other Kubernetes clusters is also fantastic, especially the hyperscaler versions such as AWS EKS. LSD has incorporated SUSE Rancher, NeuVector and Harvester into our main offering, as the value it brings to our clients is fantastic and our technology team loves working on it,” says Deon Stroebel, Head of Solutions for LSD Open.

LSD

LSD was founded in 2001 and wants to inspire the world by embracing OPEN philosophy and technology, empowering people to be their authentic best selves, all while having fun. LSD is your cloud native digital acceleration partner, providing a fully managed and engineered cloud native accelerator that leverages a foundation of containerisation, Kubernetes and open-source technologies. LSD is a silver member of the Cloud Native Computing Foundation (CNCF) and a Kubernetes Certified Services Provider (KCSP).

