Tag: Why Cloud Native

The Technical Benefits of Cloud Native architecture (Part 1)

Posted on February 2, 2023 by Andrew McIver
INSIGHTS

Building an application in a cloud native way simply means that you will derive the full benefit the cloud provides, even if your application is running in your on-premises data centre. In the first part of this two-part blog post, I will expand on some of the benefits that your application, infrastructure, business and users will experience by architecting cloud natively.

Keep in mind that these benefits will only be present if we make the following assumptions:

  1. Every component in the environment is generated through Infrastructure-as-code and/or config-as-code (see the sketch after this list);
  2. Applications are built to be immutable;
  3. Applications are designed to run within a pay-as-you-use cloud-pricing environment and so embrace elasticity (scale on demand).
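
To make assumption 1 concrete, below is a minimal, hedged sketch of infrastructure-as-code using Pulumi's Python SDK. The tool choice and resource names are illustrative assumptions, not something this post prescribes; the point is that declaring infrastructure in code makes every environment reproducible on demand.

```python
# A minimal infrastructure-as-code sketch using Pulumi's Python SDK.
# Assumptions: Pulumi CLI configured with AWS credentials; run with
# `pulumi up`. Tool and resource names are illustrative only.
import pulumi
from pulumi_aws import s3

# The bucket is declared in code, so recreating the environment is a
# repeatable, reviewable operation rather than a manual one.
bucket = s3.Bucket("app-artifacts")

# Export the bucket name so pipelines or other stacks can consume it.
pulumi.export("bucket_name", bucket.id)
```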

Your application can run wherever you want it to

Cloud native applications are made up of microservices that each run in containers, which can simply be moved between container orchestrators like Kubernetes, Red Hat OpenShift or VMware Tanzu. In turn, the orchestrators can run on top of any public cloud platform, on multiple cloud platforms in a hybrid or multi-cloud configuration, or in your data centre. It all depends on your applications’ needs and your own preferences.

You will no longer need to troubleshoot differences between environments (a developer’s laptop versus testing, say) when the application misbehaves. The container behaves the same in any environment because the configuration and infrastructure are defined as code at the time of deployment.
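
As a hedged illustration of that portability, here is a sketch using the Docker SDK for Python; the image name and environment variables are hypothetical. The same image, with configuration injected at deploy time, behaves identically on a laptop, a test server or a production node.

```python
# Sketch: run the exact image that was tested, anywhere Docker runs.
# Requires the Docker SDK for Python (pip install docker); the image
# name and environment variables are hypothetical.
import docker

client = docker.from_env()

# Configuration is injected at deploy time instead of being baked into
# the host, so the container's behaviour is the same in any environment.
output = client.containers.run(
    "registry.example.com/myapp:1.4.2",
    environment={"DB_HOST": "db.internal", "LOG_LEVEL": "info"},
    remove=True,
)
print(output)
```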

Pull existing services directly from cloud vendors instead of creating your own

When your application needs a specific component to function, you can leverage existing vendor-built services that are available to your cloud environment. Let’s say, for example, your application requires a storage layer. Instead of spinning up and configuring one of your own, you’re simply able to call a storage layer service from the cloud provider’s library. That means less work on your side and your application will be using services that are vendor-approved and supported.
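
As a hedged example of leaning on a vendor-built service, the sketch below uses AWS S3 via boto3 as the storage layer (bucket and key names are hypothetical). Durability, replication and scaling sit behind the provider's API rather than on your plate.

```python
# Sketch: consume a vendor-built storage service instead of running
# your own. Uses boto3 (pip install boto3) with AWS credentials already
# configured; bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

# Store and retrieve application data; the provider handles durability,
# replication and scaling behind this API.
s3.put_object(Bucket="app-data", Key="reports/latest.json", Body=b"{}")
obj = s3.get_object(Bucket="app-data", Key="reports/latest.json")
print(obj["Body"].read())
```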

Scaling up your application doesn’t necessarily mean more hardware costs

Traditionally, applications hosted in a data centre would require the addition of more physical hardware and rack space to facilitate resource scaling. When using a hybrid model with cloud native architecture, scaling out the application can be done with cloud computing resources, at a much lower cost and with a much shorter lead time than provisioning physical hardware. Dynamic scaling also means that when you don’t use the additional resources, you don’t pay for them.
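
For a concrete feel of on-demand scaling, here is a hedged sketch using the official Kubernetes Python client; the deployment and namespace names are hypothetical. One API call grows the web tier for peak demand, and the same call with a smaller number releases the resources afterwards.

```python
# Sketch: scale a Deployment up or down with the official Kubernetes
# Python client (pip install kubernetes). Names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod
apps = client.AppsV1Api()

# Grow the web tier to 10 replicas for peak demand; when demand drops,
# the same call with a smaller number releases the resources (and the
# cost, under pay-as-you-use pricing).
apps.patch_namespaced_deployment_scale(
    name="web-frontend",
    namespace="production",
    body={"spec": {"replicas": 10}},
)
```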

Fault tolerance and self-healing

Cloud native applications are built to be fault-tolerant, meaning they assume the infrastructure is unreliable. This minimises the impact of failures so that the entire application doesn’t fall over when one component is unresponsive. You’re also able to run more than one instance of an application at a time so that it remains available to customers while tolerating outages.

Containers (and therefore containerised applications) also have the ability to self-heal. If a component becomes unresponsive or suffers an issue, a new one can just be started and the old one stopped, without the need for someone to fix the problem first before bringing the application up again.
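
In Kubernetes terms, that self-healing contract is typically expressed as a liveness probe. The hedged sketch below uses the Kubernetes Python client's models; names and thresholds are illustrative. If the health endpoint stops answering, the kubelet restarts the container without anyone intervening.

```python
# Sketch of a self-healing contract: a liveness probe defined with the
# Kubernetes Python client's models. Names and thresholds are
# illustrative assumptions.
from kubernetes import client

container = client.V1Container(
    name="myapp",
    image="registry.example.com/myapp:1.4.2",
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10,  # give the app time to start
        period_seconds=5,          # probe every 5 seconds
        failure_threshold=3,       # restart after 3 consecutive failures
    ),
)
```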

To summarise, your containerised applications or services can run wherever you need them to, with homogenised environments between development, testing and production. They can heal themselves and are ready for failures so that less time is spent fixing and restoring your application experience. You’re also able to make full use of the cloud for both scaling resources and making use of pre-built cloud services to complete your application.

In part 2, I will look at more benefits of architecting your application in a cloud native way.

Andrew 'Mac' McIver

At LSD Information Technology, I'm helping clients embrace OPEN, including Cloud-Native, Containerization, Architecture, DevSecOps and Automation. I'm an Open Source evangelist who loves helping people get the best out of their cloud experience.

What is Event-Streaming?

Posted on January 13, 2023 by Doug Moll
INSIGHTS

The evolution of business

There have been massive shifts in how customers consume services over the last few years, necessitating a move to event-driven architectures in order to effectively deliver rich customer experiences and facilitate back-end operations. Where it was once the norm to queue at a bank, customers now expect a fully digital banking experience, either online or through an app. They want instant notifications when transactions occur or if there is potentially fraudulent activity on their account. They also want to apply for new banking products through an app and expect immediate approval, or at least some interaction confirming that something has indeed happened with their request.

Another such example is grocery shopping, where there is a definite shift in consumer behaviour, with many people wanting to order their groceries online. They want delivery within an hour, live tracking of their order once it is placed, live tracking of the driver once they are on their way, and notification when their groceries are about to arrive.

Do you notice something similar in these two examples? Both the rich front-end experiences and the back-end operations that support them are driven by events happening in real time.

In this post, I will discuss event streaming, the paradigm many businesses are turning to in order to make all of this possible. I will also detail why this approach has become so immensely popular and is quickly replacing or supplementing more traditional approaches to delivering real-time back-end operations and rich front-end customer experiences.

What is Event Streaming?

To understand this, I will start by looking at what is meant by an event. This is pretty straightforward as a concept. An event is simply something that has happened. A change in the operation of some piece of equipment or device is an event. The change in the location of a driver reported by a GPS device is an event. Someone clicking on something in a web app or interacting with a mobile device is an event. A change in a business process, such as an invoice becoming past due, is an event. Sales, trades and shipments are all events. You get the idea.

Giving all lines of business the ability to harness, reason on top of and act on all events occurring in a business in real time can solve a multitude of problems, as discussed in the introduction, and is exactly what event streaming is designed to do.

A note before I continue: Apache Kafka is recognised as the de facto standard for event streaming platforms, and many of my assertions in this post are based on this technology.

Event streaming uses the publish/subscribe approach to enable asynchronous communications between systems. This approach effectively decouples applications that send events from those that receive them.

In Apache Kafka we talk about producers and consumers, where your producers are apps or other data sources which produce events to what we refer to as topics. Consumers sit on the other end of the platform and subscribe to these topics from where they consume the events.
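
A minimal sketch of this producer/consumer relationship, using the kafka-python client; the broker address, topic name and payload are assumptions. The producer publishes to a topic and moves on; the consumer subscribes independently.

```python
# Sketch: publish/subscribe with kafka-python (pip install kafka-python).
# Broker address, topic name and payload are illustrative assumptions.
from kafka import KafkaProducer, KafkaConsumer

# Producer: publishes an event to a topic without knowing who consumes it.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("payments", b'{"order_id": 42, "amount": 199.90}')
producer.flush()

# Consumer: subscribes to the same topic independently; other consumer
# groups can read the same events at their own pace.
consumer = KafkaConsumer(
    "payments",
    bootstrap_servers="localhost:9092",
    group_id="fraud-detection",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.offset, message.value)
```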

The decoupling of producers from consumers is vital in mitigating delays associated with synchronous communications and catering for multiple consumers or clients as opposed to point-to-point messaging.

The topics mentioned above are broken up into partitions, and the events themselves are written to these partitions. Simply put, partitions are persisted, append-only logs. This persistence means the same events can be consumed by multiple applications, each with its own offsets. It also means apps can replay events if required.
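
Because partitions are persisted logs, replay is just a matter of rewinding offsets. A hedged sketch, again with kafka-python; the topic, partition and group names are assumptions.

```python
# Sketch: replaying events by rewinding a consumer's offsets.
# Topic, partition and group names are illustrative assumptions.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id="reporting",  # offsets are tracked per consumer group
)
partition = TopicPartition("payments", 0)
consumer.assign([partition])

# Rewind to the start of the persisted log and re-consume every event.
consumer.seek_to_beginning(partition)
for message in consumer:
    print(message.offset, message.value)
```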

There are also a number of other aspects that characterise an event streaming solution, such as the scalability, elasticity, reliability, durability, stream processing, flexibility and openness afforded to developers.

This is a very high-level depiction of what characterises an event streaming platform, but when you bring these aspects together, you have the building blocks for a fully event-driven architecture.

Some of the concepts discussed above may already be familiar, and there certainly are many technologies that fall into the integration or middleware space. The next section details the key requirements for an event streaming platform, along with what differentiates event streaming from other, similar technologies.

Event Streaming key requirements and differentiators

There are four key requirements for enabling event-driven architectures. I have detailed these below, along with the main differentiators from traditional tools used in the middleware/integration space.

  1. The system needs to be built for real-time events. This is in contrast to a data warehouse, for example, which is good at historical analysis and reporting. When considering databases, the general assumption is that the data is at rest, with slow batch processes run daily. Databases also typically store only the current state, not all the events that occurred to reach that state. ETL tools, in turn, are traditionally good for batch processing but not real-time events.
  2. The system needs to be scalable enough to cater for all event data. This goes beyond just the transactional data that a database, for example, has historically been used for. As described above, events stretch across the whole business and can include data from IoT devices, logs, security events, customer interactions, etc. The volumes of data from these sources are significantly greater than transactional data. Message queues (MQs) have traditionally been used as a form of asynchronous service-to-service communication but struggle to scale efficiently to handle large volumes of events while remaining performant.
  3. The system needs to be persistent and durable. Data loss is not acceptable for mission-critical applications. Furthermore, mission-critical applications typically have a requirement for events to be replayed, which means events must be persisted. MQs are transient in nature and messages are not persisted, meaning once an event is consumed, it is dropped. This has some obvious downsides if the intention is for events to be replayed or for multiple domains in the business to subscribe to the same events.
  4. The system needs to provide flexibility and openness to development teams, and have the capability to enrich events in real time. Although not relevant for all use cases, the capability to enrich data in flight has a multitude of benefits and can alleviate the effort of multiple consumers each having to build enrichment logic into their applications (see the sketch after this list). The important aspect is the flexibility event streaming architectures provide: the system can offer real-time enrichment pipelines, or simply dumb pipelines through which events are passed and consumed directly by applications. There have historically been many systems in the application integration or middleware space, including ETL tools for batch pipelines, ESBs to enable SOAs and many more that fulfil very similar functions to each other. I am not going to fully unpack the pros and cons of each technology; suffice it to say that businesses are moving away from “black box” systems such as ESBs, where massive dependencies on critical skills often cause bottlenecks in productivity. In contrast, event streaming provides businesses with flexible, open architectures which decouple applications, alleviate these dependencies and allow any kind of application or system to be integrated regardless of the technologies used.
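
As referenced in point 4, here is a hedged sketch of in-flight enrichment as a consume-transform-produce loop with kafka-python; the topic names and the lookup are assumptions. Events are enriched once, centrally, instead of in every consuming application.

```python
# Sketch: enrich events in flight, once, in a consume-transform-produce
# loop. Uses kafka-python; topics and the lookup are assumptions.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "transactions.raw",
    bootstrap_servers="localhost:9092",
    group_id="enrichment-pipeline",
    value_deserializer=lambda v: json.loads(v),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

def lookup_customer_segment(customer_id):
    # Hypothetical reference-data lookup (a cache, table or compacted topic).
    return "retail"

for message in consumer:
    event = message.value
    # Add the enrichment once, centrally; downstream consumers read the
    # enriched topic and need no enrichment logic of their own.
    event["segment"] = lookup_customer_segment(event.get("customer_id"))
    producer.send("transactions.enriched", event)
```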

When considering the four key requirements above, event streaming is the only technology that ticks all the boxes, and it is quickly becoming entrenched as a critical component of the data infrastructure of businesses.

What is the end goal?

To circle back to the opening of my post, expectations on businesses to continuously improve the experiences delivered to customers, as well as the back-end operations which support them, are high.

The more focused a business is on delivering real-time event-driven architectures which are scalable, persistent, durable, open, flexible and remove critical dependencies, the more likely they are to succeed and rise above the competition.

There is a very distinct maturity curve when it comes to the adoption of event streaming. Becoming a fully event-driven organisation means that event streaming has been deployed across all lines of business and serves as the central nervous system of all business events. Getting to this point is a journey that often requires a shift in how things have traditionally been done.

Luckily, this is a journey that you do not need to go on alone. Please feel free to contact us to see how we can help, and stay tuned for my next post, where I will go a bit more in-depth on the use cases and benefits of adopting event streaming.

Doug Moll

Doug Moll is a solution architect at LSD, focusing on Observability and Event-Streaming for cloud native platforms.

What is best for my business – doing Kubernetes ourselves or a managed service?

Posted on November 25, 2022 by Deon Stroebel
INSIGHTS

Kubernetes and Cloud Native are two of the hottest topics in IT at the moment. From the CIO to the development teams, people are looking at Kubernetes and application modernisation to accelerate business innovation and reduce time to market for new products and features. But how do these platforms get managed? Do you do it in-house using your own talent, or is the better option to find a managed service provider to do it for you?

Cloud Native is a much broader topic than just Kubernetes, and both are extremely complex new technologies. Let’s look at Kubernetes first as the foundation and work our way to cloud native.

So many options to choose from

We’ve already covered what Kubernetes is and where it comes from in an earlier blog post. Companies are modernising their applications to run in a cloud native way to provide digital services to their customers wherever they are (I won’t go into much detail on the modernisation process, but you can read up on containers and microservices in an article published here by my colleague Andrew McIver). These modernised applications are refactored to run in containers, which are typically orchestrated with Kubernetes and can span infrastructure both on-premises and in the cloud. Kubernetes is a popular choice as the orchestrator because of what it enables, yes, but also because it is an open source project with many different distributions that cater to the different preferences of the organisations that use it.

We have seen vendors build their own Kubernetes solutions, with Red Hat OpenShift, VMware Tanzu and SUSE Rancher being some of the top distributions adopted globally. Public cloud vendors have also come up with their own flavours of Kubernetes: EKS from AWS, AKS from Azure, and GKE and Anthos from Google. Then there are the open source projects, like OpenShift Community Edition (OKD), Rancher and Kubernetes itself, on which all of these editions are based.

Once your organisation has decided that containers and Kubernetes are right for you, how do you get started, and what is best for the business? Do you build and manage Kubernetes yourself, or do you partner with a service provider to manage it for you? I think we can all agree that there is no one right answer to suit everyone; not only each company, but each business unit in a large organisation, will have different requirements. In general, though, we can look at the pros and cons and hopefully help you make your decision.

I speak to many companies every day about this topic, and I can completely understand why this is an important consideration. Organisations want to make sure they empower their employees, reduce costs and reduce the reliance on outsourcing.

Start with the Business Goals

Firstly, we need to understand the business strategy, objectives and goals. We want to answer a few questions, which will really help us determine the best course of action:

  1. What is the urgency? Do we need this platform done immediately, or can we take 12 to 18 months? (Unfortunately, this is typically the time it takes, even with a high-performing Linux team.)
  2. What kind of skills do we have internally? And what is their availability like with everything else on their plates?
  3. Are we looking into public cloud? Do we have a timeframe in mind for including public cloud in our infrastructure? And are we going with a hybrid or multi-cloud configuration?
  4. Do we have good automation, DevSecOps, Infrastructure-as-Code patterns and principles in our organisation?

Once you have answers to these questions, you can start to make sense of what the best options are. I won’t go into detail for each of them, but the important considerations are around urgency and skill.

Urgency and skill

If you are looking to move fast, getting a skilled Kubernetes-focused company to assist in the deployment and management of the platform makes a lot of sense. It removes the burden and worry from the organisation and gives your team the time to learn essential cloud native skills.

There is a major skills shortage at the moment, and it is very difficult to find people with not only the skills but also experience managing Kubernetes. Building a platform can usually be done by following vendor documentation, but managing and supporting Kubernetes is complex, and this is one of the reasons we get contacted the most by companies that aren’t LSD customers yet.

Doing it yourself

Let’s look at the benefits of the DIY approach:

  • Your team will grow and learn a new technology
  • You do not need to rely on outside skills to ensure business continuity
  • It might be more cost-effective to use existing skills
  • It provides internal growth for your employees

As for drawbacks of the DIY approach, let’s consider the following:

  • It can be a slow process to upskill people
  • As mentioned above, there is a big skills gap; finding people is difficult, and keeping them is even harder
  • When something goes wrong, your team is responsible for fixing it, and might not yet be able to

Using a Managed Service

If we look at managed services that are done correctly, they should give your business the following:

  • A platform deployed faster and to the vendor’s standards.
  • All the skills needed for the job: not just certifications, but actual experience building these platforms. Too many companies get paid to learn at their clients’ expense.
  • 24/7 support, because even if you have some skills in-house, expecting those people to work around the clock is not sustainable, and they will leave.
  • Removal of the key-person dependency that so many companies are plagued by.
  • Better economics: if you account for the cost of time, certifications, losing talent, rehiring and more, the managed service route often works out cheaper.
  • Internal resources freed up to focus on what they do best, especially developers.
  • Platforms deployed to the vendors’ open standards, in such a way that you can take them over once your team has the skills.

The drawbacks of using a managed service are:

  • Your business is relying on external skills
  • The perceived threat that your team will be replaced can be difficult to navigate and alleviate

One of the things I feel very strongly about is that we do not want to replace people; we want to grow and empower them. The goal of a managed service should never be to replace staff. It should be to give the business the best chance of success, getting up and running fast while giving the staff time to learn and grow. It essentially becomes another tool in their toolbox.

I also want to add that services like EKS, AKS and GKE from the hyperscalers still require a lot of management and support. The management they provide is not enough, and I would include a managed service to look after those nodes too.

Hopefully, you now have a better understanding of how a managed service weighs up against doing it yourself. There are benefits and drawbacks to both approaches, and how well each performs depends on your unique scenario.

Deon Stroebel

Head of Solutions for LSD, which allows me to build great modernisation strategies with my clients, ensuring we deliver solutions that meet their needs and accelerate them toward a modern application future. With 7 years’ experience in Kubernetes and Cloud Native, I understand the business impact, limitations and stumbling blocks faced by organisations. I love all things tech, especially around cloud native and Kubernetes, and really enjoy helping customers realise business benefits and outcomes through great technology. I spent a year living in Portugal, where I really started to understand clear communication and how to present difficult topics clearly. This has helped me articulate complex problems to clients and show them why I am able to help move their organisations forward through innovation and transformation. My industry knowledge is focused on Cloud Native and Kubernetes, with vendors such as VMware Tanzu, Red Hat OpenShift, Rancher, AWS, Elastic and Kafka.

What is Observability? (Part 1)

Posted on October 21, 2022 by Doug Moll
INSIGHTS

In part one of this two-part series of posts, I’ll be discussing my views on the fundamentals and key elements of Observability, as opposed to a technical deep dive. There are many great resources out there which already take a closer look at the key concepts. First off, let’s look at what Observability is.

What is Observability?

The CNCF defines Observability as “the capability to continuously generate and discover actionable insights based on signals from the system under observation”.

Essentially, the goal of Observability is to detect and fix problems as fast as possible. In the world of monolithic apps and older architectures, monitoring was often enough to accomplish this goal, but with the world moving to distributed architectures and microservices, it is not always obvious why a problem has occurred merely by monitoring an isolated metric that has spiked.

This is where observability becomes a necessity. With observability basically being a measure of how well the internal state of a system can be understood based on its signals, it stands to reason that all the right data is needed! In a distributed system the right data is typically regarded to be logs, metrics and application traces, often referred to as the “three pillars of observability”.

While these are the generally agreed upon key indicators, it is important in my view to also look at including user experience data, uptime data, as well as synthetic data to provide an end-to-end observable system.

The analyst’s ability to then gain the relevant insights from this data to detect and fix root cause events in the quickest and most efficient way possible is the measure of how effectively observability has been implemented for the system.

There are a number of aspects which can determine the success of your observability efforts, some of which carry more weight than others. There are also tons of observability tools and solutions to choose from. What is fairly typical amongst the customers LSD engages with is that they have numerous tools in their stable but have not met their observability goals, and therefore haven’t reached the desired state.

Let’s explore this a bit more by looking at what the desired state may look like.

What is the desired state?

This is best explained by looking at an example: A particular service has a spike in latency which is likely picked up through an alert. How does an analyst go from there to determine the root cause of the latency spike?

Firstly, the analyst may want to trace the transaction causing the latency spike. For this, they would analyse the full distributed trace of the high-latency events. Having identified the transaction, the analyst still does not know the root cause. Some clues may lie in the metrics of the host or container it ran in, so that may be the next course of action. The root cause is most often found in the logs, so ultimately the analyst would want to analyse the logs for the specific transaction in question.

The above scenario is fairly simple; however, achieving this in the most efficient way relies on the ability to optimally correlate logs, metrics and traces.

Proper correlation means being able to jump directly from a transaction in a trace to the logs for that specific transaction, or being able to jump directly to the metrics of the container it ran in. To me, the most effective way to achieve this is for all the logs, metrics and traces, to exist in the same observability platform and to share the same schema.
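
One common way to make that correlation possible is to stamp every log line with the active trace ID. Below is a hedged sketch using OpenTelemetry's Python API; the service and span names are assumptions, and it presumes an OpenTelemetry SDK and exporter are configured elsewhere.

```python
# Sketch: correlate logs with traces by stamping log records with the
# active trace ID. Assumes an OpenTelemetry SDK + exporter are
# configured elsewhere; service and span names are assumptions.
import logging
from opentelemetry import trace

tracer = trace.get_tracer("checkout-service")
logger = logging.getLogger("checkout-service")
logging.basicConfig(level=logging.INFO)

with tracer.start_as_current_span("process-payment") as span:
    ctx = span.get_span_context()
    # With the trace ID on the record, the platform can jump from this
    # log line straight to the distributed trace, and back.
    logger.info(
        "payment processed",
        extra={"trace_id": format(ctx.trace_id, "032x")},
    )
```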

In the digital age, customers want a flawless experience when interacting with businesses. Let’s look at a bank for example. There is no room for error when a service is directly interacting with a customer’s finances. So when an online banking service goes down for three days (it happens), it will lose customers or at least suffer reputational damage.

The ultimate goal is to detect and fix root cause events as quickly and efficiently as possible, and it is here that the approach of using multiple disconnected tools falls short.

In part two of this series, I will discuss the most critical factors which contribute to a good Observability solution that will help businesses reach the goals set out above.

 

Learn more about Observability by reading this blog post by Mark Billett, an Observability engineer at LSD.

If you would like to know more about Observability or a Managed Observability Platform, check out our page.

Doug Moll

Doug Moll is a solution architect at LSD, focusing on Observability and Event-Streaming for cloud native platforms.

What is Cloud Native?

Posted on September 20, 2022 by Deon Stroebel
INSIGHTS

What is cloud native?

Let’s start with a definition. According to the Cloud Native Computing Foundation (CNCF), ‘cloud native’ can be defined as “technologies that empower organisations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds”. Essentially, it is technology purposefully built to make full use of the advantages of the cloud in terms of scalability and reliability, and which is “resilient, manageable and observable”.

Even though the word “cloud” features heavily in the explanation, it doesn’t mean the application has to operate exclusively in the cloud. Cloud native applications can also run in your own data centre or server room: the term simply refers to how the application is built (to make use of the cloud’s advantages) and doesn’t pre-determine where it should run (the cloud).

How did it start?

While containerization and microservices have been around for decades, they didn’t really become popular until around 2015, when businesses were pouncing on Docker for containerization because of its ability to easily run computing workloads in the cloud. Google had open-sourced its container orchestration tool Kubernetes around that same time, and it soon became the tool of choice for everyone using microservices. Fast forward to today and there are many different flavours of Kubernetes available, both as community and enterprise options.

How does it work?

As this piece has explained, cloud native means you have the ability to run and scale an application in a modern, dynamic environment. Looking at most applications today, this is just not possible, as they are monolithic in nature, which means the entire application comes from a single code base. All its features are bundled into one app and one set of code. Such applications need to know what server they are on, where their database is, where they send their outputs and from which sources they expect inputs. So taking an application like that from a data centre and placing it in the cloud doesn’t really work as expected. Applications can be made to work on this model, but it’s not pretty, it costs a lot of money, and it won’t have the full benefit of the cloud.

This is not true for all monolithic applications, but the ideal is to move toward microservices. Microservices mean that each important component of the application has its own code base. Take Netflix, for example: one service handles profiles, the next handles user accounts, the next billing, the next lists television shows and movies, and so on. The end result is thousands of these services, which all communicate with each other through an API (Application Programming Interface). Each service has a required input and produces an output, so if the accounts service needs to run a payment, it sends the user code and the amount to the payment service. The payment service receives the request and checks the banking details with the user data service, then processes the payment and sends the successful or failed completion status back to the accounts service. This structure also means a small team can be dedicated to a single service, ensuring it functions properly.
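
To make the accounts-to-payment interaction concrete, here is a hedged sketch of one service calling another over HTTP; the endpoint, payload shape and field names are hypothetical, and it uses the requests library.

```python
# Sketch: service-to-service communication over an API. The endpoint,
# payload shape and field names are hypothetical assumptions.
import requests

def charge_user(user_code: str, amount: float) -> bool:
    # The accounts service only needs the payment service's contract
    # (endpoint + payload), nothing about its internal implementation.
    response = requests.post(
        "http://payment-service/api/v1/payments",
        json={"user_code": user_code, "amount": amount},
        timeout=5,
    )
    return response.ok and response.json().get("status") == "success"

print(charge_user("u-1001", 199.90))
```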

Now, moving a set of services to the cloud is fairly simple, as they usually have no state (so they can be killed and restarted at will) and no storage, so it doesn’t matter where they start.

Where is it going?

The latest cloud native survey by the Cloud Native Computing Foundation (CNCF) suggests that 96% of organisations are either evaluating, experimenting or have implemented Kubernetes. Over 5.6 million developers worldwide are using Kubernetes, which represents 31% of current backend developers. The survey also suggests that cloud native computing will continue to grow, with enterprises even adopting less mature cloud native projects to solve complicated problems.

In our future posts, application modernisation will be discussed in more detail and used to explain how businesses are really growing and thriving with this new paradigm.

Deon Stroebel

Head of Solutions for LSD, which allows me to build great modernisation strategies with my clients, ensuring we deliver solutions that meet their needs and accelerate them toward a modern application future. With 7 years’ experience in Kubernetes and Cloud Native, I understand the business impact, limitations and stumbling blocks faced by organisations. I love all things tech, especially around cloud native and Kubernetes, and really enjoy helping customers realise business benefits and outcomes through great technology. I spent a year living in Portugal, where I really started to understand clear communication and how to present difficult topics clearly. This has helped me articulate complex problems to clients and show them why I am able to help move their organisations forward through innovation and transformation. My industry knowledge is focused on Cloud Native and Kubernetes, with vendors such as VMware Tanzu, Red Hat OpenShift, Rancher, AWS, Elastic and Kafka.
