Category: Cloud Native Tech

What are the benefits of Observability?


Posted on November 4, 2022 by Doug Moll

Now that we have explored what observability is and what makes up a good observability solution, we can dive a bit deeper into the benefits. This is again not an exhaustive list, but these are the benefits I consider most impactful to businesses. Although some have been touched on in my previous posts, this post consolidates them and adds the missing pieces.

More performance, less downtime

Leaders in the observability space can detect and resolve issues considerably faster than businesses that are still relatively immature in this space. This includes issues relating to application performance or downtime.

Poorly performing applications or applications experiencing downtime have a direct impact on costs for any business. These can be in the form of tangible costs such as a direct loss in revenue or intangible costs such as brand and reputational damage.

Consider an eCommerce store that cannot transact due to a broken payment service, a social application that can no longer serve ads, a real-time trading application with extremely high latency, or a logistics application with a broken tracking service. There are thousands of examples across industries where the costs associated with downtime or poorly performing applications are very tangible.

When a banking application goes down, almost everyone knows about it the minute it happens. Twitter lights up, it appears in everyone's news feeds, and it even ends up on radio and television news broadcasts. Apart from the direct costs, the reputational damage caused by the downtime of an application can also be very costly, leading to increased customer churn, loss of new customers and a host of other outcomes that impact the bottom line.

Measuring the true cost of downtime or poorly performing applications can be difficult, but it typically far outweighs the cost of making sure observability is done right, where issues are detected early and fixed before they can have a significant impact.
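As a rough illustration, the tangible side of that cost is easy to model. The figures and the function below are entirely made up for the example; real downtime-cost models are far more involved:

```python
# Hypothetical sketch only: a rough model of the tangible cost of downtime.
def downtime_cost(revenue_per_hour: float, hours_down: float,
                  churned_customers: int = 0,
                  value_per_customer: float = 0.0) -> float:
    """Direct revenue loss plus a crude estimate of churn-driven loss."""
    direct_loss = revenue_per_hour * hours_down
    churn_loss = churned_customers * value_per_customer
    return direct_loss + churn_loss

# An eCommerce store earning 50,000/hour, down for 3 hours, and losing
# 20 customers worth 2,000 each in lifetime value:
cost = downtime_cost(50_000, 3, churned_customers=20, value_per_customer=2_000)
print(cost)  # 190000.0
```

Even with conservative inputs, the total quickly dwarfs what a well-designed observability practice costs, and the intangible reputational losses come on top of it.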

Higher productivity, better customer experience

A properly implemented observability solution provides businesses with massively improved insights across the entirety of the business. These insights improve efficiencies and workflows in detecting and resolving issues across the application landscape. This landscape is distributed in today’s modern architectures and extends to the infrastructure, networks and platforms on which the applications run, both on-prem as well as cloud environments. These insights and efficiencies ultimately provide multiple benefits across business operations.

One of the more tangible benefits is that if your developers and DevOps engineers are not stuck diagnosing problems all day, they can spend their time developing and deploying applications. This means accelerated development cycles, getting applications to market quicker, and ultimately better and more innovative applications.

With businesses being ever more defined by the digital experiences they provide to their customers, observability is one of the edges required to become leaders in the industry. The deeper insights also help to align the different functions of the business. Having visibility on all aspects of the system, from higher level SLAs to all the frontend and backend processes, enables operations and development teams to optimise processes across the landscape. These insights even enable businesses to introduce new sources of income.

Observability is also vital in providing businesses with confidence in their cross-functional processes and assurance that the applications that are brought to market are robust. This confidence is even more important in today’s complex distributed systems which stretch across on-prem and cloud environments.

Happy people, better talent retention

One of the often overlooked benefits of observability is talent retention. With highly skilled developers and DevOps engineers being a bit of a scarcity, it stands to reason that businesses would want to do what they can to retain their best talent.

The frustration of sitting in endless war rooms and spending most of the day putting out fires is a surefire way to ensure highly skilled talent will look for opportunities elsewhere, where they can do what they enjoy.

Efficient observability practices and workflows drastically reduce the amount of time developers and engineers spend dealing with issues, making them happier and ultimately helping to retain them.

Fewer monitoring tools, look at all those benefits

One of the themes from my previous posts is that using multiple monitoring tools instead of a centralised observability solution creates inefficiencies and severely impacts a business's ability to detect and resolve issues. From this post, it should be apparent that the insights gained from a centralised observability solution across the landscape have a number of other benefits too.

Although this post deals with the generic benefits of observability rather than comparing it to other approaches, addressing a few drawbacks of the multiple-tool approach will also highlight additional benefits of the central-platform approach to observability. Below are some of these drawbacks:

  • Licensing multiple monitoring tools introduces unnecessary costs as well as complexity in administering multiple different licensing models.
  • Having multiple tools also introduces complexity across your environment with multiple different agents and tools to be managed and operationally maintained.
  • The diverse and often rare skills required to operate multiple different tools either introduce a burden on existing operations teams or cause reliance on multiple different external parties to implement, manage and maintain tools.
  • Data governance is vital in any tool or system that stores data. Monitoring tools are no different and often contain sensitive data. Governance for a single observability solution is far simpler to achieve and less costly than multiple tools.
  • Storing data also has a cost burden which is often far higher when you have multiple tools, each with its own storage requirements.

The main thing to highlight is that the drawbacks above are secondary to the most important benefit of centralised observability over the multiple-monitoring-tools approach: detecting and resolving issues as quickly and efficiently as possible. This is best achieved with seamless correlation between your logs, metrics and APM data in a centralised platform.
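A toy sketch of that correlation idea, with entirely hypothetical records, shows why a shared trace ID across logs, metrics and APM data turns triage into a single lookup instead of three tool-hops:

```python
# Hypothetical records: in a centralised platform, logs, metrics and APM
# spans live together and can be joined on a shared trace ID in one query.
logs = [
    {"trace_id": "abc123", "level": "ERROR", "message": "payment timeout"},
    {"trace_id": "def456", "level": "INFO", "message": "checkout ok"},
]
metrics = [{"trace_id": "abc123", "latency_ms": 4980}]
apm_spans = [{"trace_id": "abc123", "service": "payment", "status": "failed"}]

def correlate(trace_id: str) -> dict:
    """Gather every signal that shares one trace ID."""
    return {
        "logs": [l for l in logs if l["trace_id"] == trace_id],
        "metrics": [m for m in metrics if m["trace_id"] == trace_id],
        "spans": [s for s in apm_spans if s["trace_id"] == trace_id],
    }

incident = correlate("abc123")
print(incident["spans"][0]["service"])  # payment
```

With separate monitoring tools, each of those three lists lives in a different system with its own query language, and the "join" happens in an engineer's head during an incident.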

Realising your benefits

To be a leader in the observability space is a journey. As I mentioned in previous posts, observability is not simply achieved by deploying a tool. It starts with architecture and design to ensure the solution adheres to best practices and can scale and grow as the business needs it to. It then extends to ingesting all the right data, formatted and stored in a way that can facilitate efficient correlations and workflows. Then all the other backend and frontend pieces need to fall in place, such as retention management, alerting, security, machine learning, etc.

LSD has been deploying observability solutions for customers for many years, and we help accelerate their journey through our battle-tested solutions and implementation experience. Please follow this link to learn more.

Doug Moll

Doug Moll is a solution architect at LSD, focusing on Observability and Event-Streaming for cloud native platforms.

What is Cloud Native?


Posted on September 20, 2022September 21, 2022 by Deon Stroebel

What is cloud native?

Let’s start with a definition. According to the Cloud Native Computing Foundation (CNCF), ‘cloud native’ can be defined as “technologies that empower organisations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds”. Essentially it is a technology that is purposefully built to make full use of the advantages of the cloud in terms of scalability and reliability, and is “resilient, manageable and observable”.

Even though the word "cloud" features heavily in the definition, it doesn't mean the application has to operate exclusively in the cloud. Cloud native applications can also run in your own data centre or server room: the term refers to how the application is built (to make use of the cloud's advantages), not where it runs.

How did it start?

While virtualization has been around for decades, containers and microservices didn't really become popular until around 2015, when businesses pounced on Docker for containerization because of its ability to easily package and run computing workloads in the cloud. Google open-sourced its container orchestration tool Kubernetes around that same time, and it soon became the tool of choice for everyone using microservices. Fast forward to today and there are various flavours of Kubernetes available, both community and enterprise options.

How does it work?

As this piece has explained, cloud native means the ability to run and scale an application in a modern, dynamic environment. For most applications today this is just not possible, because they are monolithic: the entire application comes from a single code base, with all its features bundled into one app and one set of code. The application needs to know which server it is on, where its database is, where it sends its outputs and which sources it expects inputs from. Taking an application like that from a data centre and placing it in the cloud doesn't really work as expected. It can be made to work, but it isn't pretty, it costs a lot of money, and it won't get the full benefit of the cloud.

This is not true of all monolithic applications, but the ideal is to move toward microservices, where each important component of the application has its own code base. Take Netflix, for example: one service handles profiles, the next handles user accounts, the next billing, the next lists television shows and movies, and so on. The end result is thousands of these services, all communicating with each other through APIs (Application Programming Interfaces). Each service has a defined input and output, so if the accounts service needs to run a payment, it sends the user code and the amount to the payment service. The payment service receives the request, checks the banking details with the user-data service, processes the payment, and sends a success or failure status back to the accounts service. Each service can then have a small team dedicated to it, ensuring it functions properly.

Moving a set of services like this to the cloud is fairly simple, as they usually hold no state (so they can be killed and restarted at will) and no storage, so it doesn't matter where they start.
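The Netflix-style flow described above can be sketched in a few lines. The service names, user IDs and data here are hypothetical, and plain function calls stand in for the HTTP APIs real services would use:

```python
# Hypothetical sketch: each service is a small stateless function with a
# defined input and output, calling its neighbours through a narrow
# interface (function calls standing in for HTTP APIs).
USER_BANK_DETAILS = {"user42": "valid"}  # the user-data service's store

def user_data_service(user_id: str) -> bool:
    """Checks that valid banking details exist for the user."""
    return USER_BANK_DETAILS.get(user_id) == "valid"

def payment_service(user_id: str, amount: float) -> str:
    """Verifies details with the user-data service, then processes payment."""
    if not user_data_service(user_id):
        return "failed"
    # A real service would call a payment gateway here.
    return "success"

def accounts_service(user_id: str, amount: float) -> str:
    """Sends the user code and amount to the payment service."""
    return payment_service(user_id, amount)

print(accounts_service("user42", 99.99))  # success
print(accounts_service("user99", 99.99))  # failed
```

Because each function holds no state of its own, any instance of it can serve any request, which is exactly what makes such services easy to kill, restart and scale anywhere.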

Where is it going?

The latest cloud native survey by the Cloud Native Computing Foundation (CNCF) suggests that 96% of organisations are either evaluating, experimenting with, or have implemented Kubernetes. Over 5.6 million developers worldwide are using Kubernetes, representing 31% of current backend developers. The survey also suggests that cloud native computing will continue to grow, with enterprises adopting even less mature cloud native projects to solve complicated problems.

In future posts, we will discuss application modernisation in more detail and explain how businesses are really growing and thriving with this new paradigm.

Deon Stroebel

Head of Solutions for LSD, which allows me to build great modernisation strategies with my clients, ensuring we deliver solutions that meet their needs and accelerate them toward a modern application future. With 7 years of experience in Kubernetes and Cloud Native, I understand the business impact, limitations and stumbling blocks faced by organisations. I love all things tech, especially cloud native and Kubernetes, and really enjoy helping customers realise business benefits and outcomes through great technology. I spent a year living in Portugal, where I really started to understand clear communication and how to present difficult topics clearly. This has helped me articulate complex problems to clients and show them why I am able to help move their organisations forward through innovation and transformation. My industry knowledge is focused on Cloud Native and Kubernetes, with vendors such as VMware Tanzu, Red Hat OpenShift, Rancher, AWS, Elastic and Kafka.

Solution Focus: Infrastructure Automation


Posted on February 12, 2020April 17, 2022 by Andrew Hill

Automation is a big part of LSD's solution toolbox, so we decided to take a closer look at it and explain the concept in a little more detail. Customers and people interested in the solution typically ask a ton of questions, so we roped in one of our automation experts, Andrew Hill (aka "Krow" in-house), to answer some of the most common ones and give some insight into infrastructure automation projects.

A little bit about Andrew’s background in infrastructure automation:

Krow has been in this space for quite a while, mostly involved in automation projects focused on provisioning infrastructure and pipeline tools. He gets called in when companies want to reduce the time their technical people spend provisioning, deploying and maintaining infrastructure. Andrew works with many different automation tools depending on the task at hand, so he has a deep understanding of the problems companies face and the tools that can fix them.

What does a common solution look like?

This all depends on the customer's problems. Typically, you'd start off by building and configuring a basic ('vanilla') server setup, which ends up in a Docker container. This image becomes the template for all the other infrastructure you'll be provisioning (depending on the requirements), so once this process is completed, it can be deployed thousands of times with the exact same configuration. In Andrew's projects, Red Hat Satellite normally features to give him a complete view of the estate (all the infrastructure involved). The different servers can have tags applied to them that indicate their function, e.g. a web server, so that all the tools required for a web server can be automatically installed and kept as an image for future web server deployments. This creates an internal standard so that configurations don't differ depending on which technician built the machine. With a standard image, plus other images configured for specific roles, your tech people have the power to deploy in a matter of minutes from a single interface.
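The tag-driven image idea can be sketched as follows. The role names and package lists are hypothetical, purely to illustrate how a base configuration plus role tags yields identical builds every time:

```python
# Hypothetical sketch: a base ("vanilla") configuration plus role tags
# that determine what gets layered on top, so every server with the same
# role is built identically regardless of who deploys it.
BASE_PACKAGES = ["openssh", "chrony", "monitoring-agent"]
ROLE_PACKAGES = {
    "webserver": ["nginx", "certbot"],
    "database": ["postgresql"],
}

def build_manifest(role: str) -> list[str]:
    """The package list for a server carrying the given role tag."""
    return BASE_PACKAGES + ROLE_PACKAGES.get(role, [])

# Every deployment of the same role produces the same manifest:
assert build_manifest("webserver") == build_manifest("webserver")
print(build_manifest("webserver"))
```

In practice a tool like Ansible or Red Hat Satellite plays the role of `build_manifest`, but the principle is the same: the role tag, not the technician, decides the configuration.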

According to Andrew, there are a few challenges when starting out with an automation project. Most teething problems revolve around the customer environment. In some cases, getting all the credentials and access to sources can hold the process up a bit, but this is completely dependent on the customer and their policies. As with any new solution or technology entering such an environment, there will be cases where security personnel want to ensure that everything in the automation solution adheres to their internal policy. In one case, Andrew explained, security personnel sat down with him and went through the automation process step by step to investigate how it functions and what it accesses. That isn't doom and gloom, though: once everything is greenlit, newly deployed machines don't need to be inspected, because they are set up to be identical to the original.

The good stuff: results that customers are seeing by starting the process.

The main benefit that always gets mentioned is the valuable time saved by technical resources who used to provision, configure and maintain infrastructure. Deployment times are drastically reduced, freeing up those resources to work on other revenue-generating tasks. The environment gets a benchmark so that infrastructure is standardised, which means troubleshooting takes less time because all the servers are configured the same. Another big benefit is rolling updates, patches and other files out to infrastructure automatically, instead of someone having to apply them via group policy or walk from desk to desk. And when something breaks? Just deploy a new one in minutes.

There are quite a few generic tasks that automation is used for, but a very interesting idea from Andrew is to automate tasks that are either done on a regular basis (like running a couple of queries or scripts) or that run with large stretches of time between events. For example, someone might need to run a script once a week to generate a particular result. Even if the task takes only 15 minutes, done once a week it amounts to an hour of someone's time every month. Similarly, if a script needs to run every six months, with other projects running in the meantime, a technical resource sometimes has to get re-acquainted with the process before it can be kicked off. Automating these tasks saves that resource an hour every month and well over a full working day each year. It may seem like a small amount of time, but imagine how many such tasks are going on in your business right now; together they can amount to DAYS of saved time a year.
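The arithmetic in that example is worth making explicit. Assuming roughly four runs a month and an eight-hour working day (both assumptions, not figures from the interview):

```python
# The weekly-script example, spelled out.
MINUTES_PER_RUN = 15
RUNS_PER_MONTH = 4           # roughly once a week
MONTHS_PER_YEAR = 12
HOURS_PER_WORKING_DAY = 8    # assumed working day length

hours_per_month = MINUTES_PER_RUN * RUNS_PER_MONTH / 60
hours_per_year = hours_per_month * MONTHS_PER_YEAR
working_days_per_year = hours_per_year / HOURS_PER_WORKING_DAY

print(hours_per_month)        # 1.0
print(working_days_per_year)  # 1.5
```

One small task, automated, buys back a day and a half of working hours a year; multiply that by every such task in the business and the savings add up quickly.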

We asked Andrew if he has worked on a cool automation project that was a little different from the standard use case, and he had a great one in mind. On one project, automation was used to generate reports from multiple sources, but that wasn't the main benefit: the time taken to produce a report shrank from three hours to just over twenty minutes. This was useful because some of these reports were needed in meetings, but they sometimes took too long to generate and required planning to have ready at the right times. Now a report can be generated on much shorter notice. Another handy project was creating a custom front end for a customer, with infrastructure specifications in drop-down menus so that users could select their specs and deploy machines through a wizard-like process.

The technology used in Automation

At LSD, our automation projects normally include tools like Ansible and Jenkins, so we had Andrew tell us what they do and what he likes about them. Ansible is a configuration manager with an orchestration component, whereas Jenkins is a pipeline tool with hundreds of plugins for performing tasks in the pipeline process. What Andrew likes about Ansible is that it feels built from a DevOps person's point of view, whereas similar tools like SaltStack and Chef have a strong developer focus. That makes it easy for people in the DevOps environment to make sense of everything, keeping the learning curve to a minimum. This all depends, of course, on the person setting up the automation tasks and the tasks the project requires, and many different tools can be used to complete the job.

Hopefully this has given you a better idea of infrastructure automation: the tools used, how it is implemented, how projects typically work and the benefits your business will experience by having automation in place. We'll have more automation content up soon, including a comparison of some of the popular tools.

You can find Andrew Hill on LinkedIn for more of his automation expertise, and we’d like to thank him for taking the time to share this info with us. If you have any questions or would like to add anything, please get in touch on our Contact Us page!

Andrew Hill

When I am not wearing my Batman T-shirt to impress my kids, I am finding ways to add value to the lives of those in my circle of influence, whether at home or at work. Each day is an opportunity to learn something new, solve the challenges life leads you to, and create or maintain meaningful relationships with the people you encounter in the process. Being a senior consultant for LSD Information Technology means I get to be part of a team that focuses on staying up to date with the latest open source technologies and applications. Using what I learn every day in the open source space of banking and telecommunications, I have been able to set up process standards on the automation and compliance side of infrastructure. The experience gained from this covers tools such as Ansible, Chef, Git, Red Hat Satellite, Red Hat Virtualization, clustering and OpenSCAP. With my love for problem solving, especially first-world challenges such as how to stay #1 on the office Joust arcade scoreboard, I am able to find the root cause of issues quickly and set up solutions to mitigate them. My job titles have ranged from Front Line Phone Tech Ninja through to "Hold your money for you institute" Linux Team Lead, making me comfortable flying solo as well as collaborating with or leading a team of techies. When, at the end of a day, I can show people the advantages of open source, automation and setting standards for their business, I know that my work is giving value.

