Category: News

Vim To VS Code – A Story About An RHCA Who Became A TKGi Platform Developer

Posted on April 7, 2022 (updated June 7, 2022) by JP
INSIGHTS

Today, I’m a TKGi platform developer and for the most part I develop Concourse CI pipelines. Specifically, I build and maintain pipelines that build foundations that other pipelines use to build custom Kubernetes clusters!

For a while now, I’ve been meaning to blog a bit about my own personal “digital transformation”. I went from a highly sought-after Red Hat Certified Architect to a mere sysadmin seemingly overnight. Why? Because DevOps! It was all the IT job market wanted!

I’d been hearing about this DevOps thing for a while and, like most people, didn’t quite understand it. Then I got to know the theory but hadn’t actually experienced it. I was going for interviews and one technical interview stood out. I was given a laptop with VS Code and the Ansible plugin and had to deploy a whole lot of infrastructure!! In previous interviews, I’d happily been able to get away with using Vim but it was clear that Vim, on its own, just wouldn’t scale for a massive Git repo full of Infrastructure-as-Code. It was then I realised I had to escape Vim and get with it!

Before I knew it my contract was up and I had to act fast! Luckily there was a Pivotal PKS (VMware TKGi) opportunity waiting for me. I had no idea what I was getting into but I wanted to learn DevOps and Kubernetes so I leapt right in.

Pivotal held a three-month Dojo engagement where the team and I were transformed into Agile/XP DevOps ninjas. I’d read about the DevOps culture but, like the Matrix, I couldn’t understand it until I experienced it for myself. I was thrown into the deep end of Agile/XP DevOps culture, GitOps, Platform and Cloud Native spaces (all of it “DevOps” for short). It’s a vast myriad of software, and soft skills to boot. For those destined for digital transformation: Buckle up! It’s no wonder you need an LSD solution! 😉

Now that I’m in the TKGi DevOps space I don’t worry about anything at OS level anymore AT ALL!! VMware Tanzu Ops Manager (BOSH under the hood) plays a big part in simplifying automation because it manages your servers for you. You never have to worry about server inconsistencies or patching: just roll out a new stemcell (a packaged OS “base”) using BOSH and, a few coffees later… Presto! All your workloads are rebuilt to spec and running on a new OS with zero workload downtime! We don’t even have user accounts on servers; temporary accounts are supplied by BOSH and they automatically self-destruct on exit. I very rarely have to log into servers these days because BOSH is in control.
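
To give a sense of how little ceremony is involved, a stemcell rollout is essentially two CLI calls. This is a minimal sketch; the stemcell URL, deployment name and manifest path are placeholders:

# Upload a new stemcell (a packaged OS base) to the BOSH director
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-vsphere-esxi-ubuntu-xenial-go_agent

# Redeploy; BOSH recreates the deployment's VMs on the new stemcell
bosh -d my-deployment deploy manifests/my-deployment.yml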

TKGi and Concourse CI, loaded with the Platform Automation Toolkit plus Terraform, play a large part in rolling out your VMs and supporting infrastructure too. Everything, including deployment of infrastructure, servers and Kubernetes clusters, is automated through Concourse pipelines. Configuration files and manifests (like Helm values and tfvars) are all template-driven and are interpolated by Concourse (with secrets and vars) in build containers before being deployed to foundations and clusters.
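
As an illustration of that interpolation, here is a minimal sketch of a Concourse pipeline. The resource, job and variable names are assumptions; the ((...)) placeholders are filled in from vars files or a credential manager:

resources:
  - name: platform-config
    type: git
    source:
      uri: ((git_uri))        # injected from a vars file or credential manager
      branch: main

jobs:
  - name: deploy-cluster
    plan:
      - get: platform-config
        trigger: true          # run whenever the config repo changes
      - task: apply-config
        file: platform-config/tasks/apply.yml
        params:
          FOUNDATION: ((foundation_name))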

Sandbox is the most important environment because that’s where all the work is done before being automatically promoted to other environments. All the code starts on the Sandbox foundation and gets promoted through the environments to the Production foundation via Git. There is no code variance between foundations or clusters, except the ones we know about!

Honourable mentions: Helm, Carvel kapp and Argo CD combined with TKGi and Concourse pipelines make for a formidable Kubernetes deployment technique indeed! I can’t stress it enough: EVERYTHING IS AUTOMATED! This is GitOps and Infrastructure-as-Code to the max!

BUT (that’s a big BUT)…

I’d be remiss not to mention that it’s really the DevOps/XP/Agile culture that makes it easy to progress steadily through iterations. This culture requires commitment to a way of working: to automation, to regular ceremonies (standup, pre-IPM, IPM and retro) and to pair programming, and that is what makes the platform rock-solid stable.

So yeah, now I’m a Platform DevOps Engineer doing development and operations on a “platform as a product” that can deploy “clusters as a product” to be consumed en masse.

It’s been awesome to see the platform grow steadily over time and nothing beats that feeling of good cadence! It was a necessary move into the DevOps space but a welcome one. I can see why DevOps is the future and I’m happy to be living the dream of a completely automated cloud native GitOps platform to deploy Kubernetes clusters on! An amazing journey indeed! Now I’m busy looking into TKG which essentially replaces BOSH with Cluster API. Watch this space.

I hope that something from this post will help nudge DevOps-wary sysadmins a little into the future!

Jean-Pierre Pitout

I'm a Certified Kubernetes Administrator who has worked with Linux and Open Source since 2006. I embrace Open Source, Agile, XP and DevOps methodologies and have consulted for a number of companies over the years, giving me insight into a variety of infrastructures. I've had to learn how they run, maintain them and implement new ones. I'm able to run with and complete projects while setting expectations and following due process along the way. Currently I'm doing Agile/XP, DevOps and cloud stuff around VMware TKGi and work a lot with Kubernetes. I'm interested in Python, automation, data processing and machine learning. On the softer side... mentorship, empowerment, making and designing things.

Enneagram: Understanding LSD’s People

Posted on March 2, 2022 (updated April 17, 2022) by Charl Barkhuizen
INSIGHTS

In the last couple of years LSD has been using the Enneagram, an archetypal framework that gives insight into individuals and their personalities. It resonates the most with us as a team because it doesn’t just stick a label on someone that classifies them for their entire tenure here. Let me explain what it is and why we love it:

IMAGINE A HOUSE

This is your house and you have many different rooms in your house.  You have your room which you are most familiar with.  You know what is in there, you know where to find stuff, you know where you might have hidden stuff that you don’t want anyone else to know about or find.  You may even be so comfortable and familiar in your room that you have stopped noticing some of the things around you.  You feel safe in your room and things are pretty predictable in there.

The other rooms in your house vary from known and familiar to completely unknown and maybe even unopened.  Rooms that you have never ventured into or even tried to unlock.  You may have a fear of entering some of these rooms, maybe you have tried once before but it felt too scary and uncertain to hang around in there.  You may be completely disinterested in some of the rooms and there is nothing that stimulates an intrigue about unlocking and entering that door.  And there are other rooms that are fascinating and you are drawn to spending more time in there, exploring and finding little treasures that belong to you, for the first time in your life.

THIS IS LIKE THE ENNEAGRAM

The Enneagram is a personality profiling tool which is used as an assessment of personality. It can tell you many things about yourself and the people around you. It is multidimensional and fluid in its expression. What I love about the Enneagram is that it does not put you in a box, but it does show you the box that you are already in and how you can get out of it. Amongst other things, it gives you an indicator of what motivates you, what you might think, act and feel like under stress, and it offers you a pathway for growth. According to the Enneagram there are 9 different personality types and each of us will default to one of those types. However, as we grow in self-awareness we discover that we are all flavoured a little differently, and as we get to know ourselves and understand the other types we can become more compassionate and embracing towards the responses and reactions of others.

So back to our imagination journey in your house and why I said that it’s like the Enneagram.

Your room that you are most comfortable in and most familiar with is your default personality type. This is the part of your personality that you will understand the most.  More than likely it is the part of your personality that others will observe and know to be YOU.  The “above the surface” part of you that you present to the world.  The part of you that you are unconsciously living out.

But as we all know, self-awareness is massively important, especially within a work context. This is where we spend the majority of our waking hours and this is where we can become tired and stressed and triggered by our colleagues. The more we understand ourselves, the better we are able to perform at work and the better we are able to get along with our team members. Ironically, the more we understand ourselves the more we begin to understand other people, and the more we grow in empathy and compassion for those we live and work with.

Charl Barkhuizen, Marketing Plug-in

I'm the marketing plug-in and resident golden retriever at LSD Open. You can find me making a lot of noise about how cool Cloud Native is or catch me at a Tech & Tie-dye Meetup event!

Red Hat Hackfest Part 2: Setting Up The Hardware, SNO And RHEL For Edge

Posted on February 4, 2022 (updated April 22, 2022) by Seagyn Davis
INSIGHTS

This is Part 2 in the Red Hat Hackfest review series. Click here to read Part 1.

After finding out about the use case and receiving the hardware (thanks again Red Hat, Intel and IBM), we set off on getting the base software installed on the devices. Setting up the hardware consisted of installing Single-node OpenShift on the Intel NUC and RHEL (Red Hat Enterprise Linux) for Edge on the Fitlet2.

We initially looked at using PXE to install the OS on each unit, but the time sink for this was too large within the scope of our normal day-to-day jobs (never mind the additional hardware requirements).

The installation process also took a turn for the worse when we realised that we needed an HDMI cable and, ironically, our large boxes of cables were without HDMI.

INSTALL SINGLE-NODE OPENSHIFT (SNO) ON THE INTEL NUC

Single-node OpenShift is a proof of concept by Red Hat to experiment with deploying OpenShift in environments where it is not feasible to implement large compute resources. This is ideal for an edge location like a factory, where you may not have space to set up a mini datacenter/server room with multiple hosts and a virtual environment to run fully-fledged OpenShift. It does come with a warning that it isn’t HA (one control plane, one etcd store, etc.) and that it isn’t officially supported by Red Hat yet, but this is fine for our use case.

Once we had an HDMI cable (and an additional USB keyboard for good measure) at hand, we proceeded to install SNO on the Intel NUC. There is an in-depth guide on the QioT website here, but the basic premise was as follows:

Download a Discovery ISO (a technology preview, so beware), selecting SNO as the installation option, from https://console.redhat.com/openshift/

Burn the ISO to a USB flash drive

Boot the Intel NUC from the USB flash drive

Go back to the Red Hat Console and finalise the installation from there

Eventually, the Console will let you know that you need to remove the USB flash drive

Update/configure DNS (see the example records after this list)
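
On the DNS side, SNO expects the standard OpenShift records to resolve to the single node. A minimal sketch, with the cluster name (sno), base domain (example.com) and IP address as placeholders:

api.sno.example.com.      IN A 192.168.1.10 ; Kubernetes API
api-int.sno.example.com.  IN A 192.168.1.10 ; internal API
*.apps.sno.example.com.   IN A 192.168.1.10 ; application routes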

The installation took about 30 minutes, although we redid it a few times further along when we wanted to change hostnames and/or base domains for the project.

INSTALL RHEL FOR EDGE ON THE FITLET2

A use case we could prove to customers was using RHEL for Edge on the Fitlet2. This is the ideal scenario for a customer who wants an enterprise-grade distribution with enterprise-level support on their edge devices.

Installing RHEL for Edge was relatively simple although, again, we would have preferred a local PXE server with which we could preconfigure our devices (imagine hundreds of machines needing to be deployed). There is an in-depth guide to installing RHEL for Edge on the Fitlet2 here, but the main premise is:

Boot the Fitlet2 and enter the BIOS

Make sure the date/time are correct, turn off Secure Boot and update boot priorities (HDD, USB, etc.)

Download the RHEL 8 ISO from https://access.redhat.com/downloads.

Create a bootable USB flash drive

Insert microSD card and USB flash drive into Fitlet2 and boot

During boot, change the boot params to point at the RHEL for Edge installation target

The install should then run through by itself (a sketch of the kind of kickstart involved follows this list)
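
Edge installs like this are typically driven by a kickstart that pulls the OS from an rpm-ostree commit. A minimal sketch, assuming an ostree repo built with Image Builder and served at a placeholder URL:

lang en_US.UTF-8
keyboard us
timezone Etc/UTC
zerombr
clearpart --all --initlabel
autopart --type=plain
# Pull the OS from an rpm-ostree commit (URL and ref are placeholders)
ostreesetup --nogpg --osname=rhel --remote=edge --url=http://10.0.0.2/repo --ref=rhel/8/x86_64/edge
reboot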

Once installed, there wasn’t much we needed to do on the actual device other than configuring the WiFi.

DNS AND PORT-FORWARDING

Because LSD Open is a distributed company (we’re not even remote, because we have nothing to be remote to), we had to expose the environment to the greater team. To do this we configured public DNS entries to point to the network it was in (using dynamic DNS).

We then added port forwarding on the router into said network to forward requests to the SNO (the API route for OpenShift and the various routes for services like AMQ via AMQP).

Ideally in this scenario we would have set up a separate network using WireGuard or the like, allowing everyone to hop onto it when they needed to work on the environment. Again, this is something we would have needed more time to execute on.

Seagyn Davis

Experienced Software Engineer with a demonstrated history of working in different industries. Skilled in JavaScript (React/Node) and PHP (Laravel/WordPress).

Red Hat Hackfest Part 1: Building an Edge Computing Use-Case For Hackfest

Posted on January 25, 2022 (updated April 22, 2022) by Seagyn Davis
INSIGHTS

BUILDING AN EDGE COMPUTING USE-CASE FOR HACKFEST

Editor’s note: We are very proud of Team LSD winning Red Hat Hackfest 2021. The team comprised Seagyn Davis and Julian Gericke, who put all of this together over the month of November and bowled the judges over with their winning project. To give more context, we asked Seagyn and Julian to write a series of blog posts unpacking the project in detail. Please enjoy Part 1:

We found out about the Red Hat Hackfest from the Red Hat South Africa Partner Channel Manager (thanks Ziggi!). It’s basically a hackathon spanning a few weeks, in which we set out to build an element (or a few elements) of a much larger blueprint put together by the Red Hat Hackfest team.

The use-case for this iteration of Hackfest was edge manufacturing, with the basic architecture being a central data center (an OpenShift cluster), a factory edge location (Single-node OpenShift running on an Intel NUC) and machinery simulated on IoT devices (a Fitlet2 running RHEL for Edge).

Red Hat Hackfest diagram

The Intel NUC and the Fitlet2 were supplied to us, so a big shout-out to Red Hat, IBM and Intel for sponsoring and supporting the Hackfest to make this happen.

HACKFEST USE CASE

The premise of the Hackfest was an interesting blend of IoT and Cloud Native tech to support a scalable t-shirt manufacturing process. Hackfest participants were asked to deploy a fleet of containerized factory-level services onto the Single-node OpenShift environment.

A machinery service was to be implemented on edge devices – in our simulated environment this was the Fitlet2 running RHEL for Edge – which would facilitate the industrial control side of T-shirt production.

Red Hat implemented datacenter (or plant) level services running within an OpenShift cluster, to register and orchestrate multiple factories, capture factory metrics and implement higher-level business logic in support of what a typical global T-shirt manufacturing behemoth would require towards world domination (of the T-shirt manufacturing vertical).

World Domination meme

HARDWARE SPECS

The hardware specs on each device were pretty amazing. The Intel NUC had 6 cores, 64GB of RAM and 250GB of M.2 SSD storage: more than enough to run Single-node OpenShift (SNO) and the runtime simulations/services for the factory. The Fitlet2 had an Atom x5 processor and 4GB of RAM, and we were supplied with a 64GB SD card to run in it.

SOFTWARE SPECS

The Intel NUC was set up with Single-node OpenShift (more on SNO and setting it up in the next post). We ran OpenShift 4.8 purely because of a dependency on a version of Red Hat AMQ we needed that was not yet compatible with OpenShift 4.9.

On the Fitlet2 we set up RHEL (Red Hat Enterprise Linux) for Edge 8.4. Both RHEL and RHEL for Edge provide enterprise-level features and support.

OUR UNDERSTANDING AND THE PLAN

Because of the complex architecture, it took a while for us to get our bearings and figure out what needed to be done. Fortunately, there were regular drop-in clinics that allowed us to ask questions and find out how certain things needed to be done.

Ultimately, the minimum requirement was for us to get the provided software and services operating and to deploy a simulated machine service on the edge device. After that, anything we did would be additional work and more points towards potentially winning the Hackfest.

Our plan was to use the example machinery service provided by the Hackfest team and add a metrics emitter that would push onto a metrics queue on the factory. We would then create a metrics service on the factory, which we could use to create graphs and give observability into the factory and the machines running there.

That concludes Part 1 of the Red Hat Hackfest review. Part 2 will be published soon!

Seagyn Davis

Experienced Software Engineer with a demonstrated history of working in different industries. Skilled in JavaScript (React/Node) and PHP (Laravel/WordPress).

2021 In Review: A Message From LSD

Posted on December 20, 2021 (updated April 21, 2022) by Charl Barkhuizen
INSIGHTS

It’s been a year!

We hope that you’re doing okay and keeping safe and that there’s some time off on the cards for you over the festive season. Before you unplug and go off the grid, we’d like to take a moment to look back at LSD’s year with you.

Since our departure from EOH last year, we’ve been spending a lot of time working on who we are as a company and where LSD wants to be in the future. Our team grew to 40 amazingly talented people, a record number for us. We doubled down on Kubernetes and launched our own fully managed Kubernetes platform as a service, which has been really well received by our clients.

Deciding on Kubernetes and cloud native was an easy choice based on the team’s skillset and our history with open source technologies. To grow a step further in that direction, we buffed our skills even further by encouraging multiple team members to achieve Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD) status. Together, these certifications enabled LSD to become a Kubernetes Certified Services Provider (KCSP) and a silver member of the Cloud Native Computing Foundation (CNCF).

We are also part of a wider cloud native ecosystem and focused on growing key relationships with partners that have been there from the start, and some new ones that we met along the way. This was a monumental year for partnerships, from achieving our Red Hat Certified Cloud and Services Provider (CCSP), GitLab Select and Managed Services Partner and SUSE Managed Services Partner statuses to growing fresh relationships with partners like Snyk.

LSD also made a splash in the news with some of our dealings in cryptocurrency earlier this year, where we brought Bitcoin onto our books as a store of value and rolled out the LSD Stimulus Package, an initiative to start our team members off with an investment into a managed cryptocurrency account through BitFund.

This year also saw the launch of AHOY, an LSD-developed open source release manager for Kubernetes, based on Argo and Helm. It’s something that we’re really proud of and can’t wait to see people solving problems with it. Next year, we are planning on adding even more features to the tool and showing it off on some bigger platforms to spread awareness.

Right at the end of the year, LSD struck another highlight by winning the Red Hat Hackfest, an international competition where teams create end-to-end solutions over a four-week period. Team LSD, consisting of Seagyn Davis, who bravely led the team through uncharted waters, Julian Gericke and other highly skilled LSDians, wowed the judges with their amazing skills and bagged the number one spot. We are very proud of you!

As you can see, it was a big year from a work point of view, but we also had a lot of fun along the way and expect to do even more cool things next year. Thank you for your role in making LSD what it is today; we couldn’t have done it without you. We look forward to doing even bigger things and building more meaningful relationships together next year. Enjoy your festive season break, take care of yourselves and stay safe.

See you in 2022,

The LSD Team

Charl Barkhuizen, Marketing Plug-in

I'm the marketing plug-in and resident golden retriever at LSD Open. You can find me making a lot of noise about how cool Cloud Native is or catch me at a Tech & Tie-dye Meetup event!

Ansible – Solving my Documentation Pet Peeve

Posted on September 13, 2021 (updated April 21, 2022) by Charl Barkhuizen
INSIGHTS

As a DevOps engineer, I find myself far more comfortable iterating over lists or writing conditional statements than finding fancy words to document my code. Since Ansible is designed to be easy to read and understand, you can almost get away without documenting anything… except variables.

As with any code, there are best practices that make it easier for teams to collaborate, like keeping your playbooks readable and following conventions for naming variables. One thing I see skipped over all too often is documenting your variables so that the next person who wants to use them understands what is going on.

Ansible has a staggering number of places where you can define variables, each with its own precedence rules over the others. If another DevOps engineer has to inherit your plays and roles, it can be time-consuming to find and figure out all the variables.
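
To make precedence concrete, here is a minimal illustration: role defaults sit at the bottom of the precedence order and extra vars passed on the command line sit at the top, so the extra var wins (the paths and values are examples):

# roles/myrole/defaults/main.yml (role defaults, lowest precedence)
home_address: "1 Default Street"

# Extra vars (highest precedence) override the default at run time:
# ansible-playbook site.yml -e home_address="2 Override Avenue"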

In an effort to help you properly document variables, I recommend three things:

1. CREATE A ‘README.MD’ FILE THAT EXPLAINS NOT ONLY HOW TO USE THE PLAY OR ROLE, BUT ALSO EVERY SINGLE VARIABLE USED: ITS LOCATION, PURPOSE AND WHETHER YOU CAN EXPECT THE VARIABLE VALUE TO BE OVERWRITTEN.

Vars:
  Name: "{{ home_address }}"
  Location: ./roles/defaults/main.yml
  Overwrite: true
  Purpose: Used in the Jinja2 template, customer_info.j2.
    This is a default variable with a low precedence, designed to be
    overwritten by variables defined in your playbook.

2. ADD COMMENTS IN YOUR PLAYS AND ROLES WHEREVER YOU DECLARE A VARIABLE.


# file: group_vars/all
# For data synchronisation from the server to localhost
local_source_folder: /users/sitedev
remote_production_folder: /home/site/prod
# app name to look for in the local registry
app_name: dingbat
# image name to search for in the local image registry
image_name: "wingman/{{ app_name }}"
# version to search for in the local image registry
version: 4

3. FINALLY, TRY TO MINIMIZE WHERE YOU DECLARE YOUR VARIABLES.

The fewer places they are declared in, the easier it will be for the next person to find and keep track of them. These are some easy ways to document your variables and solve a pet peeve of mine. Do you have any documentation tips? Please share them with me in the comments below!

Charl Barkhuizen, Marketing Plug-in

I'm the marketing plug-in and resident golden retriever at LSD Open. You can find me making a lot of noise about how cool Cloud Native is or catch me at a Tech & Tie-dye Meetup event!

My Journey with LSD

Posted on August 11, 2021 (updated April 22, 2022) by Amor Pienaar
INSIGHTS

I remember my first interview with LSD IT in April 2015 – I was sitting in the boardroom waiting for our CEO at that time, the dear and beloved Sven Lesicnik, to finish an important call with one of our clients.

As I was working in a typical corporate environment at the time, I was dressed to a T: black corporate suit, hair done up all nice and flawless makeup. Little did I know… (slight chuckle).

Sven entered the boardroom and that’s when it all started for me. He told me more about the company, answered questions that I had, and of course, he had some questions of his own for me. It felt like we just got along from the get-go, but that feeling had very little to do with me. It was the culture of LSD that Sven brought into that meeting room. He would have made anyone feel more comfortable and at ease, as that was his way, and that is the LSD way. They take care of their clients, of their working family (the staff) and even new potential employees.

I remember feeling that the interview went really well, and I left with a feeling I couldn’t describe any other way than enthusiasm. Then the phone call I was waiting for from the recruitment agent came about two hours later. She sounded very excited and told me that they would like to arrange a second interview. Their only concern was my black suit: “Please dress down for this one”, she said with a chuckle in her voice. Two days later, I arrived wearing something much more casual, hair hanging down and just a smile for makeup.

I then met Stef (our current “Sometimes Adult in Charge”) for the first time. He had some standard interview questions, and I will never forget his words to me: “Remember that if we hire you, you would have to deal with Sven”. We both laughed, but I had no idea at the time what a fun and challenging experience that would be. The final part of the second interview consisted of finishing some “tests” that Sven gave me. I got the fright of my life because it was the first time I had seen a laptop with a Linux operating system, never mind knowing how to navigate around it. Sven kept reassuring me that they were not testing my Linux skills and I was so relieved about that, I can’t even tell you.

I then received the job offer of PA/Office Manager and started working for LSD on 30 June 2015. So far, it has been the greatest journey of my career. I remember the treasure hunt that they had me do; they did that with all new employees as a way to encourage learning where to find everything and everyone that would help you get through a working day. That’s how I became part of the LSD family. At times I felt a little bit lost, as LSD is anything but your typical corporate environment, but everyone stepped up and helped and made me feel at home. That is the LSD culture: they take care.

A few years passed and then the LSD family suffered a terrible loss. That is when I, and every one of our clients, partners and vendors, could see the true spirit and potential of what the LSD family is capable of. We pulled together. The main question on everyone’s lips at the time was “how can I help?”

We were forced to deal, forced to step up and forced to adapt – through no fault of our own. We had to learn new skills, take on new challenges, and we did that in true LSD style without dropping a single ball.

Today I look back and see the growth of the company and how immensely resilient we have become. I see a family and not just staff. I see a brother who lost a brother and who, through sheer bravery and dedication, has steered this well-oiled machine to become the best open source company in the world, because he took care of us and we took care of him.

I am proud to be an LSD Penguin, and privileged beyond comparison to be part of this family.

Amor Pienaar

Observability: Cloud-Native Deep Distributed Systems Insight With LSDobserve

Posted on July 23, 2021 (updated April 22, 2022) by Mark Billett
INSIGHTS

INTRODUCTION

Modern problems require modern solutions. Just a few years ago we noticed a trickle of interest from our clients in moving their old development and deployment methodologies to something more agile, something more modern, something more cloudy. We jumped on the containerisation bullet-train early on and have now become experts in the field of cloud-ready continuous development and integration, both in South Africa and globally. This is the core tenet of our LSDtrip vision for our clients and, as a CNCF member, we are uniquely positioned to provide services and support across a wide gamut of industries and verticals.

That initial trickle of interest became a groundswell and then most recently a veritable deluge. There is so much interest around cloud-ready and cloud-native computing that we can hardly keep up!

But in the modern era of distributed applications, microservices and hybrid-cloud deployments, legacy methods of monitoring are unsuitable. Traditional monitoring focuses mainly on attempting to handle predictable system outages; the old ways were centred on reacting to an issue that had already arisen.

CLOUD-NATIVE DEVELOPMENT

When an organisation decides to move to a more DevOps-flavoured development process, it starts by deconstructing a monolithic application into a set of independent microservices which, orchestrated as a whole, behave identically to the original system they were designed to replace. At least, that’s the idea. In reality, that process of deconstruction also introduces an across-the-board new layer of complexity, one that requires a proactive approach to dealing with issues.

Monitoring lets an organisation identify problems based on a predefined set of known failure states. You can’t know what you don’t know, and that is precisely where simple monitoring falls short. Monitoring is most definitely critical, but in a cloud-ready microservice world we need to layer more on top of it. We need to be able to identify problems that we do not know about; we need to know what we don’t know.

WHAT IS OBSERVABILITY?

Observability originated in engineering control theory. The level of observability of a system is measured by how well we can understand the internal state of that system from its external outputs. By layering various sets of instrumentation over the services that comprise a system, we can gain that insight. In fact, monitoring a microservice-based system isn’t really possible without proper observability first, because the system is greater than the sum of its parts and we must be able to understand every single part.

It is generally agreed in the industry that Observability consists of three major areas of interest:

  • Logs
  • Metrics
  • Distributed Tracing

Logs are generated by services to record events over time: how the service responded, which other services interacted with it, and any handled or unhandled exceptions the service experienced, amongst others.

Metrics are a set of time-series measurements taken in aggregate over the services that comprise a system. These discrete measurements include the likes of CPU usage, system load, memory usage, disk I/O rate, network traffic and disk usage volumes. They can be collected at a per-system or per-process level; in the cloud-native world, it’s important to have both.

Tracing provides insight directly into the inner workings of a distributed microservice at the functional level. It gives visibility into granular request rates, sources and destinations, even the payload content itself, and the ability to identify the functional components that contribute to performance degradation.

HOW DOES OBSERVABILITY IN K8S WORK?

LSD has pooled a set of best-of-breed tools into an engineered product that integrates the pillars of observability and the functions of monitoring into a build-once-deploy-anywhere solution for containerised microservice workload optimisation. It does this through metrics collection from the base to the canopy of a Kubernetes stack; log collection at the operating system, application, Kubernetes system and microservice levels; and distributed tracing at the source code level.

HOW DOES LSD PROVIDE OBSERVABILITY ENABLEMENT?

Through our dedicated team of Open Source Ninjas, DevOps Specialists and Engineers, we can enable turnkey full-stack insight into every aspect of a distributed system. Whether the system of microservices is deployed to on-premise Kubernetes clusters, cloud Kubernetes clusters or hybrid-cloud service orchestration systems, LSDobserve provides the full-stack observability and insight necessary. This enables rapid time-to-resolution, quick root cause analysis and preemptive identification of problems and issues.

LSDobserve provides this and more via bespoke Professional Services and fully managed cloud-native Kubernetes support services.

Mark Billett

Mark specialises in Elasticsearch Engineering, Elasticsearch Solution Architecture, Elasticsearch Data Modelling, 3DS Exalead, FAST ESP, Oracle Endeca, SOLR, Lucene, LucidWorks, Kapow Katalyst, Enterprise Search, Data Mining, DBA

Configuring Active Directory Authentication For Rancher 2.5.X

Posted on July 6, 2021 (updated April 22, 2022) by Zak McGregor
INSIGHTS

RANCHER CONFIG FOR AD

PRE-REQUISITES

Rancher allows several auth mechanisms to be used to authenticate users of the cluster. One of the trickier options to set up is the Microsoft Active Directory auth provider, due to some opaque settings.

Firstly, you’ll need a low-privilege account in your Active Directory (AD) that can list users and groups, and not much else. This is the account that Rancher will utilise to perform the account lookups and verification.

Once that’s in place, navigate to the Active Directory setup screen in your Rancher admin UI, while logged in as an admin-level user. This can be found under ‘Security → Authentication → Active Directory’.

Rancher AD config screen

You’ll be presented with a number of options. The trick is to get each section aligned with what your AD setup requires, and that can take patience, guesswork and luck, if you haven’t been explicitly given all the settings you’ll need.

SERVER SETTINGS

First, you’re asked to fill in the LDAP server or host. Enter the hostname in the field provided, and add a port of 389 for unencrypted or 636 for TLS (also referred to as LDAPS). If you are sure that your LDAP server uses TLS, tick the “Use TLS” box as well as setting the port to 636.

Rancher AD server details config screen

The connection timeout can usually be left as-is; only fiddle with it under exceptional circumstances, which are beyond the scope of this article.

BIND ACCOUNT

The next piece of information to provide is the service account username that the LDAP bind will be made as. This takes the form of username@domain.tld. You can also provide it in the NetBIOS format of domain\username. But don’t do that.

Next, enter the corresponding password in the password field provided.

The default login domain should be left empty, unless you specified the NetBIOS-style login information above. But don’t do that.

The user and group search base should look something like DC=x,DC=y,DC=z if your login domain is x.y.z. The search can be narrowed down further by restricting the base to a specific user group.
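
For example, if your login domain is corp.example.com, the search base would be DC=corp,DC=example,DC=com; prepending an OU, as in OU=Engineering,DC=corp,DC=example,DC=com, narrows the search to that organisational unit (the names here are illustrative).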

SCHEMA CUSTOMISATION

The trickier part starts now. You can customise the schema by supplying user and group search attributes, object classes and other bits of arcana that seem far too complex. Stick to the defaults to start off with, unless you have explicit information that you should change them. Most AD setups have not been changed here on the server side, so the common options should work.

I’ll quickly list the most common options for them below:

USERS

Object class: “Person”

Login attribute: “userPrincipalName”

Username Attribute: “name”

Search Attribute: “sAMAccountName”

Status Attribute: “userAccountControl”

Disabled BitMask: 2

GROUPS

Object Class: “group”

Name Attribute: “name”

Search Attribute: “sAMAccountName”

NB: Leave out the quotes above; they are there to show you exactly what should be entered for each field.

Rancher AD customize schema config screen

TEST ACCOUNT

Lastly, add a username and password in the section provided at the end to test the connection. If all goes well, you should see a successful bind and the AD setup will be saved.

Note that the account used to test Active Directory will inherit all global permissions, as well as the project and cluster role bindings of the local Rancher user.

Rancher AD account test

AUTHENTICATE RANCHER WITH AD

The last step is to click “Authenticate with Active Directory”. If successful, you will be greeted with a new screen summarising the integration details.

You will now be able to grant authenticated AD users access rights to global, cluster, project or namespace-level objects, allowing granular control over your Rancher Kubernetes environments.

Rancher AD setup complete summary screen

Zak McGregor

Why Every Line of Code You Write Should Be Under Version Control

Posted on June 15, 2021 (updated April 22, 2022) by Seagyn Davis
INSIGHTS

There was a time in my career when I did not use any form of version control. Well, actually, I did: I copied the folder I was working on and appended a date to it. I survived, and chances are that if you do the same, so will you. However, when I finally made the move to Git (the version control system of choice), it made my life a lot better and made updating code a lot easier.

Here are a few reasons why you should keep your code under version control:

DISTRIBUTED BACKUP

Keeping a copy of your code is always a good idea. Keeping a copy of the code you’re currently busy with is even better. A distributed version control system like Git allows you to store your main code and your work in progress on a server or service (like GitLab or GitHub). This means that a stolen or corrupted device doesn’t lead to a loss of code.

To add to that, because Git is distributed, every person in your team who has cloned the project also has a copy of the code. There would have to be a catastrophic failure for a complete loss of code.
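
In practice, that safety net is just a matter of committing and pushing regularly. A minimal sketch (the branch name and commit message are placeholders):

# Commit work in progress and push it to the remote, so a lost or
# stolen laptop doesn't take the only copy of your work with it
git checkout -b feature/new-invoice-flow
git add -A
git commit -m "WIP: invoice flow refactor"
git push -u origin feature/new-invoice-flow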

MERGING CHANGES TO CODE

I’m slightly ashamed to admit this, but the first project I ever worked on didn’t have version control (you already know this), and when it came time to release code to our production servers, I would reference a list of all the files I had changed (on a sticky note, most likely) and then upload those files to our servers. I would also hope that someone else hadn’t updated a file I was about to write over without me knowing.

Nightmare. It was also the cause of many issues and bugs, especially on production servers.

Git, and really any version control system, handles the above scenario very well. Whether you use a branching strategy or not, when you take code and either merge it into another branch or try to push it to a remote branch, Git will first do a comparison. This can lead to a few scenarios:

  • your code creates no conflicts and will be merged/pushed into the target,
  • your code doesn’t have the latest updates and you’ll need to fetch them first, which happens without issue, after which you can push/merge your changes,
  • or the updates you are pulling contain changes to code you also changed (known as merge conflicts). These need to be resolved before your code can be merged/pushed (a sketch of what this looks like follows this list).
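
A typical conflict round-trip looks something like this; the file name and values are illustrative:

git pull origin main
# CONFLICT (content): Merge conflict in checkout.js
# Git marks the clashing section inside the file:
#   <<<<<<< HEAD
#   const tax = 0.15;
#   =======
#   const tax = 0.14;
#   >>>>>>> origin/main
# Edit the file to keep the right version, then conclude the merge:
git add checkout.js
git commit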

For me, working in a team of people, being able to resolve these kinds of conflicts is probably the greatest feature of any version control system.

EASIER COLLABORATION

This may seem like a complete about-turn on what I was saying about Git being a distributed version control system, but using a central system like GitLab or GitHub vastly improves and simplifies the process of onboarding new developers. This makes collaboration easier and opens up great practices like merge/pull requests with peer reviews.

I don’t even remember how I first shared code with colleagues, but I’m guessing it wasn’t as easy as asking them to clone a project from GitLab.

CONTINUOUS INTEGRATION AND DELIVERY/DEPLOYMENT

One of the coolest processes a tool like Git opens up for you, especially when using something like GitLab, is CI/CD (continuous integration and continuous delivery/deployment). Automating what can be the most time-consuming and stressful part of the software development life cycle (SDLC) is what makes platforms like GitLab and GitHub so powerful, and your version control system is the backbone of all those processes.
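
As a taste, a minimal .gitlab-ci.yml is all it takes to wire version control straight into an automated pipeline. A sketch assuming a Node.js project; the stage names and scripts are placeholders:

stages:
  - test
  - deploy

test:
  stage: test
  script:
    - npm ci          # install dependencies from the lockfile
    - npm test        # fail the pipeline if tests fail

deploy:
  stage: deploy
  script:
    - ./deploy.sh     # placeholder deployment script
  only:
    - main            # deploy only from the main branch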

Git is an important part of my toolset and I firmly believe it should be used by every person writing code, whether it’s for applications or for infrastructure. If you’re new to Git or you haven’t even heard of it, GitLab has a great write-up on getting started with Git.

If you have any questions around version control, Git, CI/CD or even Gitlab, reach out to me on Twitter or LinkedIn and I’ll gladly answer any questions. Feel free to let me know where I can improve this post as well.

Seagyn Davis

Experienced Software Engineer with a demonstrated history of working in different industries. Skilled in JavaScript (React/Node) and PHP (Laravel/WordPress).
