Author: Seagyn Davis

Experienced Software Engineer with a demonstrated history of working in different industries. Skilled in JavaScript (React/Node) and PHP (Laravel/WordPress).
Red Hat Hackfest Part 2: Setting Up The Hardware, SNO And RHEL For Edge

Posted on February 4, 2022 (updated April 22, 2022) by Seagyn Davis
INSIGHTS

This is Part 2 in the Red Hat Hackfest review series. Click here to read Part 1.

After finding out about the use case and receiving the hardware (thanks again Red Hat, Intel and IBM), we set off on getting the base software installed on the devices. Setting up the hardware consisted of installing Single-node Openshift on the Intel NUC and RHEL (Red Hat Enterprise Linux) for Edge on the Fitlet2.

We initially looked at using PXE to install the OS on each unit, but the time sink was too large alongside our normal day-to-day jobs (never mind the additional hardware requirements).

The installation process also took a turn for the worse when we realised we needed an HDMI cable and, ironically, our large boxes of cables were without one.

INSTALL SINGLE-NODE OPENSHIFT (SNO) ON THE INTEL NUC

Single-node Openshift is a proof of concept by Red Hat to experiment with deploying Openshift in environments where it is not feasible to implement large compute resources. This is ideal for an edge location like a factory, where you may not have space to set up a mini datacenter/server room with multiple hosts and a virtual environment to run fully-fledged Openshift. It does come with a warning that it isn’t HA (a single control plane node and etcd store, etc.) and that it isn’t officially supported by Red Hat yet, but this was fine for our use case.

Once we had an HDMI cable (and an additional USB keyboard for good measure) at hand, we proceeded to install SNO on the Intel NUC. There is an in-depth guide on the QioT website here, but the basic premise was as follows:

  • Download a Discovery ISO (a technology preview, so beware), selecting SNO as the installation option, from https://console.redhat.com/openshift/
  • Burn the ISO to a USB flash drive (see the sketch below)
  • Boot the Intel NUC from the USB flash drive
  • Go back to the Red Hat Console and finalise the installation from there
  • Eventually, the Console will let you know that you need to remove the USB flash drive
  • Update/configure DNS
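
For the "burn the ISO" step, any disk-imaging tool will do. A minimal sketch, assuming a Linux workstation and that the flash drive shows up as /dev/sdX (the device name and ISO filename are placeholders, not values from the guide):

```
# Write the Discovery ISO to the flash drive -- this wipes everything on it.
# Confirm the device name with `lsblk` first; /dev/sdX is a placeholder.
sudo dd if=discovery_image_sno.iso of=/dev/sdX bs=4M status=progress conv=fsync
```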

The installation took about 30 minutes, although we ended up repeating it a few times later on when we wanted to change hostnames and/or base domains for the project.

INSTALL RHEL FOR EDGE ON THE FITLET2

A use case we could prove out for customers was running RHEL for Edge on the Fitlet2. This is ideal for a customer who wants an enterprise-grade distribution with enterprise-level support on their edge devices.

Installing RHEL for Edge was relatively simple although, again, we would have preferred a local PXE server with which we could preconfigure our devices (imagine hundreds of machines needing to be deployed). There is an in-depth guide to installing RHEL for Edge on the Fitlet2 here, but the main premise is:

  • Boot the Fitlet2 and enter the BIOS
  • Make sure the date/time are correct, turn off Secure Boot and update the boot priorities (HDD, USB, etc.)
  • Download the RHEL 8 ISO from https://access.redhat.com/downloads
  • Create a bootable USB flash drive
  • Insert the microSD card and USB flash drive into the Fitlet2 and boot
  • During boot, change the boot parameters to target the RHEL for Edge image (see the sketch below)
  • The installer should then run through by itself
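
We won't reproduce the guide's exact values here, but as an illustration of the boot-parameter step: RHEL installs are commonly automated with a kickstart file, and a RHEL for Edge kickstart deploys an ostree commit rather than individual RPMs. A hypothetical sketch (the label, URL and ref are assumptions, not the guide's values):

```
# At the GRUB menu, press `e` and append a kickstart pointer to the kernel line,
# e.g. inst.ks=hd:LABEL=RHEL-8-4-0-0-BaseOS-x86_64:/edge.ks (label/path assumed).
# The kickstart's key directive then pulls the RHEL for Edge (ostree) image:
ostreesetup --nogpg --osname=rhel --remote=edge \
    --url=http://image-builder.example.lan/repo --ref=rhel/8/x86_64/edge
```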

Once installed, there wasn’t much we needed to do on the actual device other than configuring the WiFi.
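
Configuring the WiFi comes down to a couple of NetworkManager commands. A quick sketch with a made-up SSID and passphrase:

```
# List visible access points, then connect (SSID and passphrase are placeholders).
nmcli device wifi list
nmcli device wifi connect "factory-wifi" password "s3cret-passphrase"
```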

DNS AND PORT-FORWARDING

Because LSD Open is a distributed company (we’re not even remote, because we have no office to be remote to), we had to expose the environment to the greater team. To do this we configured public DNS entries to point at the network it was in (using dynamic DNS).

We then added port forwarding on the router into said network to forward requests to the SNO (the API route for Openshift and the various routes for services like AMQ via AMQP), as sketched below.
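
The exact configuration depends on the router, but the mapping looked roughly like the following, expressed here as iptables DNAT rules purely for illustration (the internal IP is a placeholder; 6443 and 443 are the standard OpenShift API and router ports, and 5671 is the usual AMQPS port):

```
# Forward inbound traffic to the SNO node (192.168.1.10 is a placeholder).
iptables -t nat -A PREROUTING -p tcp --dport 6443 -j DNAT --to-destination 192.168.1.10:6443  # OpenShift API
iptables -t nat -A PREROUTING -p tcp --dport 443  -j DNAT --to-destination 192.168.1.10:443   # *.apps routes
iptables -t nat -A PREROUTING -p tcp --dport 5671 -j DNAT --to-destination 192.168.1.10:5671  # AMQ (AMQPS)
```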

Ideally, we would have set up a separate network using WireGuard or the like, allowing everyone to hop onto it when they needed to work on the environment. Again, this is something we would have needed more time to execute on.


Red Hat Hackfest Part 1: Building an Edge Computing Use-Case For Hackfest

Posted on January 25, 2022 (updated April 22, 2022) by Seagyn Davis
INSIGHTS

BUILDING AN EDGE COMPUTING USE-CASE FOR HACKFEST

Editor’s note – We are very proud of Team LSD winning Red Hat Hackfest 2021. The team comprised Seagyn Davis and Julian Gericke, who put all of this together over the month of November and bowled the judges over with their winning project. To give more context about their project, we asked Seagyn and Julian to write a series of blog posts unpacking it in detail. Please enjoy Part 1:

We found out about the Red Hat Hackfest from the Red Hat South Africa Partner Channel Manager (thanks Ziggi!). It’s essentially a hackathon spanning a few weeks, in which teams set out to build an element (or a few elements) of a much larger blueprint put together by the Red Hat Hackfest team.

The use-case for this iteration of Hackfest was edge manufacturing, with the basic architecture being a central data center (an Openshift cluster), a factory edge location (Single Node Openshift running on an Intel NUC) and machinery simulated on IoT devices (a Fitlet2 running RHEL for Edge).

[Diagram: Red Hat Hackfest architecture]

The Intel NUC and the Fitlet2 were supplied to us so a big shout out to Red Hat, IBM and Intel for sponsoring and supporting the Hackfest to make this happen.

HACKFEST USE CASE

The premise of the Hackfest was an interesting blend of IoT and Cloud Native tech to support a scalable t-shirt manufacturing process. Hackfest participants were asked to deploy a fleet of containerized factory-level services onto the Single-node OpenShift environment.

A machinery service was to be implemented on edge devices – in our simulated environment this was the Fitlet2 running RHEL for Edge – which would facilitate the industrial control side of T-shirt production.

Red Hat implemented datacenter (or plant) level services running within an OpenShift cluster to register and orchestrate multiple factories, capture factory metrics, and implement the higher-level business logic a typical global T-shirt manufacturing behemoth would require on its way to world domination (of the T-shirt manufacturing vertical, at least).


HARDWARE SPECS

The hardware specs on each device were pretty amazing. The Intel NUC had 6 cores, 64GB of RAM and 250GB of M.2 SSD storage – more than enough to run Single Node Openshift (SNO) and the runtime simulations/services for the factory. The Fitlet2 had an Atom x5 processor and 4GB of RAM, and we were supplied with a 64GB SD card to run in it.

SOFTWARE SPECS

The Intel NUC was set up with Single Node Openshift (more on SNO and setting it up in the next post). We ran Openshift 4.8 purely because a version of Red Hat AMQ that we needed to run was not yet compatible with Openshift 4.9.

On the Fitlet2 we set up RHEL (Red Hat Enterprise Linux) for Edge 8.4. Both RHEL and RHEL for Edge provide enterprise-level features and support.

OUR UNDERSTANDING AND THE PLAN

Because of the complex architecture, it took a while for us to get our bearings and figure out what needed to be done. Fortunately, there were regular drop-in clinics where we could ask questions and find out how certain things needed to be done.

Ultimately, the minimum requirement was for us to get the provided software and services operating and to deploy a simulated machine service on the edge device. After that, anything we did would be additional work and more points towards potentially winning the Hackfest.

Our plan was to use the example machinery service provided by the Hackfest team and add a metrics emitter that would push onto a metrics queue on the Factory. We would then create a metrics service on the Factory, which we could use to build graphs and give observability into the factory and the machines running there (a rough sketch of the idea follows).
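
As a rough, hypothetical illustration of that idea (not the actual Hackfest code), emitting a single machine metric onto a "metrics" queue could look like this with the amqp-tools CLI, assuming a broker reachable at factory-broker (all names and values below are made up):

```
# Publish one machine metric to the factory's "metrics" queue.
# Broker URL, routing key and payload are all illustrative.
amqp-publish --url=amqp://factory-broker:5672 --routing-key=metrics \
    --body='{"machine":"fitlet2-01","temp_c":41.3,"units_produced":128}'
```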

That concludes Part 1 of the Red Hat Hackfest review. Part 2 will be published soon!


Why Every Line of Code You Write Should Be Under Version Control

Posted on June 15, 2021 (updated April 22, 2022) by Seagyn Davis
INSIGHTS

There was a time in my career when I did not use any form of version control. Well, actually, I did: I copied the folder I was working on and appended a date to it. I survived, and chances are that if you do the same, so will you. However, when I finally made the move to Git (the version control system of choice), it made my life a lot better and updating code a lot easier.

Here are a few reasons why you should keep your code under version control:

DISTRIBUTED BACKUP

Keeping a copy of your code is always a good idea. Keeping a copy of the code you’re currently busy with is even better. A distributed version control system like Git allows you to store your main code and your work in progress on a server or service (like Gitlab or Github). This means that a stolen or corrupted device doesn’t lead to a loss of code.

To add to that, because Git is distributed, every person on your team who has cloned the project also has a copy of the code. It would take a catastrophic failure for a complete loss of code.
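
Getting that backup takes only a handful of commands. A minimal sketch, using a hypothetical Gitlab project URL:

```
git init                                # start tracking the project
git add -A && git commit -m "Initial commit"
git branch -M main                      # name the default branch "main"
git remote add origin git@gitlab.com:example/my-project.git   # placeholder URL
git push -u origin main                 # the code now lives on the server too
```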

MERGING CHANGES TO CODE

I’m slightly ashamed to admit this, but the first project I ever worked on didn’t have version control (you already know this), and when it came time to release code to our production servers, I would reference a list of all the files I had changed (on a sticky note, most likely) and then upload those files to our servers. I would also hope that someone else hadn’t updated a file I was about to write over without me knowing.

Nightmare. It was also the cause of many issues and bugs, especially on production servers.

Git, and really any version control system, handles the above scenario very well. Whether you use a branching strategy or not, when you merge code into another branch or push to a remote branch, it will first do a comparison. This can lead to a few scenarios (see the sketch after this list):

  • your code creates no conflicts and will be merged/pushed into the target,
  • your code doesn’t have the latest updates and you’ll need to fetch them first, which happens with no issue, and you can then push/merge your changes,
  • or your code doesn’t have the latest updates and the updates you are pulling contain changes to code you also changed (known as merge conflicts). Once you resolve these, your code can be merged/pushed.
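
Those three scenarios map onto a handful of everyday commands. A sketch, assuming a branch named main on a remote named origin:

```
git push origin main        # scenario 1: no conflicts, the push just works
git fetch origin            # scenarios 2 and 3: grab the latest updates first
git merge origin/main       # merges cleanly, or stops and marks conflicts
# On a conflict, edit the marked files to resolve them, then:
git add . && git commit     # conclude the merge
git push origin main
```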

For me, working in a team of people, being able to resolve these kinds of conflicts is probably the greatest feature of any version control system.

EASIER COLLABORATION

This may seem like a complete about-turn on what I was saying about Git being a distributed version control system, but using a central service like Gitlab or Github vastly improves and simplifies the process of onboarding new developers. This makes collaboration easier and opens up great practices like merge/pull requests with peer reviews.
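
In practice, onboarding and collaborating then looks something like this (the project URL and branch name are made up):

```
git clone git@gitlab.com:example/my-project.git   # placeholder URL
cd my-project
git switch -c feature/better-logging              # branch off for your change
git push -u origin feature/better-logging
# ...then open a merge/pull request from this branch for peer review.
```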

I don’t even remember how I first shared code with colleagues but I’m guessing it wasn’t as easy as asking them to clone a project from Gitlab.

CONTINUOUS INTEGRATION AND DELIVERY/DEPLOYMENT

One of the coolest processes a tool like Git opens up for you, especially when using something like Gitlab, is CI/CD (continuous integration and continuous delivery/deployment). Automating what can be the most time consuming and stressful part of the software development life cycle (SDLC) is what makes platforms like Gitlab and Github so powerful and your version control system is the backbone of all those processes.
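
Stripped of platform specifics, the automation amounts to this: on every push, a runner checks out your exact commit and executes your build, test and deploy steps, and only a fully green run ships. A hypothetical sketch of what a single CI job effectively runs (the script names are placeholders for whatever your project uses):

```
# Roughly what a CI runner does on each push (all names are illustrative):
git clone --depth 1 git@gitlab.com:example/my-project.git && cd my-project
./scripts/build.sh    # compile/package the application
./scripts/test.sh     # a failure here fails the whole pipeline
./scripts/deploy.sh   # deliver/deploy only if the steps above succeed
```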

Git is an important part of my toolset and I firmly believe that it should be used by every person writing code, whether it’s for applications or for infrastructure. If you’re new to Git or you haven’t even heard of it, Gitlab has a great write-up on getting started with Git.

If you have any questions around version control, Git, CI/CD or even Gitlab, reach out to me on Twitter or LinkedIn and I’ll gladly answer any questions. Feel free to let me know where I can improve this post as well.

Seagyn Davis

Experienced Software Engineer with a demonstrated history of working in different industries. Skilled in JavaScript (React/Node) and PHP (Laravel/WordPress).

“Run PHP and Composer commands via Docker” via seagyndavis.com

Posted on February 24, 2021 (updated April 22, 2022) by Seagyn Davis
INSIGHTS

Seagyn Davis joined LSD OPEN’s technical team in 2021 and his contribution has already been felt. Seagyn also has his own blog, on which he publishes mostly technical pieces that we’ll feature here on the LSD People Blog from time to time. Here’s an excerpt from the post; you can read more on his blog by clicking the button below:

“I moved to a new machine recently and have been actively making sure that I don’t have any reliance on the existence of this machine. All the config that I usually need is stored in a git repo along with a bunch of other useful things like aliases, common stuff I need to be installed, and my ssh config.

Along with this, I also don’t want to have to install every bit of software on the planet. This made me search for a better way to use the PHP CLI and Composer. For me, this made sense to run them via Docker. If you use Laravel, this means you can run Artisan commands via Docker as well.”
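
The gist of the approach is to run the interpreter from a container instead of installing it. A minimal sketch using the official php and composer images from Docker Hub (the tags are examples; pick whatever versions your project needs):

```
# Run PHP from a container, mounting the current directory as the working dir.
docker run --rm -it -v "$PWD":/app -w /app php:8.1-cli php -v

# Wrapped as shell aliases, `php` and `composer` feel locally installed:
alias php='docker run --rm -it -v "$PWD":/app -w /app php:8.1-cli php'
alias composer='docker run --rm -it -v "$PWD":/app -w /app composer:2'
composer install   # runs Composer inside the container against ./
```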

Read his blog piece here.

