Published
June 25, 2024

Simple edge Kubernetes device setup for your field engineers

Yuliia Horbenko
Senior Technical Writer

Edge after the honeymoon

According to our 2024 State of Production Kubernetes Report, 38% of Kubernetes adopters are running clusters at the edge today, up from 31% in 2023. 

As edge AI, IoT devices, edge data processing, and other trends gather momentum, we expect adoption to grow even more. The benefits of edge computing are just too compelling.

But as you start to build out your edge deployments, you’ll run into challenges. Adopters tell us they’re most concerned about maintaining security, performing Day-2 operations, and… the cost of field engineering visits.

Disconnected sites mean truck rolls

Field engineering visits are an intrinsic part of managing edge computing deployments. Edge computing is a distributed model, so someone needs to leave the centralized data centers and physically be there (wherever ‘there’ is) to plug in the box and get it running, with all the edge applications deployed on it. 

And they may need to come back to perform upgrades or troubleshooting, whether that’s weekly, monthly, or yearly. 

That’s because many edge Kubernetes deployments happen in environments with little to no internet connectivity, whether by necessity or design. If an environment has to operate without the internet, more often than not it’s done for security. Think military, manufacturing, power plants, the Pawnee Kernston's Rubber Nipples factory, etc.

When there’s no network connection, there’s no remote access. You won’t get a chance to SSH into the edge devices or have someone local email the error logs over to you. 

No: when it comes to deploying, managing, and troubleshooting Kubernetes at the edge, you have to have someone on site. 

But who? It can’t be any junior IT person, let alone a frontline soldier, power-plant worker or retail store manager. That someone has to know their way around Kubernetes.

Sending experts on site? Sounds expensive

And here’s the rub. There are very few field technicians who can proudly boast their CKAs, CKSs, and other Kubernetes certifications, and they’re way too expensive to be going on road trips whenever there’s an edge node failing somewhere.

One of the people we interviewed for the 2024 State of Production Kubernetes Report put it nicely, with an air of ‘cross your fingers and hope’:

The biggest challenge for us is that nobody on site has any idea about the technology… We also have some use cases where they have no internet access.

So we’re working on solutions to mainly run these scripts on these clusters and see if we can get that to work.

— DevOps Engineer, Manufacturing

Making edge K8s so easy, the experts can stay at home

If you’re a regular reader, you’ll already know that we’re big on making Kubernetes simpler, particularly at the edge. You may be familiar with our low-touch and no-touch onboarding, and our rolling upgrade capability that reduces the risk of downtime.

But in our recent releases, we’ve been digging into the challenges that operators face when managing disconnected sites in particular, where field engineering visits are unavoidable. 

Our goal is to simplify the activities that need to happen on site, to reduce time spent, and to make operations easy enough that less specialized, less expensive staff can perform them. 

We do that in lots of ways — like Local UI — but today we’re going to walk through one of the underlying concepts we’ve developed: cluster profile variables.

What is a cluster profile variable?

First we need to back up a little and explain what a Cluster Profile is.

Cluster Profiles are how Palette models all the software that makes up your Kubernetes clusters, from the OS and Kubernetes distributions all the way to ingress, observability, and security tools. 

You can build Cluster Profiles from our extensive library of Palette Packs, Helm charts, manifests, and Zarf packages.

[Image: building Cluster Profiles from the library of Palette Packs]

There are three flavors of Cluster Profiles: infrastructure, with your OS, Kubernetes, CNI, and CSI; add-ons, with your specific applications; and full, which combines the two. 
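
To make this concrete, here’s a rough sketch of what a full profile’s layer stack covers. The field names and pack choices below are purely illustrative, not Palette’s actual schema:

```yaml
# Illustrative sketch of a 'full' Cluster Profile's layer stack.
# Names are invented for clarity; this is not Palette's real format.
name: edge-store-profile
type: full
layers:
  - os: ubuntu-22.04
  - kubernetes: palette-extended-kubernetes   # PXK
  - cni: calico
  - csi: local-path-provisioner
  - addons:
      - ingress-nginx     # ingress
      - prometheus        # observability
      - falco             # security
```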

Once you have a basic set of Cluster Profiles, you can reuse them across any number of clusters and… you guessed it, edge devices!

The problem: retyping values across a YAML hellscape

Even though Cluster Profiles are an amazing capability, they don’t address the fact that your field engineers will still have to go in and manually edit the cluster configuration to enter device-specific details across loads of different YAML files. 

For example, each edge device would have its own host name, IP address, passwords, and other secrets, hidden behind obscure names in a dozen or so YAML configs.
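
To picture the problem, imagine values like these scattered across separate layer configs. The file and parameter names are invented for illustration:

```yaml
# infra layer values (one file among many)
hostname: edge-store-0042       # unique per device
staticIP: 10.10.42.17           # unique per device

# add-on layer values (a different file entirely)
registry:
  password: "s3cr3t-0042!"      # retyped by hand, easy to get wrong
```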

As you’d expect, when an engineer is crouched with their laptop in a hot and dusty cabinet at the back of a factory, or on an oil rig in the middle of the ocean, they are likely to make mistakes. 

A finger slips when entering an IP address, a property is missed, some password characters melt into each other, and, as a result, the cluster just doesn’t come up — cue lots of debugging and wasted time.

The solution: define, validate, propagate

Cluster profile variables take this pain away by allowing you to define some parts of cluster configuration as placeholders. 

They’re built with data validation in mind, so you can define cluster profile variables to expect the standard strings, numbers, and booleans, but also IPv4, IPv6, IPv4 CIDR, and passwords, or even go wild with regular expressions (regex).
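
As a rough sketch of what such definitions might capture (the field names are illustrative, not Palette’s exact schema):

```yaml
# Hypothetical variable definitions with validation attached.
variables:
  - name: pod_cidr
    description: Pod network range for this device
    format: ipv4cidr                # only a valid IPv4 CIDR is accepted
    required: true
  - name: registry_password
    format: password
    required: true
    masked: true                    # hidden when typed in the UI
  - name: site_code
    format: string
    regex: "^[A-Z]{3}-[0-9]{4}$"    # e.g. NYC-0042
```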

When applying a Cluster Profile with variables, your field tech will need to enter each required value only once, and Palette will propagate it to every field where the variable appears, across all of the Cluster Profile layers.

Combine this with the built-in reusability of Cluster Profiles, and you get an abstracted cluster config that you can reuse any number of times, further customize during each deployment, and safeguard against most types of human error.

And, perhaps most importantly, cluster profile variables enable people with near-zero Kubernetes knowledge to configure and spin up clusters at the edge, while your experts stay back at HQ.

Let’s see how cluster profile variables work in action, shall we?

From a cluster profile to your edge device

We’re going to create a cluster profile variable for pod CIDR, one of the most common configurations you’d want to define for an edge device.

If you’re not very familiar with what pod CIDRs are, here’s a refresher.

Kubernetes clusters run your application as a set of pods, each handling a different part of the app. For those pods to communicate and make your app work, they need to be on the same network.

Kubernetes uses a Container Network Interface plugin (the CNI layer in your Cluster Profile) to maintain pod networks and assign IP addresses to pods. There are a bunch of CNIs to choose from, and you’ve probably heard of a few, such as Flannel, Calico, and Cilium. Our very own Palette eXtended Kubernetes distribution works with most CNIs out there.

While the specific configuration might differ, the baseline is that the cluster administrator (or your field technician) provides the CNI with a range of IP addresses that can be assigned to pods. This range is called the pod CIDR.

Because you might have multiple edge devices working on the same network, you’ll want to assign a distinct pod CIDR to each edge device to avoid conflicts.
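
If you’ve ever set this by hand, it may have looked something like Calico’s IPPool resource. The names and ranges here are examples only:

```yaml
# Example: the pod CIDR as it appears in a Calico IPPool resource.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: edge-device-a-pool
spec:
  cidr: 10.244.0.0/16    # device A; device B might get 10.245.0.0/16
  natOutgoing: true      # SNAT pod traffic leaving the pool
```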

This is where cluster profile variables truly shine.

Imagine yourself as an experienced platform engineer (sat in an air-conditioned office!) creating an Edge Cluster Profile. You see a tab called Variables, and click on it to open a menu with some fields to fill out.

First, you specify a name, description, and a YAML definition for your variable. Then, you pick what data type the variable should expect—in our case, that’s IPv4 CIDR. You can also add different types of validation, such as making the variable required, or providing it with a default value. If you’re making a password variable, for instance, you’ll want the value to be required and masked for security reasons.

Here’s what our variable definition looks like.

[Image: overview of the variable definition]

Once the variable is ready, you’ll reference it in the CNI layer config in the format parameter: "{{.spectro.var.variable_name}}" and click Confirm Updates.
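
For our pod CIDR example, the CNI layer might end up with a line like this. The parameter name is hypothetical; the placeholder syntax is Palette’s:

```yaml
# A made-up CNI parameter referencing the pod_cidr variable.
calicoNetworkCIDR: "{{.spectro.var.pod_cidr}}"
```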

Note that our current variable is quite specific, but the beauty of cluster profile variables is that you define them only once per Cluster Profile, and then reuse, reuse, reuse—as many times as you want to.

So, with the Edge Cluster Profile ready and your cluster profile variables defined, you’ll export the Cluster Profile and send it on its merry way to your edge location, where your field technician will apply it to the necessary edge devices.

During the edge device setup, the Palette Local UI — a graphical interface deployed as part of your edge instance — will prompt your field tech to enter the required pod CIDR value. As opposed to, you know, opening the CNI cluster profile layer, figuring out all sorts of YAML stuff, locating the pod CIDR parameter, and entering the value by hand.

[Image: profile variables in the Local UI]

Once your field tech finishes the cluster setup, Palette will get to work and spin up your cluster.

If you ever decide to update the software running on your edge devices and make any changes to the profile variables, you’ll go through these steps in a new Cluster Profile version, and your field tech will be prompted to revise the profile variables during the cluster update.

Why are cluster profile variables awesome, actually?

Well, because they add another layer of reusability to your Cluster Profiles and don’t require your field technicians to manipulate the cluster profile YAML during deployment. It’s always better to have a simple UI field to fill in than a wall of code to edit.

If you’d like to learn more, check out the Palette Docs on Cluster Profiles, cluster profile variables, and the whole range of Edge features Palette comes with.

You can also watch our Palette Local UI demo to see more examples of our simplified field experience in action.

Tags:
Cluster Profiles
Using Palette
Edge Computing
Research