Published
April 8, 2025

Shipping apps for airgap? Turn the full stack into a bootable image

Dan Roessner
Senior Solution Architect

How do you deploy a tangle of dependencies across the airgap?

In the Department of Defense (DoD) space, many organizations outsource their software development to contractors. 

These contractors are effectively independent software vendors (ISVs). They’re fully responsible for designing, building, and deploying software often intended for airgapped environments.

While that might sound straightforward, the reality is complex. Airgapped environments are isolated from the internet. You can’t remotely manage the clusters or servers located in them. And infrastructure deployed in airgapped sites can’t reach out to software registries in the wider world to pull down a new container or library.

This challenge is compounded because, much of the time, the customer — often a government agency — is expected to set up the necessary infrastructure and prerequisites ahead of time. And not all customers have much experience building Kubernetes clusters.

I recently encountered this issue while working with a partner of Spectro Cloud. They offer a cloud-native application for government customers, but it requires a pre-built Kubernetes cluster with an ingress controller already configured. 

More often than not, the partner had to step in to deploy and configure the Kubernetes environment on behalf of the customer. That was costly, slow, and logistically painful.

This is exactly the kind of challenge Spectro Cloud’s EdgeForge Workflow can solve.

The solution? Spectro Cloud’s EdgeForge Workflow

EdgeForge is the workflow we use to prepare a new edge host (that is, a server) with all the components and dependencies it requires to run its workloads, from the base OS to user data.

(It’s worth noting at this point that although we originally built EdgeForge with the edge in mind, customers are also using it in data center and cloud environments – tech has a way of taking on a life of its own!)

EdgeForge Workflow diagram

In our traditional use case, EdgeForge starts with building a custom installer ISO for deployment on the edge device. The edge device boots, then connects to Spectro Cloud’s Palette platform to build a Kubernetes cluster, pulls down the rest of the software it needs, and settles in under management by Palette.

While this is a powerful capability, it assumes that the Palette management platform is accessible over the internet, or is itself installed on the local network at each customer site. That doesn’t always fit the way our partners deploy their software directly to their customers’ airgapped sites.

But EdgeForge has an answer to this. One of EdgeForge’s key features is the ability to package all dependencies from a Spectro Cloud Cluster Profile into a single ISO image. The result is an all-in-one installation file for your application, from OS and K8s distribution to the app itself.

So we can use the EdgeForge workflow to bundle the ISV’s complete software and dependencies into that single ISO file, simplifying the process of transferring the app into airgapped environments. Everything is local.

Example Cluster Profile used with EdgeForge
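
To make that concrete, here is a rough sketch of what an EdgeForge build can look like using the public CanvOS tooling. The argument names follow the CanvOS documentation as of this writing, the registry, tag, and ISO name are placeholders, and the exact options (including how Cluster Profile content gets embedded for airgap use) may differ between versions, so treat this as illustrative and confirm against the EdgeForge docs.

```bash
# Sketch only: building an installer ISO with CanvOS, the tooling behind the
# EdgeForge workflow. Values below are placeholders; argument names may vary
# between CanvOS versions, so verify them against the EdgeForge documentation.
git clone https://github.com/spectrocloud/CanvOS.git && cd CanvOS

cat <<'EOF' > .arg
CUSTOM_TAG=airgap-demo
IMAGE_REGISTRY=registry.example.com    # placeholder registry for provider images
OS_DISTRIBUTION=ubuntu
OS_VERSION=22                          # Ubuntu 22.04
K8S_DISTRIBUTION=rke2
ISO_NAME=partner-appliance-installer
ARCH=amd64
FIPS_ENABLED=false
EOF

# Build the provider images and the installer ISO in one pass
./earthly.sh +build-all-images
```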

It all starts with the boot

Once you have the all-in-one ISO file from EdgeForge, you can use it just like a CD-ROM. At the customer site, attach the ISO to the target server or virtual machine and boot from it. The system will automatically install the operating system you selected during the build process.
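
On physical servers you would typically mount the ISO through the BMC’s virtual-media feature; for a quick lab run on a Linux/KVM host, something like the following works. The VM name, resource sizes, and ISO path are placeholders.

```bash
# Hypothetical KVM lab test: create a VM, attach the EdgeForge ISO as a virtual
# CD-ROM, and boot from it. Name, sizes, and paths are placeholders.
virt-install \
  --name edgeforge-demo \
  --memory 16384 --vcpus 4 \
  --disk size=200 \
  --cdrom ./partner-appliance-installer.iso \
  --os-variant ubuntu22.04 \
  --network network=default
```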

EdgeForge currently supports the following operating systems:

  • Red Hat Enterprise Linux (RHEL)
  • RHEL with FIPS enabled
  • Ubuntu
  • Ubuntu with FIPS enabled

After installation is complete and the server reboots, you’ll see a prompt similar to the example below:

Attaching the ISO to a server or virtual machine and booting from it

This indicates that the server is up and running. At this point, you can open a web browser and access the Palette Local UI Console using the address shown in the prompt. You'll be asked to log in with a user account. This user can either be pre-configured during the EdgeForge build process or set up interactively during the first boot. 
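
As a sketch of the pre-configured option: the Edge installer consumes a Kairos-style cloud-config user-data file at build time, so an operating system user can be baked into the image along the lines below. The username, password, and groups are placeholders, and whether Local UI accepts this exact account depends on your Palette version, so check the docs before relying on it.

```bash
# Sketch only: a Kairos-style cloud-config fragment that creates an OS user at
# install time. Username, password, and groups are placeholders; confirm the
# exact schema and how Local UI authenticates against the Palette Edge docs.
cat <<'EOF' > user-data
#cloud-config
stages:
  initramfs:
    - users:
        operator:              # hypothetical local account
          passwd: change-me    # replace with a real secret before building
          groups:
            - sudo
EOF
```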

Next step: build a cluster and deploy the app

After logging in to Local UI, navigate to the Clusters tab in the left-hand menu and click the Create Cluster button. This will launch a step-by-step wizard to help you configure and deploy your Kubernetes cluster and applications.

Let’s walk through the wizard:

  • On the Basic Information page, enter a name for your cluster.
  • On the Cluster Profile page, select “Use embedded config”.
  • On the Profile Config page, you’ll see fields for any custom configuration variables defined during the Cluster Profile build process.

These custom variables can be mapped to any layer within the profile. For example, the variable shown below is mapped to the MetalLB layer. This allows you to tailor deployments to the specific needs of each site, all while using a single ISO image across multiple destination environments.

Map any custom variable within the profile
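
As an illustration, the fragment below shows how a load balancer address range could be fed in from a profile variable rather than hard-coded. The key name and the `{{ .spectro.var.* }}` placeholder syntax are assumptions on my part; check the MetalLB pack’s schema in Palette for the real field names.

```bash
# Illustrative fragment of a MetalLB layer's values, with the address range
# supplied per site through a profile variable instead of being hard-coded.
# Key name and placeholder syntax are assumptions; verify against the pack docs.
cat <<'EOF' > metallb-values-fragment.yaml
addressPool: "{{ .spectro.var.lb_ip_range }}"   # e.g. 10.10.50.100-10.10.50.110
EOF
```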

On the Cluster Config page, you have the option to provide NTP servers and SSH keys for the cluster. The SSH keys enable remote access to the edge device, but both fields are optional. The only required field on this page is the Virtual IP Address (VIP). This IP address is used for the Kubernetes API server, allowing you to interact with the cluster using tools like kubectl or k9s.

Setting up SSH keys for remote access to the edge device
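
Once the cluster is up, that VIP becomes the API endpoint your kubeconfig points at. A quick, generic way to confirm it (the address and filename below are placeholders):

```bash
# Generic check, not EdgeForge-specific: the VIP should appear as the
# Kubernetes API endpoint. Address and kubeconfig filename are placeholders.
kubectl --kubeconfig ./edge-cluster.kubeconfig cluster-info
# Example output: "Kubernetes control plane is running at https://10.10.50.10:6443"
```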

On the Node Config page, you’ll configure the Kubernetes nodes. In this example, we're creating a single-node cluster, but EdgeForge also supports multi-node configurations if needed.

Since we’re using just one node, make sure the Allow Worker Capability option is enabled, and remove the worker-pool section. This configuration allows application pods to run on the control-plane node — a setup that is typically disabled by default but necessary in single-node clusters.

Configuring your node
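
Once the cluster is deployed, you can sanity-check this with a generic kubectl query: a single-node cluster that can run application pods should show no NoSchedule taint on its control-plane node.

```bash
# Generic check, not EdgeForge-specific: a schedulable single-node cluster
# should report no taints on its (only) control-plane node.
kubectl describe nodes | grep -i taints
# Expected: "Taints:  <none>"
```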

Finally, continue to the Review page, where the wizard checks your configuration for errors. If everything is in order, click the Deploy Cluster button.

Deploying the cluster

Now, we wait. The deployment typically takes 20–30 minutes. During this time, the system will:

  • Reboot the device
  • Bring up the Kubernetes cluster
  • Deploy Harbor (a container registry)
  • Import all necessary container images into Harbor
  • Deploy each application layer defined in the Cluster Profile

Deployment overview

Once complete, Local UI’s Cluster Overview page will display the status of your cluster. You’ll see a list of running services, each with clickable links to access the deployed applications. 

Additionally, a kubeconfig file will be provided, which you can use with command-line tools like kubectl and k9s to manage your newly created Kubernetes cluster.
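
For example (the kubeconfig filename is a placeholder):

```bash
# Point your tools at the downloaded kubeconfig; the filename is a placeholder.
export KUBECONFIG=$PWD/edge-cluster.kubeconfig
kubectl get nodes                 # the single node, in Ready state
kubectl get pods -A               # workloads from every Cluster Profile layer
k9s                               # interactive terminal UI, if installed locally
```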

Upgrading your cluster and applications

Upgrading your cluster and applications with EdgeForge is straightforward. 

Through the Local UI, you can upload a new Cluster Profile and content bundle, both created during the EdgeForge workflow and carried across the airgap. 

Once uploaded, the Local UI imports updated container images from the content bundle into the Harbor repository and automatically begins upgrading each layer of the cluster as needed — until the update is fully applied.

Better for customers, better for software vendors

At the start of this blog I mentioned how a partner prompted us to explore this use case for EdgeForge. Our architects are currently working closely with that partner, which expects to make EdgeForge its primary method for delivering software to airgapped customer environments.

As you've seen, the EdgeForge workflow enables you to build and configure your application using Cluster Profiles in Palette. Because Cluster Profiles provide repeatable configurations, you can thoroughly test and troubleshoot the exact software stack in your own lab before it ships, rather than reacting to issues on site with something assembled inside the airgap.

EdgeForge also allows you to package the entire Kubernetes cluster and application stack into a single ISO file, significantly reducing the complexity and number of files required for deployment. There are no loose bundles of files that have to be mounted and installed in just the right order. With EdgeForge and Local UI’s user-friendly, wizard-based interface, even customers with little to no Kubernetes experience can deploy your software — getting it up and running in hours instead of days.

If you'd like to learn more about the EdgeForge workflow or dive deeper into the build process, please visit our docs or book a 1:1 demo with one of our experts.

Tags:
Security
Edge Computing
Using Palette