Published
June 18, 2024

Can it run Doom? Palette is your BFG for secure edge Kubernetes apps

Lenny Chen
Technical writer

You’ve probably heard of “Can it run Doom?”, the viral movement to run the retro video game classic Doom on anything imaginable, from pregnancy tests to one thousand moldy potatoes.

As a technical writer at Spectro Cloud covering Palette Edge, and a gamer myself, I couldn’t resist the opportunity to use Doom to demonstrate how Palette can manage applications as Kubernetes workloads on the edge.

In this blog, I talk about my experience getting Doom to run on a Kubernetes cluster with Palette Edge, and how I modified the configurations to make the game compliant with Kubernetes’s default security policies.

What is Doom?

In case you are not familiar, Doom is a first-person shooter game developed and published by id Software in 1993. 

You play as a Marine who is deployed to a research facility on Mars where a failed experiment opened the portal to hell. Oops. As one of Earth’s finest fighters, you must shoot your way out of the facility, which has become overrun by demons. 

Doom’s cutting-edge graphics and innovative gameplay made it a blockbuster on release. Three decades later, it still boasts overwhelmingly positive reviews on Steam with a 10/10 score.

The game was open-sourced in 1997, which cemented its place as one of the most influential video games of all time. The open source release inspired the “Can it run Doom?” phenomenon, and now, just like Kubernetes, you can find Doom everywhere.

What are the challenges?

Many brave souls have already established that Doom will run on practically anything with silicon inside, so the question was never whether Doom would run on an edge device's hardware (in my case, an Intel NUC from 2012, pictured below).

The Intel NUC Mini computer I used to run Doom on Kubernetes

But there are still a few hurdles I’d need to overcome in this project:

  • Obtaining a containerized version of Doom, either by finding it or building it.
  • Abstracting the deployment of the application into a Palette Cluster Profile layer for declarative deployment and management.
  • Exposing the Doom application correctly on my local network so I can actually access it.

These are the same tasks that you’re likely to face when you’re exploring deploying your first workload to Palette — so with that in mind, this deeply unserious project will hopefully offer some valuable insight into running real workloads with Palette.

Step 1: Running Doom in a kind cluster

The first thing I did was look for an existing containerized image of Doom. A quick search turned up David Zuber's kubedoom repository, a great find that let me deploy a special version of Doom on Kubernetes. This version only contains the free shareware episode of the game; to play the whole game, you can still purchase this classic on Steam. Following the instructions in the repository, I quickly had an instance of Doom running in a local kind cluster on my laptop.

$ kubectl apply -k manifest/
namespace/kubedoom created
deployment.apps/kubedoom created
serviceaccount/kubedoom created
clusterrolebinding.rbac.authorization.k8s.io/kubedoom created
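One wrinkle with kind: kubedoom listens on port 5900 on the node, and a kind node is itself a container, so the game isn't automatically reachable from the laptop. You can port-forward with kubectl, or create the kind cluster with a config that maps the node's port 5900 to the host. The config below is my own sketch for illustration, not part of the kubedoom repository:

```yaml
# kind-config.yaml: map port 5900 on the kind node (where kubedoom
# listens) to port 5900 on the laptop.
# Create the cluster with: kind create cluster --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 5900
        hostPort: 5900
```

With that mapping in place, a VNC viewer pointed at localhost:5900 reaches the game directly.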

If something works on generic Kubernetes, Palette can manage it. I just needed to figure out how to refactor the manifests to work on a Palette cluster.

Step 2: Bringing manifests into a Palette profile layer

Palette uses Cluster Profiles to model the desired state of a Kubernetes cluster. Each layer in the profile is made up of Helm charts or Kubernetes manifests, which specify the workloads you need to run along with their configurations.

The concept of profiles is incredibly powerful. Because you can build a library of reusable Cluster Profiles, you get consistency across multiple cluster deployments. Access to the profiles is managed by Palette, for secure sharing and enterprise guardrails. And over time, you can iterate on and update existing profiles, giving you full version control. When it comes to day 2 operations activities like patching and upgrading clusters, profiles make it easy: change a profile, and all the clusters with that profile applied get changed too. To learn more, you can check out my colleague Adelina’s recent blog Simplify Kubernetes day 2 ops with Palette Cluster Profiles.

So: if I bring the Doom manifests into a Palette Cluster Profile, I will be able to provision Doom on any Kubernetes cluster running on any infrastructure provider, including my own little NUC.

Turning the manifests into a profile layer was very simple. All I had to do was copy each manifest, paste them into a single file, and use a divider to organize them. About 20 seconds later, I had myself an add-on Palette profile.
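Abbreviated here (the full manifest appears at the end of this post), the combined file looks like this, with each Kubernetes object separated by the standard YAML document divider:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubedoom
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubedoom
  namespace: kubedoom
# ...the remaining objects follow, each separated by ---
```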

Step 3: Provisioning the application with Palette

With the profile ready, the next step was to provision a cluster and fire up the app. I provisioned an AWS IaaS cluster and added the Doom profile created in the last step.

However, after 15 minutes, my cluster was still not operational. Usually, this means something is wrong. I downloaded the cluster's kubeconfig file and used K9s to view the state of the cluster. The kubedoom pod, containing the workload I needed, was refusing to come up.

I checked the events on the workload and saw the following error message:

Warning FailedCreate 15m replicaset-controller Error creating:
pods "kubedoom-6bfb65c8b4-vfshj" is forbidden: violates 
PodSecurity "baseline:v1.28": host namespaces (hostNetwork=true),
hostPort (container "kubedoom" uses hostPort 5900)

This told me that the security policy of Kubernetes was preventing the pod from being created, because the manifest I was using set hostNetwork to true.

Seeing this, I wondered why I had not run into the same issue when I provisioned the local kind cluster. I had not changed anything related to the default security standard or its configuration, so whatever error I was seeing now, I should have seen before.

Going back to the kind deployment, I realized that the kind cluster was using Kubernetes v1.23.0, while my AWS cluster was using v1.29.0. Aha!

Kubernetes v1.25 removed Pod Security Policies (PSP), which previously enforced pod security, and replaced them with the PodSecurity admission controller, which enforces the Pod Security Standards at three built-in levels (privileged, baseline, and restricted); on my cluster, baseline was enforced by default. It's most likely this change that led to the security errors on the newer version. (If you want to learn more about admission controllers, check out this video.)

With a solid hypothesis about the root cause, all I needed to do was adjust the security standard for my Doom cluster. I recalled that I had written the documentation for this issue a couple of months back, and found the solution in the Troubleshooting section of the Spectro Cloud docs: I needed to include a setting in the pack YAML that adjusts the security standard to privileged.

pack:
  namespace: "kubedoom"

  namespaceLabels:
    "kubedoom": "pod-security.kubernetes.io/enforce=privileged,pod-security.kubernetes.io/enforce-version=v1.29"

After making the changes, I re-deployed the cluster to AWS. This time the cluster came up and the pods were deployed without issue. The Doom service was exposed on port 5900 as a remote desktop and could be accessed with a Virtual Network Computing (VNC) viewer. Since I wasn't on the cluster's network in AWS, I had to set up port forwarding between my host machine and the cluster. But there it was: Doom, running on Kubernetes, right in front of me.

With deployment on AWS successful, I proceeded to deploy the application on my trusty little NUC. To suit the needs of an edge deployment, I built a custom image that contains the OS and Kubernetes distributions and versions of my choice through our EdgeForge workflow, and used the exact same add-on layer that I used in the AWS deployment. 

The beauty of using Cluster Profiles to manage Kubernetes workloads is that the approach scales endlessly: I could now provision this application on thousands of machines in different environments if I wanted to. In addition, because no communication is required between the edge host and the Palette management plane once the cluster has been provisioned, the game will continue to run just fine even if I disconnect it from the internet.

After about 15 minutes, the cluster entered the running state. Since the edge host is on my home network, I was able to access the game directly without having to set up port forwarding. 

Step 4: Making Doom more secure

At this point, I could declare the project complete, as I managed to deploy and run the game on an edge device on an up-to-date version of Kubernetes from Palette, as I set out to do. 

However, downgrading the security standard just to be able to provision the pod seemed like cheating. 

Nobody would expect Doom to be a mission-critical application that needs to be thoroughly protected, but I can't just let this workload do whatever it wants, especially with hostNetwork enabled. What if the Cyberdemon ever gains consciousness and hijacks my machine to run rm -rf /? Our hero Space Marine cannot take such risks!

Lore reasons aside, the pack setting that changes the pod security standard also requires adjustment depending on the Kubernetes version used by the cluster, which adds to the overhead of using the profile. I needed to make the workload viable under the default baseline security standard.

According to the error I received earlier, the offending configuration was that the manifest set hostNetwork to true.

This setting gives the pod access to the host machine's network, which reduces the network isolation between the pod and the host, potentially exposing the host to security risks if the application running inside the pod is vulnerable to attack.

As far as I can tell, the pod has no need to access the host's network; the developer who wrote the manifest likely only used this option to make the service easy to reach.
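Based on the error message, the offending portion of the original deployment spec would have looked roughly like this (a reconstruction for illustration, not a verbatim copy of the upstream manifest):

```yaml
spec:
  hostNetwork: true        # violates the baseline Pod Security Standard
  containers:
    - name: kubedoom
      ports:
        - containerPort: 5900
          hostPort: 5900   # hostPort is also disallowed by baseline
```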

After removing the host network configuration, I still needed to provide a way for external users to access the service. Since my plan was to run Doom on an edge device that lives on my home network, a NodePort-type service was all I needed (and you can check out a couple of blogs for more on these service types if they’re new to you).

I defined a simple NodePort service by adding the following YAML to my cluster profile.

apiVersion: v1
kind: Service
metadata:
  name: kubedoom-service
  namespace: kubedoom
spec:
  type: NodePort
  selector:
    app: kubedoom
  ports:
    - port: 5900
      targetPort: 5900
      nodePort: 30059

This code provisions a NodePort type service in the cluster and exposes the service on port 30059 on all nodes of the cluster. 

With the updated manifest in my Cluster Profile, I deployed a new cluster, this time using my edge host that I had already registered with Palette, and prayed to the Kubernetes gods that it works. About 10 minutes later, my prayer was answered and the cluster was up and running in a healthy state.

I accessed the service from my home Wi-Fi network, where the edge device resides, by connecting to its private IP address on port 30059 with a VNC viewer, and voilà, there it was: Doom, running on my little NUC.

Note: If you are wondering how to get an edge device to connect to Wi-Fi automatically, follow the Connect Edge Devices to Wifi guide on the Spectro Cloud documentation site.

Ready to lock and load?

With the game up and running on my edge box, it was time to shoot some demons! Unfortunately, I have never been much of a first-person shooter player and cannot aim to save my life, and I found myself getting killed almost immediately after stepping out of the starting area.

So I will have to entrust the mission of eliminating the demons in Doom to you. You can use the following manifest to provision your own Doom cluster.

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubedoom
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubedoom
  namespace: kubedoom
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubedoom
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubedoom
    namespace: kubedoom
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kubedoom
  name: kubedoom
  namespace: kubedoom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubedoom
  template:
    metadata:
      labels:
        app: kubedoom
    spec:
      serviceAccountName: kubedoom
      containers:
        - image: docker.io/spectrodocs/kubedoom:1.0.0 
          env:
            - name: NAMESPACE
              value: kubedoom
          name: kubedoom
          ports:
            - containerPort: 5900
              name: vnc
---
apiVersion: v1
kind: Service
metadata:
  name: kubedoom-service
  namespace: kubedoom
spec:
  type: NodePort
  selector:
    app: kubedoom
  ports:
    - port: 5900
      targetPort: 5900
      nodePort: 30059

If you already have access to Palette, I encourage you to follow the Deploy a Cluster tutorial on Spectro Cloud documentation to learn how to deploy a cluster and use the above manifest as an add-on profile layer.

Rip and tear — it's doomin' time.

Tags:
Using Palette
Cluster Profiles
Security