NVIDIA BlueField-3 Data Processing Units (DPUs) are transforming data center infrastructure — delivering high-performance cloud networking, elastic provisioning, and zero-trust security. But until now, deploying them in Kubernetes environments has required a multi-step manual process spanning Kubernetes configuration, networking drivers, and more.
As an NVIDIA Inception member and user of accelerated computing, we saw an opportunity to automate the DPU deployment process using our Palette management platform, resulting in a more streamlined, repeatable workflow. By removing manual configuration steps, we have cut DPU deployment time, ensuring fast, reliable, and scalable rollouts.
In this blog, we’ll show you how you can use this automation to simplify NVIDIA BlueField-3 DPU deployment in your Kubernetes environments.
How BlueField DPUs transform data center architecture

NVIDIA BlueField-3 Data Processing Unit (DPU)
NVIDIA BlueField-3 DPUs are specialized, high-performance processors designed to offload, accelerate, and isolate networking, storage, and security tasks from your CPU cores. If you're unfamiliar with these devices, there's an excellent primer from Serve The Home. DPUs significantly enhance Kubernetes networking performance by offloading Open vSwitch (OVS) processing to dedicated hardware.
Using NVIDIA’s OVN-Kubernetes integration, NVIDIA BlueField-3 DPUs offload computationally expensive operations such as:
- Packet classification and flow matching
- Tunnel encapsulation and decapsulation
- Stateful connection tracking
The NVIDIA BlueField-3 DPU delivers up to 400Gbps of networking bandwidth, improving Kubernetes networking performance by freeing up CPU cycles for application workloads.
The DOCA platform enables DPU acceleration
The foundation for this acceleration is the NVIDIA DOCA platform, a set of software frameworks that provide the libraries, drivers, and runtimes needed to enable BlueField functionality.
To deploy BlueField DPUs in production environments, we rely on the DOCA Platform Framework (DPF). The NVIDIA DPF provides the building blocks needed for developing, deploying, and operating BlueField DPU-accelerated, cloud-native software platforms. DPF simplifies BlueField provisioning, lifecycle management, and service orchestration, facilitating integration within Kubernetes environments.
NVIDIA DPF addresses three primary deployment scenarios:
- OVN-Kubernetes offload: accelerates the overlay (pod-to-pod) network.
- Host-Based Networking (HBN): accelerates the underlay network.
- OVN + HBN: combines both for full overlay and underlay acceleration.
For the purposes of this blog, we’re choosing to focus on the OVN + HBN scenario, as it provides both overlay and underlay acceleration, delivering the highest performance benefits.
We follow NVIDIA's deployment guide in conjunction with its detailed Reference Deployment Guide (RDG) that outlines the required infrastructure setup.
The multi-step deployment process
After configuring the network and base OS according to the RDG specifications, NVIDIA's comprehensive guides outline a multi-stage deployment process:
- Deploy the control plane of a new Kubernetes cluster, with each node carrying its own k8s.ovn.org/zone-name=$node_name label
- Deploy NVIDIA’s fork of the ovn-kubernetes CNI
- Deploy a CSI for persistent storage (required by Kamaji)
- Deploy the DPF Operator
- Create an NFS-based persistent volume for the DPF Operator
- Create a DPFOperatorConfig resource and a DPUCluster resource to kick the DPF Operator into gear
- Deploy NVIDIA’s Network Operator Helm chart
- Create policy resources for Multus and SR-IOV
- Deploy 17 more resources to provision firmware, settings, and applications to the DPU adapters on worker nodes as they are added later
- When everything has finished deploying, adjust a setting in the OVN-Kubernetes CNI to start injecting SR-IOV adapters into future pods
- And then finally, start adding worker nodes to the cluster, each with its own k8s.ovn.org/zone-name=$node_name label as well as two more labels and one annotation.
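To give a flavor of the resources involved, here's a minimal sketch of the DPFOperatorConfig and DPUCluster pair that kicks the DPF Operator into gear. The API groups follow NVIDIA's DPF v1alpha1 CRDs, but treat the field names and values shown as illustrative — check the deployment guide for your DPF release.

```yaml
# Minimal sketch of the two resources that start the DPF Operator.
# Field names and values are illustrative; consult NVIDIA's DPF
# deployment guide for the exact schema of your release.
apiVersion: operator.dpu.nvidia.com/v1alpha1
kind: DPFOperatorConfig
metadata:
  name: dpfoperatorconfig
  namespace: dpf-operator-system
spec:
  provisioningController:
    bfbPVCName: bfb-pvc          # the NFS-backed PV created earlier
---
apiVersion: provisioning.dpu.nvidia.com/v1alpha1
kind: DPUCluster
metadata:
  name: dpu-cplane-tenant1
  namespace: dpu-cplane-tenant1
spec:
  type: kamaji                   # hosted control plane for the DPU cluster
  maxNodes: 10
  version: v1.30.2
  clusterEndpoint:
    keepalived:
      interface: br-dpu          # management interface for the control plane
      vip: 10.0.110.10           # virtual IP for the DPU cluster
```

Once these exist, the DPF Operator spins up the Kamaji-hosted control plane that the DPUs will later join.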
This process requires deep expertise across Kubernetes, networking, and NVIDIA’s software stack — and takes several days to complete, per cluster. Large-scale deployments involving tens or hundreds of clusters become a huge undertaking.
Automating NVIDIA BlueField DPU deployment with Spectro Cloud Palette
What if there was a way to simplify these steps? Well, there is.
We used Spectro Cloud Palette to automate each step of the deployment process, ensuring a fast, reliable, and repeatable rollout.
Not familiar with Palette? It’s a Kubernetes management software platform that makes it easy for IT operations teams to design, deploy and manage complex cloud native software stacks across different compute environments, with an emphasis on consistency, scale and automation.
With Palette we reduced deployment to a few guided steps:
- Provisioning bare metal servers with Ubuntu 24.04.
- Deploying Kubernetes with the required configurations.
- Deploying NVIDIA’s software stack (DOCA, DPF Operator, OVN-Kubernetes, Network Operator).
- Initializing the DPUs: flashing firmware and applying resource injection and networking policies.
The process is scalable and repeatable, allowing multiple clusters to be deployed with consistent configurations.
With this approach, we eliminated the risk of misconfigurations, manual errors, and inconsistencies, making hardware-accelerated networking accessible to everyone.
The blueprint that powers automation
At the core of Palette’s ability to automate infrastructure tasks is its concept of Cluster Profiles: structured, reusable, declarative blueprints that model the entire DPU deployment stack. Instead of manually configuring Kubernetes, networking, storage, and NVIDIA’s software, we automated every layer with Palette, ensuring all components are correctly installed and configured.
We built a Palette Cluster Profile that encapsulates all the required Kubernetes and NVIDIA components. This approach ensures a fully automated, error-free deployment.

Palette’s DPU Cluster Profile
Working from bottom to top, the Cluster Profile includes several key components:
- OS and Kubernetes Layer: Configures the base Kubernetes cluster with the required settings. Palette can bootstrap bare metal machines directly through Canonical MAAS, or build a cluster on OSs that you’ve provisioned through other means.
- OVN-Kubernetes Layer: Deploys NVIDIA's fork of OVN-Kubernetes with hardware offloading capabilities.
- Storage Layer: Configures the necessary persistent storage for DPU management.
- DPF Operator Layer: Automates the deployment of NVIDIA's DPF Operator.
- Network Operator Layer: Installs NVIDIA's Network Operator for SR-IOV and Multus support.
- DPU Service Layers: Sets up the required DPU services including host-based networking.
Note that you can also add any other components you need to a Cluster Profile to create a truly production-ready cluster, including logging and monitoring, security tools, AI frameworks or application workloads themselves.
Parameterizing deployment configuration with variables
Manually entering configuration details like IP addresses is tedious and error-prone, especially when you need the same values across multiple YAML files.
Palette enables you to define cluster profile variables that capture the essential configuration parameters in a ‘write once, use many’ way.
In this scenario, we defined eight variables that are presented to users via a wizard interface each time the Cluster Profile is applied, with sensible defaults to guide their selections:

Cluster Deployment Profile Variables
These variables include:
- Target Cluster API Server Host: The Kubernetes API server IP address
- Target Cluster API Server Port: The Kubernetes API server port
- Target Cluster Node CIDR: The IP range for cluster nodes
- DPU Cluster VIP: The virtual IP for the DPU cluster
- DPU P0 Interface: The name of the first DPU port
- DPU P0 VF1 Interface: The name of the second Virtual Function of the first DPU port
- DPU Cluster Interface: The management interface for the control plane
- NFS Server IP: The IP address of the NFS server for BFB storage (the control plane node)
By parameterizing the deployment in this way, we've ensured that critical values are consistently and automatically applied across all components.
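Inside the profile's layer values, each variable is referenced wherever it's needed, so a single wizard entry propagates to every manifest. A hedged sketch of what this can look like — the `{{ .spectro.var.* }}` templating follows Palette's profile-variable convention, but the surrounding keys and variable names here are illustrative, not the exact schema of any specific pack:

```yaml
# Illustrative layer values showing 'write once, use many' variables.
# Keys and variable names are examples; values entered once in the
# wizard are substituted everywhere they appear across the layers.
ovn-kubernetes:
  k8sAPIServer: "https://{{ .spectro.var.target_cluster_api_server_host }}:{{ .spectro.var.target_cluster_api_server_port }}"
  nodeMgmtPortNetdev: "{{ .spectro.var.dpu_p0_vf1_interface }}"
dpfOperator:
  bfbStorage:
    nfsServer: "{{ .spectro.var.nfs_server_ip }}"
```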
Deployment workflow: from complexity to simplicity
Our automated approach simplifies the deployment process to just three steps:
- Provision the control plane: Deploy only the control plane nodes first, using the Palette wizard with eight variables. Wait for all layers to complete successfully.
- Enable resource injection: Adjust the CNI configuration to enable the OVN-Kubernetes resource injector.
- Add worker nodes: Add DPU-equipped worker nodes to the cluster.
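Step two amounts to a small values change in the OVN-Kubernetes layer. As a hypothetical sketch — the real key name depends on the chart version NVIDIA ships, so treat this as illustrative only:

```yaml
# Hypothetical values override for NVIDIA's ovn-kubernetes fork.
# The actual key may differ by chart version; the point is that
# enabling the injector is a one-line change made after the control
# plane is up and before DPU-equipped workers join.
ovn-kubernetes:
  resourceInjector:
    enabled: true   # inject SR-IOV VF resources into newly created pods
```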
We’ve been using this automation to build clusters in about 30 minutes, instead of the day or more it was taking. This is a game-changing time saving for our operations teams.

DPU cluster details

Adding hosts into an existing cluster
Once the worker nodes are added, the DPF Operator automatically begins flashing and configuring the DPU on each worker. This process includes a node restart and takes some time to complete. After the DPUs are fully provisioned, they join the DPU cluster and become available for application workloads.
By the way, you can easily deploy additional DOCA services directly to the DPU. With Palette, you can model these as manifest resources in a Cluster Profile (like the HBN services we've deployed). NVIDIA provides a variety of DOCA services in the NGC Catalog registry, making it simple to extend your DPU capabilities with specialized workloads.
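Additional DOCA services slot into the profile as further DPUService resources. Here's a hedged sketch of what one might look like, modeled on DPF's DPUService CRD with a chart pulled from NGC — the repoURL, chart name, and version are placeholders, so browse the NGC Catalog for what's actually available:

```yaml
# Illustrative DPUService pulling a DOCA service chart from NGC.
# repoURL, chart, and version are placeholders, not verified values.
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUService
metadata:
  name: doca-hbn
  namespace: dpf-operator-system
spec:
  helmChart:
    source:
      repoURL: https://helm.ngc.nvidia.com/nvidia/doca
      chart: doca-hbn
      version: 1.0.1
```

Because the service is just another manifest in the Cluster Profile, it's versioned and rolled out with the same consistency as the rest of the stack.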
Unlocking the power of NVIDIA BlueField DPUs in Kubernetes
NVIDIA's BlueField DPUs represent a major leap forward in networking performance, enabling new levels of efficiency and speed.
With Spectro Cloud Palette automating deployment, what was once a days-long, multi-step process is now a streamlined, reliable workflow completed in well under an hour. This makes it easier than ever to take advantage of hardware-accelerated networking while maintaining the flexibility of Kubernetes.
With just a few clicks, BlueField acceleration is now within reach, bringing next-generation networking performance to any environment.
Get early access
The automated NVIDIA BlueField-3 DPU deployment with Spectro Cloud Palette is currently in beta. We're looking for early adopters who want to supercharge their Kubernetes networking!
If you're interested in taking this solution for a spin in your environment, drop us a line at nvidia-integrations@spectrocloud.com. Our team will chat with you about your use case and help you get up and running with hardware-accelerated networking goodness.
And to learn more about how Palette can help you with all aspects of your Kubernetes infrastructure management, why not book a one-on-one demo with one of our experts?