New Palette Virtual Machine Orchestrator makes VMs a first-class citizen in your clusters
Remember when virtualization was the hot new thing?
Twenty years ago, I was racking and deploying physical servers at a small hosting company when I had my first experience of virtualization.
Watching vMotion live-migrate workloads between physical hosts was an ‘aha’ moment for me, and I knew virtualization would change the entire ecosystem.
Perhaps it’s no surprise, then, that I went on to spend many years as an architect at VMware!
I had a similar ‘aha’ moment a decade later with containers and Docker, seeing the breakthrough it represented for my dev colleagues. And in the years after it was clear that Kubernetes presented a natural extension of this paradigm shift.
I’m sure many of you reading this will have been through similar awakenings.
Despite 20 years of innovation, reality has a way of bringing us back down to earth. Out in the enterprise, the fact is we have not completely transitioned to cloud-native applications or cloud-native infrastructure.
Millions of VMs are here to stay
While containerized apps are gaining popularity, there are still millions of VM-based applications out there across the enterprise. A new technology wave doesn’t always wipe out its predecessor.
It may be decades before every enterprise workload is refactored into containerized microservices. Some will never even be refactored: for example, if their code is too complex or too old.
So we have a very real question: how do we make virtualization and containers coexist within the enterprise?
We have a few options:
Keep it all separate: Most enterprises today run separate virtual machine and container infrastructures. Of course, this is inefficient and risky: it requires different hardware, multiple teams, sets of policies, access controls, network and storage configurations, and much more.
Bring containers into the virtualized infrastructure: You can run containers within VMs in the virtualization infrastructure. But this is a deeply nested environment, which means inefficiency and complexity. It’s not easy to scale and you often need proprietary solutions to make it happen.
Bring VMs into the Kubernetes infrastructure: This is a more sustainable approach, if Kubernetes is the future of your infrastructure. Kubernetes has demonstrated its ability to provide a highly reliable, scalable and extensible platform through the Kubernetes API, with the advantages of declarative management.
And indeed there is a solution to make this third option possible: KubeVirt.
KubeVirt: making VMs a first-class citizen in Kubernetes clusters
KubeVirt is a CNCF incubating project that, coincidentally, just hit version 1.0 last week.
Leveraging the fact that the KVM hypervisor is itself a Linux process that can be containerized, KubeVirt enables KVM-based virtual machine workloads to be managed as pods in Kubernetes.
This means that you can bring your VMs into a modern Kubernetes-based cloud-native environment rather than doing an immediate refactoring of your applications.
KubeVirt under the hood
KubeVirt brings Kubernetes-style APIs and manifests to drive both the provisioning and management of virtual machines as simple resources, and provides standard VM operations (VM lifecycle, power operations, clone, snapshot, and so on).
Users who need virtualization services talk to the Virtualization API (see the diagram below), which in turn talks to the Kubernetes cluster to schedule the requested Virtual Machine Instances (VMIs).
Scheduling, networking, and storage are all delegated to Kubernetes, while KubeVirt provides the virtualization functionality.
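To make this concrete, here is a minimal sketch of a VirtualMachine manifest. The name, labels, and container disk image are illustrative, not taken from any particular deployment:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                 # illustrative name
spec:
  running: true                 # power state is declared, not imperatively set
  template:
    metadata:
      labels:
        kubevirt.io/vm: demo-vm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # illustrative image
```

Applying this manifest with `kubectl apply -f` asks KubeVirt to schedule a VMI, and Kubernetes handles placement just as it would for any pod.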
KubeVirt delivers three things to provide virtual machine management capabilities:
- A new Custom Resource Definition (CRD) added to the Kubernetes API
- Additional controllers for cluster-wide logic associated with these new types
- Additional daemons for node-specific logic associated with new types
Because Virtual Machines run as pods in Kubernetes, they benefit from:
- The same declarative model as Kubernetes offers its resources.
- The same Kubernetes network plugins to enable communication between VMs and other Pods or services in the cluster.
- Storage options, including persistent volumes, to provide data persistence for VMs.
- Kubernetes’s built-in features for high availability and scheduling: VMs can be scheduled across multiple nodes for workload distribution, affinity and anti-affinity rules, etc.
- Integration with the Kubernetes ecosystem: KubeVirt seamlessly integrates with other Kubernetes ecosystem tools and features, such as Kubernetes RBAC for access control, monitoring and logging solutions, and service mesh technologies.
KubeVirt in the wild: what’s the catch?
KubeVirt sounds amazing, doesn’t it? You can treat your VMs like just another container.
Well, that’s the end goal: getting there is another matter.
Installing KubeVirt: manual configuration
KubeVirt is open source, so you can download and install it today.
But the manual installation process can be time-consuming, and you may face challenges with integrating and ensuring compatibility with all of the necessary components.
To start, you need a running Kubernetes cluster, on which you:
- Install the KubeVirt operator (which manages the KubeVirt resources)
- Deploy the KubeVirt custom resource definitions (CRDs)
- Deploy the KubeVirt components (pods, services and configurations)
You need to do this for each cluster. While a basic installation allows you to create simple virtual machines, advanced features such as live migration, cloning or snapshots require you to deploy and configure additional components (snapshot controller, Containerized Data Importer, etc.).
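The basic installation steps above boil down to a handful of commands, along the lines of those in the KubeVirt documentation (the release URL pattern is KubeVirt’s; pin a specific version in production rather than resolving the latest stable as shown here):

```shell
# Resolve the latest stable KubeVirt release
export VERSION=$(curl -s https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)

# Install the KubeVirt operator, which manages the KubeVirt resources
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"

# Create the KubeVirt custom resource, which triggers deployment of the components
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"

# Wait until the deployment reports Available
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
```

Remember that this only gets you the core: the snapshot controller, CDI and friends are separate installations on top.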
The challenge of bare metal
We mentioned the inefficiency of ‘nested’ infrastructures above. Although it’s technically possible to run KubeVirt nested on top of other VMs or public cloud instances, it requires software emulation, which has a performance impact on your workloads.
Instead, it makes a lot of sense to run KubeVirt on bare metal Kubernetes — and that, traditionally, has not been easy. Standing up a bare metal server, deploying the OS and managing it, deploying Kubernetes on top — the process can be convoluted, especially at scale.
Operations: challenging UX
When it comes to day 2 operations, KubeVirt leaves the user with a lot of manual heavy-lifting. Let’s look at a couple of examples:
First, KubeVirt doesn’t come with a UI by default: it’s all CLI or API. This may be perfectly fine for cluster admins who are used to operating Kubernetes and containers, but it can be a challenging gap for virtualization admins who are used to working from a GUI.
Even an operation as simple as starting or stopping a virtual machine requires patching the VM manifest or using the virtctl command line.
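For instance, starting a stopped VM means either declaring the new power state in the manifest or reaching for virtctl (assuming a VM named `demo-vm`):

```shell
# Option 1: patch the VirtualMachine manifest to declare it running
kubectl patch virtualmachine demo-vm --type merge -p '{"spec":{"running":true}}'

# Option 2: use the virtctl command line
virtctl start demo-vm
```

Neither is hard, but neither is a power button, either.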
Another example is live migration: to live migrate a VM to a different node, you have to create a VirtualMachineInstanceMigration resource that tells KubeVirt what to do.
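A migration request is its own resource, sketched here for a hypothetical VMI named `demo-vm`:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-demo-vm     # illustrative name
spec:
  vmiName: demo-vm          # the running VMI to live-migrate
```

Creating this resource asks KubeVirt to move the VMI to another eligible node; you then watch the resource’s status to see whether the migration succeeded.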
If you’re running at scale, performing many such operations each day across multiple clusters, the effort can be considerable. Building out scripting or automation can solve that, but itself increases the learning curve and adds to the setup cost.
Introducing Palette Virtual Machine Orchestrator
We saw an opportunity to take all the goodness that KubeVirt offers, address all these issues, and create a truly enterprise-grade solution for running VMs on Kubernetes.
And today we’ve announced just that: meet Virtual Machine Orchestrator (VMO), new in version 4.0 of our Palette Kubernetes management platform.
VMO is a free capability that leverages KubeVirt and makes it easy to manage Virtual Machines (VMs) and Kubernetes containers together, from a single unified platform.
Here are the highlights.
Simplified setup
If you’re not familiar with Palette, one of the things that makes it unique is the concept of ‘Cluster Profiles’, preconfigured and repeatable blueprints that document every layer of the cluster stack, from the underlying OS to the apps on top, which you can deploy to a cluster with a few clicks.
We’ve built an add-on pack for VMO that contains all of the KubeVirt components we talked about earlier, and much more, including:
- Snapshot Controller to provide snapshot capabilities to the VMs and referenced volumes
- Containerized Data Importer (CDI) to enable Persistent Volume Claims (PVCs) to be used as disks for VMs (as DataVolumes)
- Multus to provide VLAN network access to virtual machines
- Out-of-the-box Grafana dashboards to provide monitoring for your VMs
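As a taste of what CDI adds, a DataVolume imports a disk image into a PVC that a VM can then boot from. This is a minimal sketch; the name, image URL and size are illustrative:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: demo-dv                 # illustrative name
spec:
  source:
    http:
      url: "https://example.com/images/disk.qcow2"   # illustrative image URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```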
Palette can not only build a cluster for you, but deploy the VM management capability preconfigured into that cluster thanks to the Cluster Profile. The result is much less manual configuration effort.
What’s more, Palette’s multi-cluster decentralized architecture makes it easy to deliver the VMO capability to multiple clusters instead of having to enable it manually per cluster.
Streamlined Bare Metal Experience
We talked about the importance of running KubeVirt on bare metal, and how hard it is to provision and manage bare metal servers for Kubernetes.
Well, Palette was built to simplify how you deploy Kubernetes clusters in all kinds of environments, and bare metal is no exception.
There are many ways to orchestrate bare-metal servers, but one of the most popular is Canonical MAAS, which lets you manage the provisioning and lifecycle of physical machines like a private cloud.
We’re big fans of MAAS, and we’ve included Canonical MAAS and our MAAS Provider for Cluster API in our VMO pack to automate the deployment of the OS and Kubernetes on bare metal hardware. It truly makes deploying a new bare metal Kubernetes cluster as easy as deploying in the cloud.
Of course, you can use your own bare metal provider if you don’t want to use MAAS.
Powerful management features and intuitive interface
Once everything is up and running, Palette’s always-on declarative management keeps the entire state of your cluster as designed, with automated reconciliation loops to eliminate configuration drift. This covers your VM workloads too.
While DIY KubeVirt leaves you on your own when it comes to some of the more powerful features you’ve come to expect in the world of virtualization, Palette provides a long list of capabilities out of the box.
These include VM live migration, dynamic resource rebalancing, maintenance mode for repairing or replacing host machines, and the ability to declare a new VLAN from the UI. You also get out-of-the-box monitoring of clusters, nodes, and virtual machines using Prometheus and Grafana.
And while with DIY KubeVirt the platform operator (that’s you) must select, install and configure one of the open-source solutions to get a UI, Palette already looks like this:
For the future of your VMs
As you can tell, we’re pretty excited about the launch of Palette 4.0 and the Virtual Machine Orchestrator feature.
We’ve built on the open-source foundations of KubeVirt, and delivered a simpler and more powerful experience for enterprises.
The result? Organizations that have committed to Kubernetes on their application modernization journey, and have already invested in Kubernetes skills and tools, will benefit from a single platform to manage both containers and VMs.
And that’s not just as a temporary stepping stone for the applications that will eventually be refactored: it also serves hybrid deployments (applications that combine VMs and containers) and workloads that will always be hosted in VMs. Even after nearly 25 years of virtualization, VMs are certainly not dead yet.
To find out more about Palette’s VMO feature, check out our website or our docs site. We’d love to get your feedback.
Originally published on The New Stack