Published
August 12, 2024

Sustainability and green IT: how does Kubernetes fit in?

Darren Madams
Solution Architect

Sustainability is on the corporate agenda

A business recently came to us with the stated goal of making its tech stack “more sustainable and energy efficient”. 

That simple statement opened up a long and wide-ranging discussion about:

  • What those terms mean in different contexts
  • Whether the cloud movement has made things better or worse
  • Whether Kubernetes itself is a help or hindrance

Many organizations are starting to get on board with sustainability. Put it this way: would any sane person want to be less efficient and pollute more? But like any project, your sustainability efforts need not just a vision and purpose, but clear objectives and realistic tactics if they're to succeed.

In this post we’ll point you to some information to help you on your own sustainability journey, of course through the lens of Kubernetes. Let’s get started.

Enterprises face growing legal and social pressure

Is the whole ‘green’ thing a flash in the pan? Certainly not.

Wherever you are, your company faces both legal requirements and social pressure to reduce its energy consumption.

  • In the United States, the Energy Act of 2020 requires federal agencies to have certified practitioners evaluate their data centers every four years, among a host of other provisions. 
  • In the EU, the European Green Deal initiative contains multiple regulations that service providers and commercial companies will need to adhere to, including reporting on how their activities impact the environment. It also sets a target of a 55% reduction in greenhouse gas emissions by 2030. 
  • In Germany, the Energy Efficiency Act (EnEfG) places a particular focus on data center efficiency and the use of renewable energy sources, and sets targets for reducing consumption. 

On the social side too, employees and consumers are seeking to work for and do business with companies that place an emphasis on sustainability:

  • A recent Deloitte survey shows that 46% of Gen Z have switched or are planning to switch jobs due to climate concerns. 64% are willing to pay more for sustainable products. And 25% have stopped or reduced doing business with companies due to their unsustainable practices. 

In response to the growing legal and social pressure, many public companies have ramped up how they report their sustainability accomplishments. 

In addition to their annual financial reports, it’s common now to issue a separate Environmental, Social, and Governance (ESG) report with metrics and scoring covering all areas of a business’s operations, from supply chain to customer. There are several laws in the works to make such reports mandatory.

“Greenwashing” (marketing a company as environmentally friendly without actual action) is very real. Many products and companies want to appear “green” while not taking proper steps to reduce carbon emissions or improve their environmental impact. Thankfully, ESG reports are driving much more transparency.

IT, cloud infrastructure, Kubernetes environments… assessing the impact

But how big a part of the overall green story is IT, anyway?

Whether you’re talking about your own data centers or cloud services, the numbers are terrifying.

Computing resources are carbon intensive, and with the rise of AI and ML workloads, we have seen a rush for more and more compute power. Goldman Sachs forecasts a 160% increase in data center power demand by 2030.

Whether that manifests through new hardware purchases or new cloud instances, the result is the same: an increase in your environmental impact.

While hard to quantify, experts do believe that Kubernetes and containerization in general have led to more efficient computing, even though the speed, flexibility, and complexity they bring risk creating more waste.

So what does all this mean for you and me, as Kubernetes platform engineers? 

While our primary goal is to keep the lights on for the services we support, we can’t ignore that our clusters are contributing to energy usage. We should all be thinking about how to reduce our environmental footprint, and report on our progress at doing so. 

Three steps to Kubernetes sustainability

Step 1: Observability and monitoring

To improve, first you have to measure: it’s as true in cloud native as anywhere. 

Any of the standard Kubernetes observability tools, such as Prometheus, Grafana, Datadog, Dynatrace, and New Relic, can monitor consumption, trend it over time, and provide data for compliance and financial reporting.

The next step is to convert cloud consumption and utilization data into actual energy breakdown and CO2 emissions. Hardware manufacturers such as Intel, NVIDIA, Dell, etc. are doing a good job of exposing their power consumption metrics to the open source community. 

Projects like Kepler take advantage of this data and present it in usable dashboards, broken down to the pod level at one-second intervals, in near real time.
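If you're already running Prometheus, pulling energy figures out of Kepler can be as simple as a query against its counters. Below is a minimal sketch in Python, assuming a Prometheus instance at a hypothetical internal URL that is scraping Kepler; the metric and label names (kepler_container_joules_total, pod_name, container_namespace) match Kepler's published metrics but can vary between versions:

```python
import requests

# Assumptions: Prometheus at PROM_URL is scraping Kepler. Kepler exposes cumulative
# energy counters (e.g. kepler_container_joules_total); rate() turns joules/second
# into average power in watts per pod. Metric/label names can vary by Kepler version.
PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint
QUERY = "sum by (pod_name, container_namespace) (rate(kepler_container_joules_total[5m]))"

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    labels = series["metric"]
    watts = float(series["value"][1])
    print(f"{labels.get('container_namespace', '?')}/{labels.get('pod_name', '?')}: {watts:.2f} W")
```

From per-pod watts it's a short step to CO2e: multiply by the grid carbon intensity of the region the cluster runs in.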

Thankfully, public cloud providers are also adding visibility through tools like the Azure Emissions Impact Dashboard, the AWS Customer Carbon Footprint Tool, and the Google Carbon Footprint tool.

The CNCF Environmental Sustainability TAG also maintains a list of interesting observability tooling.

Step 2: Reduce waste

Clean up the cluster clutter

Reducing your cluster footprint can quickly and easily make the biggest difference to energy savings. In our experience, most organizations have an alarmingly extensive estate of orphaned and unused clusters left over from experiments, development environments, and testing suites.

If a cluster is unnecessary, kill it. It will immediately improve your metrics.

A central management tool like Spectro Cloud Palette can help show where clusters are deployed across multiple providers or data centers and also give insight into whether they are being actively used or not.
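If you don't yet have a central management view, even a rough script can surface cleanup candidates. The sketch below assumes every cluster of interest is a context in your local kubeconfig, and simply counts nodes and non-system pods per cluster so that obviously idle clusters stand out:

```python
from kubernetes import client, config

# Assumption: every cluster of interest has a context in the local kubeconfig.
SYSTEM_NAMESPACES = {"kube-system", "kube-public", "kube-node-lease"}

contexts, _ = config.list_kube_config_contexts()
for ctx in contexts:
    name = ctx["name"]
    try:
        api_client = config.new_client_from_config(context=name)
        core = client.CoreV1Api(api_client)
        nodes = len(core.list_node().items)
        pods = core.list_pod_for_all_namespaces(watch=False).items
        workload_pods = [p for p in pods if p.metadata.namespace not in SYSTEM_NAMESPACES]
        print(f"{name}: {nodes} nodes, {len(workload_pods)} workload pods")
    except Exception as exc:  # unreachable or stale context
        print(f"{name}: could not connect ({exc})")
```

A cluster with plenty of nodes but hardly any workload pods deserves a closer look before you decommission it.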

Rightsize your production clusters

While you never want to undersize a cluster and run out of resources, it is common to over-provision a cluster at creation time rather than expanding as needed. 

These days additional nodes can be provisioned at the press of a button, so there is much less reason to pre-allocate resources.

Kubernetes cluster right-sizing strategies can be as simple as thinking hard when picking the right node type and quantity, but autoscaling is vitally important for dynamic workloads.

We have previously covered Kubernetes autoscaling patterns in another blog that is worth reviewing. Modern autoscalers are sophisticated: they can be configured to work on a variety of metrics and quickly expand and contract as real-world workloads shift.
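A simple way to start right-sizing is to compare what pods request with what they actually use. This sketch assumes kubeconfig access and that metrics-server is installed (so live usage is available through the metrics.k8s.io API), and prints requested versus current CPU per pod:

```python
from kubernetes import client, config
from kubernetes.utils import parse_quantity

# Assumptions: kubeconfig access and metrics-server installed, so live usage is
# available through the metrics.k8s.io API group.
config.load_kube_config()
core = client.CoreV1Api()
metrics = client.CustomObjectsApi()

# Sum the CPU requests declared in each pod's spec.
requested = {}
for pod in core.list_pod_for_all_namespaces(watch=False).items:
    key = (pod.metadata.namespace, pod.metadata.name)
    requested[key] = sum(
        parse_quantity(c.resources.requests["cpu"])
        for c in pod.spec.containers
        if c.resources and c.resources.requests and "cpu" in c.resources.requests
    )

# Fetch current usage from metrics-server and compare it to the requests.
usage = metrics.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "pods")
for item in usage["items"]:
    key = (item["metadata"]["namespace"], item["metadata"]["name"])
    used = sum(parse_quantity(c["usage"]["cpu"]) for c in item["containers"])
    if requested.get(key):
        print(f"{key[0]}/{key[1]}: using {used:.3f} of {requested[key]:.3f} CPU cores requested")
```

Pods that consistently use a small fraction of what they request are the obvious place to trim, freeing nodes that can then be scaled away.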

There are also dedicated tools like kube-green that are designed to 'sleep' idle Kubernetes resources and show you the immediate CO2 impact.

Adopt multitenancy to increase utilization

There are plenty of advantages to running lots of independent clusters — not least, security. But every new cluster means increased overhead (a new control plane), and reduced utilization.

If efficiency is important to you, and if your security posture permits it, multitenancy is a valuable strategy. Combining multiple applications or workloads on a single cluster reduces your overhead, and to avoid one bursting workload affecting others, it's fairly straightforward to add extra worker pools if you need them.
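A common guardrail when consolidating tenants onto a shared cluster is a namespace per team with a ResourceQuota, so one bursting workload can't starve its neighbours. Here's a minimal sketch using the Kubernetes Python client; the tenant name and limits are purely illustrative:

```python
from kubernetes import client, config

# Assumptions: admin access to the shared cluster; tenant name and limits are illustrative.
config.load_kube_config()
core = client.CoreV1Api()

TENANT = "team-checkout"  # hypothetical tenant namespace

# Create the tenant's namespace.
core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=TENANT)))

# Cap the total CPU and memory the tenant can claim so a noisy neighbour
# can't consume the whole shared cluster.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="tenant-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "8",
            "requests.memory": "16Gi",
            "limits.cpu": "16",
            "limits.memory": "32Gi",
        }
    ),
)
core.create_namespaced_resource_quota(namespace=TENANT, body=quota)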

You can also consider virtual clusters as a way to get isolation without the overhead of a control plane. Check out our blog on multitenancy to understand your options. 

Step 3: Optimize your hardware and software

Template your stacks for optimal efficiency

On top of the overhead of the Kubernetes system itself, every tool and application you install will contribute to resource consumption. Even misconfigurations or sub-optimal settings, such as running unnecessary replicas or setting overly frequent refresh intervals, can have a substantial impact on efficiency.

The answer? Define your technology stack designs and configuration best practices, and apply them consistently across all your new clusters. Palette’s Cluster Profile blueprints are a great way to make this happen.
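If you want a quick check that clusters haven't drifted from your baseline, a small script can flag the most common offenders, such as deployments with no resource requests or more replicas than your standard allows. A sketch, with an illustrative replica threshold:

```python
from kubernetes import client, config

# Assumptions: kubeconfig access; MAX_REPLICAS is an illustrative policy value.
MAX_REPLICAS = 5

config.load_kube_config()
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces(watch=False).items:
    name = f"{dep.metadata.namespace}/{dep.metadata.name}"
    if dep.spec.replicas and dep.spec.replicas > MAX_REPLICAS:
        print(f"{name}: {dep.spec.replicas} replicas exceeds the baseline of {MAX_REPLICAS}")
    for container in dep.spec.template.spec.containers:
        if not (container.resources and container.resources.requests):
            print(f"{name}: container '{container.name}' has no resource requests set")
```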

Slim down your application code

Reducing the size, complexity, and overhead of your code can drastically reduce the hardware required to support it.

There are several software development methodologies and tools that can help reduce the footprint of your application itself. The Green Software Foundation is an excellent resource if you are a developer interested in improving your development processes.

Place and schedule for efficiency

Not all Kubernetes environments are equally sustainable. If you could spin up clusters or migrate workloads to take advantage of low-carbon locations or low emissions time of day, would you?

Today’s fast-growing AI workloads are often great candidates for this kind of optimization. If models can be trained overnight or on the weekends when power is cheap and clusters are not overloaded, you can make huge savings.
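One simple way to express 'train on weekend nights' is a Kubernetes CronJob, so the work only ever lands in that cheap, low-demand window. The sketch below uses the Python client (and assumes batch/v1 CronJob support); the namespace, image, and schedule are placeholders:

```python
from kubernetes import client, config

# Assumptions: kubeconfig access and batch/v1 CronJob support; the namespace,
# image, and schedule below are placeholders.
config.load_kube_config()
batch = client.BatchV1Api()

cron = client.V1CronJob(
    metadata=client.V1ObjectMeta(name="model-training"),
    spec=client.V1CronJobSpec(
        schedule="0 1 * * 6",  # 01:00 every Saturday, when demand and prices are low
        job_template=client.V1JobTemplateSpec(
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="Never",
                        containers=[
                            client.V1Container(
                                name="train",
                                image="registry.example.com/train:latest",  # placeholder
                            )
                        ],
                    )
                )
            )
        ),
    ),
)
batch.create_namespaced_cron_job(namespace="ml", body=cron)
```

Dedicated carbon-aware schedulers go further, shifting work based on live grid data rather than a fixed timetable.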

We are just now starting to explore the full potential of deploying Kubernetes workloads in low-carbon regions or running jobs at low-emissions times. Expect to see workloads pushed to new locations as countries improve their carbon footprint and take advantage of natural power sources (like geothermal or solar) and natural cooling.

While new generations of hardware improve performance and efficiency, the trade-off of increased power and cooling demands doesn't always equate to less overall environmental impact. Of course, existing data center and cloud hardware doesn't just go away, and we know that this physical hardware is a large contributor to the overall CO2 generation of a data center. Efforts by manufacturers have helped extend the useful service life of equipment, which in turn reduces its overall carbon contribution.

Leverage AI tools

AI has found its way into the tools we use, and it can help by automating many of the improvement tasks outlined above. Projects like PredictKube can automatically scale nodes based on AI-predicted future workloads, and there are several automated tools that can handle carbon-aware scheduling or pod scaling.

Our 2024 State of Production Kubernetes research found that a majority of organizations are already using AI for cost optimization, or plan to.

Don’t expect sustainability to drive immediate cost savings

You might assume that reducing environmental impact will reduce financial cost. While this does make the CFO happy when it works, the correlation is rarely direct. 

While it is true that reducing resources used reduces cloud spend, moving to a more environmentally friendly system may also end up costing more due to the time and direct cost of:

  • Surcharges for green power or data centers that meet environmental targets 
  • Hardware migration efforts and environmental systems modernization in data centers 
  • Migration efforts from private data centers to the cloud

FinOps is a relatively new discipline (and term) focused on tracking and optimizing costs across cloud deployments. Many FinOps tools can also expose and report on carbon cost, making it another factor to optimize.

With a proper top-down directive to improve environmental sustainability, it is absolutely possible for FinOps and Kubernetes teams to work together to improve all applicable metrics, rather than fighting opposing battles.

Next steps for cloud native sustainability 

If you’re looking at improving Kubernetes environmental sustainability, there is a growing body of resources out there for you to use.

Start with the CNCF’s Environmental Sustainability TAG, which since 2022 has already provided much needed guidance and brought together expertise from a variety of backgrounds, companies, and functions. 

There is also a Green Reviews working group that helps integrate environmental sustainability reviews into release cycles within the CNCF ecosystem.

They also maintain a list of sustainability tooling that is worth checking out if you’re interested in optimizing resource usage.

They are also the organizers of CNCF Sustainability Week, which takes place in October 2024.

And until then, check out some of the backlog of CNCF sustainability videos on YouTube from previous KubeCons. 

While designing and building sustainable systems with Kubernetes is not always simple, the benefits for the future of our planet are great. Let us know how we can help you reach your goals by simplifying your infrastructure and application deployments and ongoing operations.

Tags:
Cloud
Best Practices
Operations