Published
June 27, 2024

Edge computing vs cloud computing (and how Kubernetes fits)

Dmitry Shevrin
Infrastructure Specialist

Defining cloud and edge computing

Cloud computing: it’s here to stay

It's a meme, but it's kinda true...

It’s now 18 years after AWS launched EC2 — you’ve surely heard of cloud computing, and you almost certainly use it. 

Most of us can immediately name the three hyperscalers responsible for the cloud revolution: Amazon, Google and Microsoft.

Their public cloud services, AWS, GCP and Azure, include hundreds of different cloud computing services, from storage to analytics, AI to security, databases to business applications. 

But at their heart they provide computing infrastructure environments for you to run your applications. 

Cloud computing environments may be offered up in different ways:

Infrastructure as a Service (IaaS)

IaaS allows a simple migration from your existing infrastructure to the cloud, because it offers essentially the same raw stack you had in your data centers: virtual machines, storage and networking elements, available with little to no change compared to traditional environments.

An example might be getting a number of VMs from EC2/Azure/GCP and installing Kubernetes on them yourself: the cloud provider knows nothing about Kubernetes, as it’s only providing VMs; you are responsible for what runs on them.
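If you went down this IaaS route, the bootstrap step on one of those raw VMs might look like a kubeadm configuration file. A minimal sketch — the version, endpoint and subnet below are illustrative assumptions, not a recommendation:

```yaml
# kubeadm-config.yaml -- passed to `kubeadm init --config kubeadm-config.yaml`
# on one of your cloud VMs. All values here are illustrative.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.30.0"            # the version you choose to run
controlPlaneEndpoint: "10.0.0.10:6443"  # a VM's private IP, or a load balancer
networking:
  podSubnet: "192.168.0.0/16"  # must match the CNI plugin you install yourself
```

Note that everything above the VM — networking plugin, upgrades, backups — remains your responsibility, which is exactly the IaaS trade-off.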

Platform as a Service (PaaS)

PaaS is a step up the stack from IaaS. Here you delegate parts of your infrastructure to hyperscaler management. A good example is getting a Kubernetes cluster from EKS/AKS/GKE: you don't manage the Kubernetes control plane nodes yourself; the cloud provider manages and upgrades them for you. 

In general, PaaS is easier to maintain than IaaS, but it’s less flexible. Let’s say your company requires you to install a monitoring/compliance stack on your VMs: you can easily do so in IaaS, but it can be challenging on PaaS.
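For contrast with the IaaS approach, on PaaS the request shrinks to a declarative description of what you want, with the control plane entirely the provider's problem. A minimal eksctl config sketch — the name, region and sizes are illustrative assumptions:

```yaml
# cluster.yaml -- consumed by `eksctl create cluster -f cluster.yaml`.
# All names and sizes here are illustrative.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-paas-cluster
  region: eu-west-1
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3   # AWS runs the control plane; you only size the workers
```

Notice there is no control plane section to configure at all — which is convenient, until you need something the provider's opinionated defaults don't allow.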

Software as a Service (SaaS)

With SaaS you’re only consuming an application, often through a web interface. The cloud provider is responsible for running the entire stack, and from your perspective, management of the various infrastructure components (including Kubernetes) is non-existent. 

A good example here could be Spectro Cloud’s Palette: this solution can be delivered as a SaaS (among other options as described here) and can be easily consumed from your web browser.

Why did cloud become so popular?

The cloud itself was born as an alternative to the complexities of maintaining a traditional enterprise data center environment. 

Ordering physical hardware, replacing it on a regular cycle, maintaining firewalls, storage, security and much else besides is far from core business for many companies. 

What if you could simply access the resources you need, on demand, pay for only what you use, and turn them off again when you’re done?

This is the (simplified) reality of cloud: the hyperscalers run their own data centers filled with huge numbers of modern, powerful, reliable servers, fast fibre networking, redundant power, extensive security measures, and highly qualified technical people.

This underlying technology is then abstracted away so that you could log in to AWS/Azure/Google, click “next-next” and get your computing resources quickly and efficiently. 

All this of course eventually gives you an opportunity to look at cat memes or quiz ChatGPT with blazing speed.

Beyond ‘public cloud’ hyperscalers

When you think of cloud services like EC2 or S3 you’re thinking of ‘public cloud’ — services that the hyperscalers run that are generally available to all in a shared, multi-tenant environment. When you fire up an EC2 instance, your workload may be running shoulder to shoulder with workloads from many other AWS customers, and you’d never know.

But there are other types of cloud usage models today:

  • Specialist cloud providers exist in their hundreds. You can get cloud services from Tencent, Oracle, IBM, DigitalOcean, OVHcloud, Alibaba, and lots of smaller regional providers too.
  • Multicloud means distributing your workloads across services from multiple cloud providers, whether for convenience or as part of a strategy to spread risk. Just watch out for the cost implications of data ingress and egress!
  • Private cloud means adopting cloud concepts (such as elasticity and usage-metered pricing) and running them in private environments, like your own data center.
  • Hybrid cloud means mixing different cloud models: for example, running a workload in a private cloud data center but ‘bursting’ into the public cloud to provide overflow capacity in the event of a sudden spike in demand.

What about edge computing?

Computing doesn’t just happen in big centralized data centers (whoever owns them). There’s another game in town: edge computing, and it’s growing fast.

Depending on who you ask, the edge computing market is going to double over the next five years. In our own research, specific to Kubernetes, 41% of respondents anticipate doing more with edge in the next year, with AI being a major driver: 68% said that the popularity of AI is driving interest in edge computing.

Defining the edge

What is edge computing and how does it work? Well, there are many different kinds of edge — a whole spectrum or continuum, as LF Edge illustrates in the graphic below.

The LF edge continuum

But essentially edge computing is distributed by nature — the exact opposite of cloud computing, which is all about leveraging economies of scale through massive centralized data centers. 

Edge use cases

Edge computing locations are all around us. Use cases include telco base stations, fast food restaurants, and dental practices around the world, like Dentsply.

These edge computing devices may be used to support IoT devices and sensors or perform other local application workloads, often including data collection, that are impractical to centralize in the cloud.

As a result, each edge device likely has its own storage and network connections, although there are also many edge environments that run ‘air gapped’, with no connection back to the internet at all. 

Edge computing limitations

From a technical perspective, edge involves lots of small(er) devices distributed across various locations with potentially questionable network access, consumer-grade electricity supply… and very rarely any IT personnel around.

Edge devices may have very limited computing power, but that’s OK because they likely deal with smaller amounts of data than cloud services.

The main challenge is actually the staffing and expertise needed for successful deployment and ongoing operations. To manage costs, however, edge technology should be abstracted away from the end user. Without a knowledgeable IT person on site to install and configure these devices, deployment should happen almost automatically — in other words, “zero-touch provisioning”.
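As a sketch of what zero-touch provisioning can look like in practice, a device image might ship with cloud-init user data that identifies the site and phones home on first boot. The paths, endpoint and `edge-bootstrap` agent below are hypothetical:

```yaml
#cloud-config
# Baked into the device image before shipping. Everything below is a
# hypothetical sketch of zero-touch bootstrap data, not a real product config.
write_files:
  - path: /etc/edge/site.env
    content: |
      SITE_ID=store-0042                   # identifies this location to the fleet
      MGMT_URL=https://mgmt.example.com    # hypothetical management endpoint
runcmd:
  # hypothetical agent: registers the device and pulls its workload definitions
  - [ /usr/local/bin/edge-bootstrap, --config, /etc/edge/site.env ]
```

The point is that the person plugging the box in at the store or clinic never sees any of this — power and a network cable should be the whole installation procedure.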

Edge vs cloud: key differences

To summarize, the key differences between edge and cloud computing can be captured in the following table:

| Parameter | Cloud computing | Edge computing |
| --- | --- | --- |
| CPU power | High (Intel Xeon) | Low (Intel NUC, ARM) |
| Electricity | Production-grade | Consumer-grade |
| Internet connectivity | Fibre, reliable, low latency | Varies, unreliable, high latency |
| Local skills | Usually some IT skills | Usually no IT skills |
| Adding new nodes | Easy | Difficult |
| Storage solution | Enterprise-grade SAN | Local disks, not resilient |
| Typical use case | Big data storage | Local AI processing |

How edge and cloud work together in the real world

We’ve established how these two domains are different from each other. Now let’s explore how they fit together in real-world scenarios we’ve encountered:

Distributed data processing

Let’s say you’re 100% on cloud computing and happy with it. However, like one of our customers in the manufacturing sector, you might find that sending all data to the cloud for processing is far too expensive, and that it’s easier to process data right at the edge, sending only the highlights back to home base. 

That means you now have to deploy edge appliances with an application stack on them. Scripts written specifically for the cloud might not fit, so you need to develop a similar stack for the edge.

Digital twins

Another one of our customers was successfully running edge deployments across the world. However, they found it costly to experiment with new technologies at the edge, as each mistake would mean an expensive device reinstallation. 

To maintain the pace of innovation, the company implemented a concept of a “digital twin”, allowing it to quickly build a copy of the edge deployment in the cloud, test it and, when successful, deploy this model onto the real device.

Preparing for the future

In this IT landscape, there are always new market trends and changes: look at the rapid emergence of AI, suddenly driving the demand for real-time processing at the edge, or changes in privacy and compliance regulation that may require a change in cloud vendor. 

Your business may find itself in a merger or acquisition, or with a new CIO who has different ideas. Being able to react quickly and manage multiple kinds of infrastructure from different vendors is always a sensible course of action!

Trying to bridge the gap

Can you do both edge and cloud with one vendor? It’s tempting to try. A number of cloud vendors have tried to extend their solutions to work at the edge. However, hyperscalers are cloud-centric by nature and, while powerful, tend to lock clients in, so their adoption at the edge has been modest.

Expansion the other way round wasn’t easy either: when your edge technologies work well, it’s difficult to reinvent the wheel and build another hyperscaler. There have been attempts by smaller cloud providers to break into the hyperscaler scene, but they weren’t particularly successful either: some hit financial barriers, others never built their estate to the required level of reliability. 

What to do then?

Kubernetes to the rescue! Or, is it?

The great unifier

“Why have we grown our use of Kubernetes? It’s the ability to scale seamlessly. It’s agnostic, at least from a cloud platform perspective.”

CTO, Healthcare

This year Kubernetes turned 10, so at this point you’ve probably heard of it. The good news is that it’s going to help us bridge the gap between edge computing and cloud computing: it’s flexible enough to form a foundation for both of these worlds.

By itself, Kubernetes is just a container orchestrator, but it’s highly adaptable and powerful. That flexibility allows Kubernetes to sit at the heart of huge cloud deployments and also to manage small single-device clusters based on an Intel NUC or NVIDIA Jetson Nano. 

And this is good because it means you have consistent patterns for application development, deployment and management across all your computing environments, from edge to cloud to data center — Kubernetes becomes the unifier.

Managing at scale is a whole different story

Kubernetes may be the great unifier, but the reality of Kubernetes is it’s hard and requires expertise and tenacity — especially when applied to such different use cases. 

Starting one EKS/AKS/GKE cluster is not hard; it literally requires a couple of clicks. The same goes for edge: deploying one device in your lab is something you can probably complete in a couple of hours.

The real challenge is not in the first deployment. It’s in consistently managing a fleet of clusters together with a fleet of edge devices. To reiterate: having one cluster with a “hello world” application is easy. Managing hundreds or thousands of clusters with real-world applications on them is hard.

How do you maintain consistency across different environments?

With time your Kubernetes clusters will inevitably spread across multiple environments, exposing you to configuration drift. The whole idea of Kubernetes being agnostic of underlying infrastructure works in principle, but as soon as you start the implementation phase and especially day 2 operations (monitoring, backup, compliance and others) you’ll see that the devil is in the details:

  • Which CSI/CNI would you choose for different environments?
  • Which LoadBalancer would you choose for edge?
  • How would you manage all your Kubernetes resources programmatically, especially when vendors in your toolchain change hands (as with IBM’s acquisition of HashiCorp), introducing risk?
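One way teams tame this drift is a declarative model such as Cluster API, where the cluster definition keeps the same shape everywhere and only the infrastructure reference changes per environment. A minimal sketch — the names and the provider kinds shown are illustrative:

```yaml
# A Cluster API `Cluster` object: the same declarative shape whether the
# infrastructure is AWS, Azure, or an edge provider. Names are illustrative.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: store-0042
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: store-0042-control-plane
  infrastructureRef:              # the only part that varies by environment:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster           # e.g. a lab provider; AWSCluster, AzureCluster, etc.
    name: store-0042-infra
```

The appeal is that fleet tooling can reconcile hundreds of these objects the same way, rather than maintaining a separate provisioning pipeline per environment.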

Security and scale

Another important topic for hybrid Kubernetes deployments is security: you’ve probably secured your cloud installation with your provider’s encryption mechanisms (and if you didn’t, you should probably go do that now), but edge presents its own set of specific challenges:

  • What happens if one of your edge devices gets stolen — would an attacker be able to get access to business-critical information?
  • When you scale to hundreds or thousands of edge locations, are you confident that your management solution will scale without becoming a bottleneck?
  • If a local device suffers a problem, would a person on site be able to do anything about it?

Solving edge-to-cloud management with Palette

If you’re looking for a single solution to help you effectively manage Kubernetes across both cloud computing and edge environments, let us introduce you to Palette.

Palette gives you a unified management interface that’s designed to be easy to use and high performing, even as you scale to thousands of clusters across multiple environments. 

It leverages and extends CNCF projects like Cluster API so you can automate how you deploy and manage full Kubernetes software stacks to all the major cloud hyperscalers, both IaaS and PaaS, as well as edge environments from a single pane of glass. And you can drive it either from its gorgeous interface, or via API, Terraform or Crossplane providers.

Because, as we’ve discussed, edge has some special challenges, Palette has some special edge features. We offer full local management for disconnected environments, and a suite of security capabilities such as trusted boot and full disk encryption. 

Unlike some of the cloud providers with their tightly opinionated stacks, we’re all about openness. Spectro Cloud stays agnostic and lets you use the cloud provider of your choice, along with a rich set of tested integrations, so you can forget about the complexities of managing Kubernetes and concentrate on adding value for your company — even on top of PaaS products like EKS.

So wherever Kubernetes takes you, from edge to cloud, see how we can help.

Tags:
Edge Computing
Cloud
Networking
Using Palette