The growth of edge processing has encouraged a market for software designed specifically for edge deployments. For example, all of the major Linux vendors offer a version of their distributions designed specifically for edge or IoT deployments; these are much smaller than the distributions that typically run in traditional data centers, yet similar enough that they can be administered in much the same way.
A similar situation is developing for running containers at the edge. Since edge computing is almost by definition a cloud-native environment, it's now become common to see scaled-down versions of Kubernetes being deployed at edge locations, such as servers located in retail outlets, branch offices or in manufacturing facilities, as well as in unmanned edge facilities that might be processing data from cell phone apps or surveillance cameras.
Minified Linux at the Edge
The new generation of edge and IoT devices is now generally capable of running standard versions of Linux, but smaller distributions still offer advantages for edge processing. First, a smaller version of Linux presents a smaller attack surface, thus boosting security. More importantly, since edge and IoT operating systems generally run in read-only mode, a Linux stripped to its bare essentials is less likely to develop maintenance problems. This is especially important at edge locations where the nearest technician might be a day or more away.
The big three major Linux vendors (Red Hat, Canonical and SUSE) and others have developed minified edge distributions stripped of all but essential packages for IoT and edge deployments.
Red Hat, for example, offers Red Hat Enterprise Linux for Edge.
"With RHEL for Edge we use a tool called Image Builder that with about four clicks will give you an image that will have just shy of 400 packages, so it's fairly small," said Ben Breard, Red Hat's senior principal product manager for RHEL and edge offerings. "It's not a five-megabyte distro by any stretch, but it's reasonably small."
By contrast, he said, a standard installation of standard RHEL running in a data center or cloud will contain 8,000 or more packages.
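As a sketch of that workflow: Image Builder consumes a TOML blueprint that lists only the packages to layer on top of the minimal base. The blueprint below is a hypothetical example; the blueprint format and the composer-cli commands mentioned afterward are part of Red Hat's Image Builder tooling, but the names used here are illustrative.

```toml
# Hypothetical Image Builder blueprint for a small RHEL for Edge image.
# The blueprint name and the package choice are illustrative, not defaults.
name = "edge-minimal"
description = "Stripped-down RHEL for Edge image"
version = "0.0.1"

# Each [[packages]] entry layers one extra package onto the base set.
[[packages]]
name = "vim-minimal"
version = "*"
```

A blueprint like this would typically be registered with `composer-cli blueprints push edge-minimal.toml` and built as an edge commit with `composer-cli compose start-ostree edge-minimal edge-commit` (exact image-type names vary between RHEL releases).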
Breard said that RHEL for Edge installs in read-only mode, with the exception of the /etc and /var directories, which are made writable.
"Because when you're out on an edge, if you're 100% immutable that's not very practical, because that often requires more infrastructure to pipe in configs and other things that are super simple in a cloud environment, but not in an edge," he said. "I think it's a perfect balance of giving you mutability where you need it and want it and benefit from it, and then not so much where it's a nightmare to manage."
RHEL for Edge isn’t the only edge processing offering. The server vendor SUSE distributes a similar product, SUSE Linux Enterprise Micro, and Canonical offers Ubuntu Core, which out-of-the box supports Snaps, Ubuntu's easy way to install sandboxed software on Linux. The latter gives Ubuntu something of an advantage in non-containerized environments, since Snaps can be built to contain any Linux packages an application might need that aren't included in the stripped-down edge distribution.
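On an Ubuntu Core device, installing a sandboxed application takes a single snap command. A minimal sketch, using the Snap Store's "hello-world" demo snap (any edge application packaged as a snap installs the same way):

```shell
# Install a sandboxed app from the Snap Store (requires snapd).
snap install hello-world

# The snap carries its own dependencies, so it runs even on a
# stripped-down distribution that lacks the usual library packages.
hello-world
```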
Other tailored-for-edge distributions include Yocto, a Linux Foundation project for those who want to roll their own minified Linux for use at the edge or on IoT devices.
"The gist of the [Yocto] project is that you can build your own Linux distribution with just the things you actually need," said Joao Correia, a technical evangelist for CloudLinux's KernelCare, a Linux-focused security patch service that supports Yokto.
"It gives you an à la carte choice of components that you can embed on your Linux device. If you only need a specific subset of drivers or if you only need to write to a specific file system, you only pick those modules. Then through the tools that the Yocto project provides, you get a small Linux distribution that's tailored for your special case."
Containers not only make it possible to add new applications to an edge deployment, but also make it easy to keep those applications patched and up to date. Because containers are increasingly likely to be part of an edge processing environment, Kubernetes plays an essential role in most edge deployments.
"Kubernetes tends to come in where there's a need for compute, but compute doesn't happen in isolation, especially at the edge," said Sheng Liang, president of engineering and innovation at SUSE. "Compute really only happens when there's data to process, generally from some kind of a medical device, energy platform, surveillance platform or something that actually collects data to such a degree that they obviously would not be able to stream it to the web."
It's only natural that IT departments needing to run containers in edge deployments would want to use Kubernetes, since that's likely what they're using in their on-premises data centers or in the public clouds. The problem is that Kubernetes is an unwieldy, complex platform with a reputation for being difficult to operate, which doesn't fit the edge's need for small, nimble, low-maintenance operations.
In 2019, the Kubernetes-focused startup Rancher (now a division of SUSE) tackled the problem by releasing K3s, a much stripped-down but fully functional version of Kubernetes for edge and IoT. SUSE/Rancher now uses K3s, instead of upstream Kubernetes, in RKE, its Kubernetes engine for data centers.
"K3s is designed to be run unattended, lights out, embedded, whereas regular Kubernetes needs to be babysat," Liang said. "With K3s sometimes it's you just deploy and forget it. It works more like firmware, or like embedded software."
Although still distributed by SUSE/Rancher, these days K3s is officially a CNCF project, as is Canonical's MicroK8s (pronounced "micro-kates"), an Ubuntu-branded version of Kubernetes. Like K3s, MicroK8s has a small footprint, but it has the added advantage of including add-ons that ease the task of deploying Istio, Knative, Cilium, Grafana and other widely used enterprise software in containers.
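A minimal sketch of that add-on workflow: MicroK8s installs as a snap, and add-ons are switched on by name rather than by hand-writing manifests. The dns and dashboard add-ons used below are bundled with MicroK8s; Istio, Knative and others are enabled the same way.

```shell
# Install MicroK8s from the Snap Store (classic confinement is required).
sudo snap install microk8s --classic

# Enable bundled add-ons by name.
sudo microk8s enable dns dashboard

# MicroK8s ships its own kubectl under the microk8s command.
microk8s kubectl get nodes
```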
Kubernetes edge processing alternatives to K3s and MicroK8s include KubeEdge, another CNCF project, which synchronizes Kubernetes deployments between cloud and edge and requires software running in both the user's cloud and edge infrastructure; and Minikube, which is well suited to running Kubernetes in a Windows environment but requires a hypervisor such as VirtualBox or KVM, because it creates a single-node cluster inside a virtual machine.
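Minikube's hypervisor requirement shows up directly on the command line. A minimal sketch; the --driver flag is real, and VirtualBox is one of the documented drivers:

```shell
# Start a single-node cluster inside a VirtualBox VM.
minikube start --driver=virtualbox

# kubectl talks to the cluster running in the VM.
kubectl get nodes

# Shut the VM down when finished.
minikube stop
```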