As containers become increasingly central to cloud computing architectures, you’re likely to hear more and more talk about container runtimes. Container runtimes may sound esoteric and technical, and that’s because they are. With that said, it’s not only developers who should care about runtimes. Understanding what container runtimes do, which runtimes are available and how they differ is key to planning an effective containerization strategy.
Toward that end, keep reading for a primer on container runtimes.
Definition of Container Runtime
In the context of containerization strategy, a container runtime is, simply put, the software that runs containers. In other words, it provides an execution engine that allows you to take a container image (which is essentially a blueprint describing what a container should run and how it should behave) and start a container based on it.
There are a variety of other tasks related to working with containers that are not addressed by runtimes. Runtimes don’t manage container images or orchestrate container instances. They just allow containers themselves to run.
If you like analogies, think of a container as a car, and a container runtime as the engine. The engine makes the car move just like runtimes allow containers to run. But that’s all runtimes do; tasks like steering the car, feeding gas to the engine and so on are handled by other components.
You could also compare container runtimes to virtual machine hypervisors. Just as a hypervisor is what actually executes your virtual machines, a container runtime executes containers. This is not to say that container runtimes and virtual machine hypervisors are similar in other ways (they are not), but the role they play in their respective system architectures is comparable.
Examples of Container Runtimes
When developing containerization strategy, organizations must consider that there are more than a half-dozen widely used container runtimes today, including LXC, runc, CRI-O and containerd. Most conform to the runtime specification of the Open Container Initiative (OCI), a community-based open standard, so they all do the same basic thing. Most of them are also interchangeable, in the sense that you can take one runtime and swap it out for another within the same cloud or the same Kubernetes environment.
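To make the OCI runtime specification a bit more concrete, the sketch below builds a minimal, illustrative runtime `config.json` of the kind a spec-compliant runtime such as runc reads from a container bundle. The fields shown are a simplified subset of the spec, not a complete or authoritative configuration.

```python
import json

# A minimal, illustrative OCI runtime config. A real bundle pairs a file
# like this with a root filesystem directory; an OCI-compliant runtime
# reads both and starts the container process described here.
config = {
    "ociVersion": "1.0.2",       # version of the OCI runtime spec
    "root": {"path": "rootfs"},  # container root filesystem, relative to the bundle
    "process": {
        "args": ["/bin/sh"],     # the process the runtime executes inside the container
        "cwd": "/",
    },
    "linux": {
        # Namespaces give the process its isolated view of the system.
        "namespaces": [
            {"type": "pid"},
            {"type": "mount"},
            {"type": "network"},
        ],
    },
}

print(json.dumps(config, indent=2))
```

Because every OCI-conformant runtime consumes the same bundle format, a configuration like this is one reason runtimes can be swapped for one another.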
You may be wondering, then, why there are so many different runtimes if they all perform the same job. Part of the answer has to do with the history of the container ecosystem. Some runtimes were created before the OCI specification standardized design, and so they were not always interchangeable with each other. Others were backed by specific vendors that supported the development of particular runtimes as a way of trying to claim their slice of the container market.
But the other part of the answer is that some runtimes do have critical technical differences. Most notable in this respect are the Kata and Nestybox runtimes, which aim (among other things) to provide more isolation between containers than other runtimes do. Kata Containers, for example, runs each container inside a lightweight virtual machine.
High-Level vs. Low-Level Container Runtimes
When developing containerization strategy, some folks draw a distinction between so-called high-level and low-level runtimes. High-level runtimes provide extra features that go above and beyond the essential task of executing containers. Low-level runtimes provide only the core functionality of container execution, and depend on other tools to handle related tasks. For example, containerd and CRI-O are typically described as high-level runtimes because they also handle tasks like pulling and managing images, while runc is a low-level runtime that they delegate to for actually executing containers.
I don’t think the distinction between high-level and low-level runtimes is very useful, because at the end of the day all of the runtimes still perform the same basic task of container execution. Still, wrapping your mind around the concept of high-level vs. low-level runtimes might at least help you to make better sense of what the differences between the various runtimes are.
Runtimes and Kubernetes
If you are talking about containerization strategy, especially in the cloud, you’re probably also talking about Kubernetes, the open source container orchestration platform. Kubernetes is now widely used to simplify the task of deploying containerized applications at scale.
If you’re wondering which runtime Kubernetes uses, the answer is that it is compatible with any runtime that implements its Container Runtime Interface (CRI), which covers all the mainstream options. So, if you want to use Kubernetes, you’re free to use basically any runtime you like.
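As a sketch of what that freedom looks like in practice, Kubernetes exposes runtime choice through its RuntimeClass resource: you register a named handler for a runtime, then reference it from a Pod. The handler name (`kata` here) and the Pod details are illustrative and depend on how your cluster is installed.

```yaml
# Hypothetical example: register a runtime handler, then opt a Pod into it.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata   # must match a handler configured on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: kata   # this Pod runs under the kata runtime
  containers:
    - name: app
      image: nginx
```

Pods that omit `runtimeClassName` simply use the cluster’s default runtime, which is one reason swapping runtimes rarely disrupts the rest of the stack.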
The “Best” Runtime
Now, the ultimate question: Which runtime is best for your containerization strategy? The answer is that there is really no answer.
In terms of core functionality, all container runtimes available today basically do the same thing. In fact, since most of them are based on the same design specifications from the OCI, they are built according to the same plans. And, in most contexts, container runtimes are interchangeable; the runtime you use will not restrict your ability to pick and choose other components within your software stack according to your liking.
Thus, in most cases it’s not worth investing much time in worrying about which runtime to use. If you have a container platform that already includes a runtime (for example, if you are using a Kubernetes distribution that has a default runtime), you are probably fine sticking with it. If you don’t yet have a runtime, in most instances you can simply choose a popular solution like runc and achieve everything you need.
For certain specialized use cases, the isolation-focused runtimes mentioned above (namely, Kata and Nestybox) might be useful. But they remain in development and are not ideal at this point for production use.
Container runtimes play a key role in driving containerized application stacks, both in the cloud and on-premises. The fact that there are so many runtimes has to do with history and competition between companies at least as much as it does technical considerations. So, although understanding what runtimes do is important for designing a well-oiled containerized software stack, in most cases it doesn’t actually matter which runtime you use.