
5 Key Ways the Kubernetes Architecture Has Evolved in 5 Years

The Kubernetes architecture turned 5 this summer. Where it’s been provides insight into where it’s going.

This summer marked the five-year anniversary of the release of Kubernetes 1.0. In that time, the world’s most popular open source orchestration platform has changed remarkably, extending its tentacles into new use cases and gaining a variety of new features. Here’s a look at five of the most momentous ways in which the Kubernetes architecture has evolved since it first debuted as an open source project, as well as what we can expect in the future.

1. Kubernetes expands beyond Docker.

Early on, the Kubernetes architecture was designed to orchestrate containers powered by the Docker runtime. That was great circa 2015, when Docker’s runtime enjoyed explosive popularity.

But as competing container runtimes matured, it became clear that Kubernetes needed to expand its purview beyond the Docker ecosystem. This started with the announcement of support for the rkt runtime in Kubernetes 1.3, and it came to full fruition with the introduction of the Container Runtime Interface in 2016 as part of the Kubernetes 1.5 release.

Today, Kubernetes’ ability to work with a range of different runtimes is part of what makes it so powerful. This ability has also helped the Kubernetes architecture stand apart from other orchestration platforms, which in some cases are tied to specific runtimes or ecosystems.
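
To give a concrete sense of that runtime flexibility, here is a minimal sketch using the official Kubernetes Python client. It assumes you have a valid kubeconfig on hand; the node names and output shown in the comments are purely illustrative.

```python
# Minimal sketch (assumes the official `kubernetes` Python client and a valid kubeconfig).
# Each node reports its CRI runtime in its status, so a single cluster can mix
# containerd, CRI-O, or Docker-based nodes and Kubernetes orchestrates them all.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    info = node.status.node_info
    print(f"{node.metadata.name}: {info.container_runtime_version} "
          f"(kubelet {info.kubelet_version})")

# Illustrative output: worker-1: containerd://1.6.8 (kubelet v1.24.3)
```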

2. Kubernetes supports Windows.

The 1.5 release was also notable for expanding the Kubernetes architecture into the Windows world. It made it possible to orchestrate Docker and Hyper-V containers on Windows servers.

At the time, there was good reason to think that containerization would become as big a deal on Windows as it already was on Linux. I’m not so sure that has proved to be the case; there are still some significant limitations to the ways you can use containers on Windows. I’ve yet to hear of anyone using Kubernetes on Windows for production purposes, although I’m sure at least someone out there is doing it. (And I’d love to chat about it if you are.)

Still, it is important symbolically that Kubernetes supports Windows (on certain releases and for certain types of apps, at least) because it underscores Kubernetes’ drive to be the universal container orchestrator. This, again, makes it different from other orchestration tools, which are designed primarily for certain ecosystems or use cases.
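
For a rough illustration of how a workload actually lands on a Windows node, the sketch below (again using the Python client, with a hypothetical pod name and an example Windows image) pins a pod to Windows machines via the standard kubernetes.io/os node label.

```python
# Sketch only: the pod name is hypothetical and the image is just an example
# Windows container image; the kubernetes.io/os label is the standard way
# the scheduler distinguishes Windows nodes from Linux nodes.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="iis-demo"),
    spec=client.V1PodSpec(
        node_selector={"kubernetes.io/os": "windows"},   # schedule onto Windows nodes only
        containers=[
            client.V1Container(
                name="iis",
                image="mcr.microsoft.com/windows/servercore/iis",  # example Windows image
            )
        ],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```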

3. Kubernetes becomes ML-friendly (thanks to Kubeflow).

You could argue that Kubernetes orchestration itself is a form of machine learning, in the sense that the Kubernetes architecture intelligently manages workloads by collecting and analyzing data from the environment where it runs.

But that’s not quite the same thing as supporting workloads that themselves center on machine learning applications. That type of support didn’t arrive in Kubernetes until late 2017, when Kubeflow debuted. Kubeflow streamlines the process of deploying machine learning stacks on a Kubernetes cluster.

For the record, it was possible to deploy machine learning workloads on Kubernetes before Kubeflow. But Kubeflow provides more abstraction between the workload and underlying clusters, making it simpler to create machine learning stacks that can be quickly ported from one Kubernetes environment to another.
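
To give a flavor of that abstraction, here is a minimal sketch of a Kubeflow Pipelines definition, assuming the kfp v1 SDK; the pipeline name and training command are placeholders, and a real ML stack would involve far more than one step.

```python
# Minimal sketch assuming the Kubeflow Pipelines v1 SDK (`pip install kfp`).
# The pipeline is defined once in Python and compiled to a portable spec that
# any Kubeflow-equipped cluster can run, which is what makes stacks easy to move.
import kfp
from kfp import dsl

@dsl.pipeline(name="toy-training-pipeline",
              description="Placeholder single-step ML pipeline")
def toy_pipeline():
    dsl.ContainerOp(
        name="train",
        image="python:3.9",                              # placeholder training image
        command=["python", "-c", "print('training...')"],
    )

if __name__ == "__main__":
    # Produces a spec you can upload to any Kubeflow Pipelines installation.
    kfp.compiler.Compiler().compile(toy_pipeline, "toy_pipeline.yaml")
```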

4. Kubernetes provides multicluster support.

Namespaces, which allow Kubernetes admins to isolate workloads running in the same cluster, were a key Kubernetes feature from the beginning.

In fact, namespaces were so important that they initially made it difficult, I think, for Kubernetes developers to envision a situation where you’d want to run workloads on different clusters. Early on, if you asked about tooling to help manage multiple clusters at the same time, you’d receive an answer like, “Just use namespaces within the same cluster!”

But, over time, the Kubernetes ecosystem has grown much friendlier toward the idea of managing multiple clusters in an efficient way. This has become a key use case for some major Kubernetes distributions, like Rancher and VMware Tanzu Kubernetes Grid, which cater to multicluster setups.

Today, you no longer look like a rebel if you insist on devoting a different cluster to each workload, instead of relying on namespace segmentation. Multicluster support has become a mainstream feature of Kubernetes.
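
As a small illustration of what day-to-day multicluster management can look like, the sketch below (Python client again, with made-up kubeconfig context names) loops over several clusters from a single script instead of relying on namespaces within one cluster.

```python
# Sketch: the context names are hypothetical entries in your kubeconfig,
# one per cluster. Dedicated tooling (Rancher, Tanzu, etc.) goes far beyond
# this, but the basic model is the same: one control point, many clusters.
from kubernetes import client, config

CONTEXTS = ["prod-us-east", "prod-eu-west", "staging"]   # hypothetical cluster contexts

for ctx in CONTEXTS:
    api_client = config.new_client_from_config(context=ctx)
    v1 = client.CoreV1Api(api_client=api_client)
    pods = v1.list_pod_for_all_namespaces()
    print(f"{ctx}: {len(pods.items)} pods across all namespaces")
```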

5. Kubernetes spans multiple clouds.

The same is true of Kubernetes environments that span multiple clouds. During the past year or so, the idea of using Kubernetes as a central control plane for deploying apps across multiple clouds has surged in popularity. It’s the vision behind platforms like Google Anthos. It has also become a target use case for vendors like Rancher.

Understandably, some public cloud vendors, including AWS and Azure, have been less eager to make their Kubernetes platforms work with competing clouds. But I suspect that may change as multicloud Kubernetes becomes more and more common.

Conclusion: Kubernetes today

By and large, Kubernetes has evolved into a much more expansive platform than it was five years ago, one that addresses a wider variety of use cases. It supports virtually any runtime you want, works on Windows as well as Linux, and can manage multiple clusters spread across multiple clouds.

Given Kubernetes’ broad functionality, it’s easy to see why it overtook competitors like Swarm to become the de facto container orchestrator, a feat that few foresaw when the platform debuted in 2015.
