When to Use Kubernetes Orchestration: 5 Factors to Consider

Kubernetes is a great tool, but it’s not for every organization. Here’s how to decide when to use Kubernetes.

Christopher Tozzi, Technology analyst

March 16, 2020


Kubernetes is all the rage these days, and there is no shortage of advice out there from IT analysts telling you when to use Kubernetes. But the fact is that Kubernetes, like any software platform, is not the right solution for everyone. Indeed, Kubernetes is sometimes over-hyped to the point that many IT teams think they can’t live without it when, in reality, it’s overly complicated for their needs.

To that end, here’s a look at how to decide whether it makes sense to adopt Kubernetes.

What Kubernetes Does

As you likely know if you follow the container scene, Kubernetes is an open source tool for orchestrating containers. It automates tasks like starting and stopping containers, and balancing load between different instances of the same container.

In short, Kubernetes’ main purpose is to make it possible to run complex containerized applications on a large scale by minimizing the amount of management that human engineers have to perform manually.
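To make this concrete, the instructions Kubernetes works from look something like the following Deployment manifest, which declares that three copies of a container should always be running and leaves it to Kubernetes to start, stop and balance them. This is an illustrative sketch; the names `my-app` and `my-app:1.0` are placeholders, not a real application or image.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # placeholder name
spec:
  replicas: 3           # Kubernetes keeps three instances running at all times
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # placeholder image
```

If an instance crashes or a host fails, Kubernetes notices that only two replicas are running and starts a third, with no human intervention.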

When to Use Kubernetes--and When Not to

So, if you love automation and hate having to perform repetitive tasks manually, Kubernetes may be a good tool for you.

But you shouldn’t go out and install Kubernetes based on these factors alone. There are some important additional considerations to weigh when deciding whether and when to use Kubernetes.

1. Kubernetes infrastructure size

One important factor that influences whether Kubernetes will work well for you is how large your infrastructure is.

Kubernetes is designed to orchestrate containers that are spread across truly massive environments--meaning many dozens of host servers. As a rule of thumb, if you have fewer than 50 servers in your infrastructure, you probably don’t have enough to leverage the full benefits of Kubernetes.

This isn’t to say that Kubernetes won’t work on smaller infrastructures. Indeed, you can run Kubernetes on just a single host machine if you want. But since part of the point of Kubernetes is to provide high availability by spreading containerized applications across very large clusters, you miss out on some of its value if you have only a handful of servers. And, given how complex it is to set up and maintain Kubernetes, it’s probably not worth the effort if your infrastructure is not large enough to deliver fully on Kubernetes’ high-availability promise.

On smaller infrastructures, stick with a less complex orchestrator, like Docker Swarm, or use a cloud-based container service with built-in orchestration, such as Amazon ECS.

2. Kubernetes operating system environment

Kubernetes is primarily a Linux technology. Although it can be used to manage containerized applications hosted on Windows servers that run as so-called worker nodes within a Kubernetes server cluster, the main servers (in other words, the “master” nodes) that host Kubernetes’ core services have to be Linux.

So, if you’re a Windows-centric shop, Kubernetes is not a good tool for you. You’re better off using an alternative container orchestrator, like Docker Swarm, that is more Windows-friendly.

3. Installing and setting up Kubernetes

Before deciding to use Kubernetes, it’s important to assess the amount of time you can devote to setting it up.

Kubernetes itself (meaning the plain, open source version) does not have a built-in installer, nor does it offer much in the way of one-size-fits-all default configurations. You’ll likely need to tweak (or write from scratch) a lot of configuration files before you get your cluster up and running smoothly. Thus, the process of installing and configuring Kubernetes can be a very daunting one that consumes many days of work.
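To give a sense of what that configuration work involves, a cluster bootstrapped with kubeadm (one common installation path for plain Kubernetes) is driven by files along the lines of the following. This is an illustrative sketch only; the version, subnet and socket values are placeholders you would have to choose for your own environment.

```yaml
# kubeadm-config.yaml -- illustrative values only
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.27.0"        # placeholder; pin to the release you intend to run
networking:
  podSubnet: "10.244.0.0/16"        # must not overlap your existing networks
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/containerd/containerd.sock"  # depends on your container runtime
```

And a file like this covers only cluster bootstrap; networking plugins, storage and access control each bring configuration of their own.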

Some Kubernetes distributions offer interactive installer scripts that help automate much of the setup process. If you use one of these, setup and installation can be accomplished in a day or two. But it’s still by no means a turnkey process.

A third option is to run Kubernetes as a managed service in the cloud using a solution like Google Kubernetes Engine. In that case, installation and setup are handled for you. The tradeoff is that you have less choice in determining how to configure your Kubernetes environment.

The bottom line: Don’t underestimate how hard it will be to configure Kubernetes. Make sure the effort is worth it before you start down the rabbit hole. (Pro tip: If you’re unsure just how hard it will be to set up Kubernetes for your team on a production cluster, you can experiment with a miniaturized Kubernetes distribution, such as k3s or minikube, to get a sense of how much effort setup and configuration entail.)

4. Kubernetes and declarative configuration management

Kubernetes adopts what is known as a declarative approach to configuration management. What this means is that you write configuration files to tell Kubernetes how an application should behave, and Kubernetes automatically figures out how to make the application conform.

Declarative configuration management is the opposite of imperative configuration management, in which you configure each component of your app yourself to get it to behave the way you want.

Declarative configurations are part of what makes Kubernetes so powerful and scalable for many use cases. You can set up a configuration once and apply it as many times as you want.
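The distinction is easier to see in code. Below is a minimal sketch, in plain Python rather than anything Kubernetes actually ships, of the reconcile loop that declarative management implies: you declare a desired state once, and a control loop converges the observed state toward it, no matter where it starts.

```python
# Toy model of declarative reconciliation (not real Kubernetes code).
# The desired state is declared once, as a Kubernetes config file would declare it.
desired_state = {"replicas": 3}

def reconcile(observed):
    """Start or stop instances until the observed state matches the desired state."""
    actions = []
    while observed["replicas"] < desired_state["replicas"]:
        observed["replicas"] += 1
        actions.append("start container")
    while observed["replicas"] > desired_state["replicas"]:
        observed["replicas"] -= 1
        actions.append("stop container")
    return actions

observed_state = {"replicas": 1}      # e.g., two instances just crashed
steps = reconcile(observed_state)
print(steps)            # the controller started containers to close the gap
print(observed_state)   # now matches the declared state
```

Under the imperative model, by contrast, you would issue the "start container" commands yourself each time the environment drifted, rather than declaring the end state and letting a controller do the work.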

But what if your configuration needs change constantly, or vary among different parts of your workload or environment? In that case, declarative configuration management can become a hindrance, because you’ll find yourself constantly tweaking configuration files that are supposed to be, more or less, a set-it-and-forget-it affair.

So, think about your application’s configuration needs. Kubernetes is a good choice only if your desired configurations are relatively universal and static.

5. Kubernetes and multicloud

One of the main selling points of certain Kubernetes distributions, such as Rancher, is that a single Kubernetes deployment can orchestrate multiple clusters--even clusters that are spread across different public or private clouds. This makes Kubernetes a great tool for helping to tame the complexity of a multicloud architecture.

This isn’t to say that you need Kubernetes to do multicloud effectively. Kubernetes on multicloud makes sense only if you are deploying containerized apps across multiple clouds (as opposed to, say, using multiple clouds for storage, in which case managing them with Kubernetes would not be very logical), and only if the setup and configuration effort is worth the payoff.

The takeaway here is that your current multicloud strategy and/or plans for multicloud expansion should be a consideration when thinking about whether and when to use Kubernetes.


Kubernetes is a fantastic tool that offers tremendous value when used in the right settings. But it falls short of killer-app status because it doesn’t deliver value in all use cases. Before being consumed by the hype and becoming convinced that you can’t live without Kubernetes, perform a sober assessment of your needs and determine whether Kubernetes will actually help you run apps more efficiently and reliably.

About the Author(s)

Christopher Tozzi

Technology analyst, Fixate.IO

Christopher Tozzi is a technology analyst with subject matter expertise in cloud computing, application development, open source software, virtualization, containers and more. He also lectures at a major university in the Albany, New York, area. His book, “For Fun and Profit: A History of the Free and Open Source Software Revolution,” was published by MIT Press.
