
Is Kubernetes Changing Data Centers in Perceptible Ways?

After virtualization catalyzed a re-imagining of the data center, containerization promised to carry the torch further. Has it delivered?

While making life easier for developers by further abstracting computing infrastructure, software containerization has also been hailed as a driver for more efficient utilization of processing power in data centers. As we get more out of a single server chip than before, shouldn’t it follow that data centers that house those servers can be smaller? Instead of a 100MW complex, maybe we can have a 40MW facility and save hundreds of millions of dollars in the process?

Although the containerized software movement, now led by the Kubernetes orchestrator, is barely six years old, that should be enough time for a pattern to have emerged. Has containerization led to more energy-efficient data centers?

“I’m not sure that we have a lot of really exhaustive, quantitative data around this yet,” admitted Jonathan Bryce, executive director of the OpenStack Foundation.

Since the Foundation’s establishment in 2012, Bryce has been the face of the open infrastructure movement in software. OpenStack’s original objective was to establish an easily deployable framework for building data centers (not the buildings and infrastructure, but the IT systems they support) that could be provisioned like public clouds. The idea was that administrators could deploy virtual machines on their own servers the same way they could on Rackspace, GoGrid, or AWS. Bryce witnessed server virtualization drive up processor utilization and efficiency. The trend resulted mostly from enterprises’ desire to replicate the public cloud’s ease of use in their own data centers.

He watched data centers shrink. By his own estimate, facilities can now process workloads in one-fifth to one-tenth the space they consumed as recently as 2005. Other factors also played into that net footprint reduction, including the increasing density of data storage, the miniaturization of processing power (up until Moore’s Law ran out of steam), and the introduction of accelerators into high-performance calculations.

“The way that virtualization was initially implemented was as an efficiency tool for IT departments,” remarked Bryce. “If I wanted a server, I submitted a ticket. On the other end of the ticket, rather than physically going into a data center, racking the server, and connecting cables... the IT administrator would go into VMware, punch some buttons, drag-and-drop some things, and then I’d have that server more quickly. But most of the efficiency was for the IT person rather than the developer or the end user.”

Virtualization replaced the requisitioning process for new hardware with an automated ticketing process for the same resources, just on a software level. Along with automation came the need for rich infrastructure telemetry. Increasingly, software and systems were making their status and operating histories available, not so much through dedicated logs as through APIs that logging tools could contact. Power management software made use of the emerging API libraries being opened up by servers’ internal management functions.
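To make that concrete, the sketch below polls a server’s power draw through the DMTF Redfish standard, the kind of internal management API described here. It is a minimal illustration only; the BMC address, credentials, and chassis ID are placeholders, not references to any real system.

```python
# Minimal sketch: reading a server's power draw over a Redfish-style BMC API.
# The BMC address, credentials, and chassis ID are all hypothetical.
import requests

BMC = "https://10.0.0.42"     # hypothetical management controller address
AUTH = ("admin", "password")  # placeholder credentials

def power_consumed_watts(chassis_id: str = "1") -> float:
    """Fetch the instantaneous power consumption of one chassis."""
    url = f"{BMC}/redfish/v1/Chassis/{chassis_id}/Power"
    resp = requests.get(url, auth=AUTH, verify=False, timeout=5)  # self-signed BMC cert
    resp.raise_for_status()
    # PowerControl is an array; the first entry covers the whole chassis.
    return resp.json()["PowerControl"][0]["PowerConsumedWatts"]

print(f"Chassis power draw: {power_consumed_watts()} W")
```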

“What the API did was give you the ability to get those resources really quickly and then change them really quickly as well, if your application performance suffered,” Bryce explained. “Once we started to see the actual cloud-native environments, they provided that API for the data center. That gave me the ability to ask for the resources I thought I’d need, and then adjust that if it turned out I needed something different.”
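That ask-then-adjust loop is what orchestrator APIs reduce to a single call today. As a hedged sketch, the snippet below uses the official Kubernetes Python client to resize a hypothetical deployment; the workload name and namespace are illustrative, and a cluster reachable via a local kubeconfig is assumed.

```python
# Sketch: requesting compute resources through a cluster API, then adjusting.
# Assumes a cluster reachable via local kubeconfig; names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

def rescale(name: str, namespace: str, replicas: int) -> None:
    """Ask the cluster for more (or fewer) copies of a workload."""
    apps.patch_namespaced_deployment_scale(
        name, namespace, body={"spec": {"replicas": replicas}}
    )

rescale("web-frontend", "default", replicas=3)  # the resources we think we need
rescale("web-frontend", "default", replicas=8)  # demand spiked, so we ask again
```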

This innovation came during a period in which Bryce saw average CPU utilization rates jump from 15 percent to 80 percent. Processors that had been running one workload at any one time were suddenly running four or five.

In theory, containerization can drive utilization even further.

For the past five years, since the dawn of the containerization revolution, David Kramer has been a solutions architect for transitioning data center customers: first with Red Hat, then with Docker, Inc., and now with Mirantis, which acquired the Docker Enterprise portfolio in 2019. His job has been to plot the course for customer data centers making the move from the first generation of VMs to containerized software infrastructure.

“We’d go up to a whiteboard and draw out a typical data center deployment,” mapping out what happens when a box gets pressed into production and becomes a host for a workload, Kramer said. The usual steps would be: order a server, evaluate the various rack-and-stack scenarios, select one, provision the server’s bare metal, add the hypervisor, and lay the golden image of the VM workload onto the hypervisor. From there IT would give the server to a developer who would provision and install the workload.

“The second question would be, ‘Okay, what is the actual CPU utilization of that workload?’” he continued. “And we typically found that it was somewhere in the neighborhood of 30 percent.”

But that 30 percent was attributable to the application and all its dependencies, such as code libraries and the operating system. Running all those dependencies certainly accounted for increased utilization. But in a way, that was the problem. VM environments were not taking advantage of the fact that multiple applications inhabiting the same server, but sitting in separate VMs, often shared the same dependencies, meaning a lot of duplicate code was being executed, eating up hardware resources.

“Sometimes those dependencies deployed on a server don’t co-exist with another application’s,” Kramer went on. “So what do we do? We deploy another VM. All of a sudden, we have the overhead of that VM stacking up versus the actual overhead of compute resources. Containerization almost inverts that story.”

In other words, once a container engine such as Docker becomes the support layer for applications, even if it’s mounted in its own VM, the next step for appropriate workload allocation is to eliminate all that overhead of redundant dependencies. Using the old formulas for determining how busy your CPUs are, this would drive utilization down while driving efficiency up.
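A back-of-envelope model shows the effect. Every figure below is an illustrative assumption, not a measurement from any of the vendors quoted here: if part of each VM’s measured CPU load is redundant operating system and library work, stripping that out lowers per-workload utilization and raises the number of workloads each host can carry before hitting its scheduling ceiling.

```python
# Toy model of the "utilization down, efficiency up" effect.
# Every figure here is an illustrative assumption, not measured data.
APP_CPU = 0.20             # useful application work per workload (CPU fraction)
VM_OVERHEAD = 0.10         # redundant guest OS / duplicated-dependency load
CONTAINER_OVERHEAD = 0.01  # shared-kernel container runtime overhead
HOST_CEILING = 0.80        # stop scheduling once a host hits 80 percent

per_vm = APP_CPU + VM_OVERHEAD                # 0.30 of a host per workload
per_container = APP_CPU + CONTAINER_OVERHEAD  # 0.21 of a host per workload

print(f"Workloads per host as VMs:        {int(HOST_CEILING // per_vm)}")         # 2
print(f"Workloads per host as containers: {int(HOST_CEILING // per_container)}")  # 3
# Same useful work per workload, but lower measured utilization per workload,
# so more workloads fit on each host before it reaches the ceiling.
```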

Perhaps then, solutions architects could take advantage of the opportunity to put even more applications on every physical host, and data centers should get smaller. Is this happening?

“The answer is no,” responded Chip Childers, CTO at the Cloud Foundry Foundation, “from a purely engineering perspective that excludes a whole bunch of other trends and realities that are going to impact the full answer.”

“What are some of the other things that are going to happen if you use containers?” he continued. “You’re also probably trying to speed up your development teams. Your development teams are probably developing more distributed architectures — starting to blend apps running purely in your data center with apps running in the public cloud. You’re also probably going to start producing more and more software.”

It’s an example of the Jevons Paradox, he said. If you increase the efficiency with which you use a commodity and improve access to that commodity, “you’re actually going to use more of it. Because the more efficient you are at using something, the more possibilities you then expose. It applies to the use of cloud services, of cloud-native architectures within your data center, and a lot of the people and process changes that are generally made to take better advantage of those. We see more applications being developed and more technology being consumed.”
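A toy model makes the paradox concrete. The demand curve and elasticity below are textbook assumptions chosen for illustration: when demand for compute is elastic enough, halving its effective cost more than doubles how much of it gets consumed.

```python
# Toy Jevons Paradox model with an assumed constant-elasticity demand curve.
# An elasticity above 1 means demand grows faster than cost falls.
ELASTICITY = 1.5  # illustrative assumption, not an empirical estimate

def compute_consumed(cost: float, k: float = 100.0) -> float:
    """Textbook demand form: consumption = k * cost^(-elasticity)."""
    return k * cost ** -ELASTICITY

before = compute_consumed(cost=1.0)  # baseline units of compute consumed
after = compute_consumed(cost=0.5)   # efficiency gains halve the effective cost

print(f"Before: {before:.0f} units; after: {after:.0f} units")  # 100 vs ~283
# Total consumption rises despite, indeed because of, the efficiency gain.
```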

A 2017 report written by Robert Sty, global director of the Phoenix-based technical facility design firm HDR, and published online by Stanford University, suggested a direct, causal link between the complete virtualization of data center infrastructure and a phenomenon he was observing in large facilities, particularly hyperscale operations such as Facebook’s: the elimination of the raised floor.

“As the implementation of virtualization became more commonplace and servers became more efficient and powerful,” Sty wrote, “data center operators no longer feared operating IT cabinets at higher power densities.” Hyperscalers led the way with situating server racks directly on the slab, replacing the costly raised-floor approach with hot aisle containment (movable barriers that separate hot exhaust air streams from cool intake streams). Theoretically, the increased server densities made feasible by virtualized systems that subdivided CPU power among VMs should have made server racks more difficult to cool. But since they consumed less space, the cooling solutions that emerged were radically simplified and made less expensive.

Some have made the case that hyperscale architectures would not have been feasible without a plan that allowed for cooling high-density server racks on slab floors. In some hyperscale designs, so-called high-availability servers are housed in separate compartments. These may require racks that allow for open space and that need an alternate cooling strategy. But such a compartmentalization presumes that workloads will generally be divided into “critical needs” and “general needs” pools, or something of that sort. That’s easy enough to accomplish when each workload has its own golden image. But when they’re all orchestrated throughout server pools, in seas of containerized microservices, such subdivision may make less sense.

With a non-containerized infrastructure, workloads that require high availability tend to thrive in isolation. In that isolated environment, processing and cooling power draw can climb. Some hyperscale facility designs allow for these HA compartments. With fully automated containerization, however, each workload declares its own processing priority.
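In Kubernetes, that declaration takes the form of scheduling priority. The sketch below uses the official Python client to define a hypothetical priority class that critical workloads can reference, rather than being pinned to a dedicated HA compartment; the class name and value are placeholders.

```python
# Sketch: a workload declaring its own priority rather than living in a
# dedicated high-availability compartment. Names and values are hypothetical.
from kubernetes import client, config

config.load_kube_config()

critical = client.V1PriorityClass(
    metadata=client.V1ObjectMeta(name="business-critical"),
    value=1_000_000,  # higher values preempt lower-priority workloads
    global_default=False,
    description="Workloads that once lived in a dedicated HA compartment",
)
client.SchedulingV1Api().create_priority_class(critical)

# Any pod can now opt in by setting, in its spec:
#   priority_class_name = "business-critical"
```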

The formula Mirantis’s Kramer typically arrives at with customers, he told us, is best represented by Boxboat, one of Docker’s partners, on its website: “Server cost reduction: Increased container density and compute efficiency via Swarm will significantly reduce underutilized servers. Plus, the distributed architecture decreases your need for dedicated high availability servers.”

“What they’re saying is, your app now runs in a container. The orchestrator of the container delivers that workload in a much more efficient manner than what we can do through traditional methods,” Kramer explained.

To sum it all up, isolating workloads in containers and decoupling them from internal dependencies reduces overhead and leads to a near-term utilization decrease. Then, reapportionment of workloads throughout server clusters by Kubernetes leads to more normalized utilization levels and creates an opportunity to eliminate servers. It also leads to an opportunity to eliminate dedicated high-availability sections on the data center floor, which means less space is required for cooling infrastructure.

Still, Chip Childers’ warning persists. When a utilization window opens in a data center, the tendency is to fill it with something, and that may be the reason we haven’t seen data centers shrink as a result of containerization.
