Embracing the Evolution to Cloud-Native Infrastructure

The emergence of cloud-native infrastructure represents the next significant evolution in infrastructure approaches.

Gartner Blog Network

January 31, 2023


The IT industry regularly undergoes significant evolutions in infrastructure approaches, starting with the shift from mainframes to minicomputers in the 1970s, followed by the adoption of client/server architecture based on industry-standard hardware and software in the 1980s-90s, and the rise of virtual machines in the early 2000s. Now, the emergence of cloud-native infrastructure represents a change of similar scale.


Gartner broadly defines the term “cloud-native” as something created to enable or leverage cloud characteristics. Cloud-native infrastructure is used to deliver platforms with agility that mirrors the agile processes for delivering cloud-native applications. Cloud-native infrastructure therefore needs to be programmable, resilient, immutable, modular, elastic, and declarative (PRIMED). There are different ways to deploy cloud-native infrastructure, but in practice, large-scale cloud-native initiatives will most likely be based on containers and Kubernetes. As Kubernetes becomes the foundation for a growing number of applications, both internally developed and delivered by ISVs, it effectively becomes the “infrastructure” on which these applications are deployed.
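
To illustrate the "declarative" part of PRIMED, the sketch below submits a desired end state to a cluster rather than a sequence of provisioning steps. It is a minimal sketch, not a recommended pattern: it assumes the official Kubernetes Python client, a reachable cluster with a local kubeconfig, and illustrative names such as "web" and the nginx image tag.

```python
# Hedged sketch: declare a desired state and let the Kubernetes control plane
# converge to it. Assumes the official "kubernetes" Python client, a reachable
# cluster, and a local kubeconfig; the name "web" and image tag are illustrative.
from kubernetes import client, config

config.load_kube_config()  # read credentials from ~/.kube/config

desired_state = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # the end state, not the steps to reach it
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}

# Submit the declaration; scheduling, restarts, and keeping three replicas
# running are handled by the cluster's controllers, not by this script.
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=desired_state
)
```

The point of the example is that the platform, not the operator, owns the work of reconciling reality with the declaration.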


Compared with machine-centric virtual infrastructure, cloud-native infrastructure is fundamentally application-centric. When based on Kubernetes, cloud-native infrastructure introduces some practical changes, such as pods effectively becoming the CPUs, persistent volume claims (PVCs) becoming the data storage devices, and service connectivity layers such as service meshes becoming the network. Cloud-native infrastructure will also exploit the evolution of compute, storage, and networking technology at lower levels of the infrastructure, such as running containers on bare-metal servers, offloading tasks to specialized function accelerator cards (FACs), using processors based on architectures such as ARM, and running code in lightweight sandboxes such as WebAssembly (Wasm).
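
To make the "PVCs as the data storage devices" analogy concrete, here is a small, hedged sketch, again assuming the official Kubernetes Python client plus a cluster with a default StorageClass; the claim name and size are illustrative. An application team requests capacity as a PersistentVolumeClaim, and the cluster's storage layer, rather than a storage administrator, satisfies it.

```python
# Hedged sketch of a PVC acting as the "data storage device": capacity is
# requested declaratively and fulfilled by whatever storage backs the cluster.
# Assumes the official "kubernetes" Python client and a default StorageClass;
# the claim name and size are illustrative.
from kubernetes import client, config

config.load_kube_config()

claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# Pods then mount "app-data" by name instead of referring to any specific
# disk, LUN, or cloud volume.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=claim
)
```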

More importantly, unlike previous waves of infrastructure evolution, the introduction of cloud-native infrastructure will require more than just the adoption of new architecture principles and technology. New operational practices like GitOps, which leverage the active control plane in Kubernetes and its declarative configuration, and consumption-based models for infrastructure sourcing are also fundamental to its implementation. To achieve the full potential of cloud-native infrastructure, all three of these aspects must be addressed holistically.
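
As a rough sketch of the GitOps idea rather than any particular tool's implementation, the snippet below treats a Git repository as the source of truth and hands its manifests to the Kubernetes control plane to converge on. The repository URL is hypothetical, and it assumes git and kubectl are installed and authenticated; in practice this loop runs inside the cluster via an agent such as Argo CD or Flux, not as a script.

```python
# Simplified, hedged sketch of one GitOps reconciliation pass: fetch the
# declared state from version control and ask the cluster to converge on it.
# The repository URL is hypothetical; assumes git and kubectl are available
# and authenticated. Real deployments use an in-cluster controller instead.
import subprocess
import tempfile

CONFIG_REPO = "https://example.com/platform/cluster-config.git"  # hypothetical

with tempfile.TemporaryDirectory() as workdir:
    # Git, not an operator's shell history, holds the desired state.
    subprocess.run(["git", "clone", "--depth", "1", CONFIG_REPO, workdir], check=True)
    # Apply the declarative manifests; Kubernetes controllers do the rest.
    subprocess.run(["kubectl", "apply", "--recursive", "-f", workdir], check=True)
```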

This document (Gartner subscription required) offers guidance to I&O technical professionals for implementing infrastructure that is optimized for cloud-native architecture using Kubernetes. The goal for deploying cloud-native infrastructure is to support a self-service platform for developing and/or delivering applications based on cloud-native architecture. Kubernetes is typically the core of cloud-native infrastructure today, but it will become less visible over time as it is increasingly delivered with a serverless experience, even as it is pushed further to the edge. Ultimately, software product teams will expect to work in an environment where low-level compute, storage, and networking resources are abstracted away.

This article originally appeared on the Gartner Blog Network.

About the Author

Gartner Blog Network

The Gartner Blog Network has expert views on today’s technology and business topics and trends. 
