What do you get when you teleconference a group of IT engineers connected to the Cloud Native Computing Foundation and ask them to talk about "What's Next in Cloud Native?" You get a conversation centered around the future of edge computing, and at the heart of that future is the container orchestration platform Kubernetes -- the glue that holds the cloud native ecosystem together, and arguably the most important project under the Cloud Native Computing Foundation’s guidance.
"Sometimes we should all pause and look at the amazing thing we've done, which is create a common platform everywhere," Jason McGee, VP and CTO of IBM Cloud Platform, said. "I think that platform has started to evolve over the last few years to handle all these different kinds of use cases -- stateful workloads, data-centric workloads and scale-out workloads. Part of the reason people say Kubernetes is complicated sometimes is because it's a general purpose platform that can handle all these things."
Kubernetes and Hybrid Cloud
The discussion touched on a number of subjects feeding into the future of edge computing, like infrastructure development investments to support the modernization of apps, and tooling to make managing Kubernetes easier for DevOps teams. However, the most widely accessible discourse centered around hybrid cloud and the edge -- two technologies that seem to be at the center of the current cloud native revolution.
"I think hybrid cloud is just the reality for most people," McGee said, starting that part of the discussion. "[Those people are] using multiple cloud providers, they have on-prem, and they're doing things at the edge. In a lot of ways, it's just not a debate, it is just the reality of the environment. With Kubernetes becoming the de facto runtime platform, we have a lot of commonalities across all those environments now, which is great."
McGee seemed to be particularly excited about the metered services that public clouds have been rolling out over the last couple of years, such as AWS Outposts, Azure Arc, Google Anthos, and IBM's recently announced Satellite, which allow public cloud services to run directly from equipment located at an on-premises or colocation data center, and often from another cloud.
"At least in the public cloud space, there's this notion of the public cloud now expanding into those other environments so we can combine the common platform that we have in cloud native with an as-a-service consumption model like we have in the cloud, and apply that not only to cloud providers and data centers, but to on-prem environments and edge environments, so people can consume as-a-service in this diverse landscape," he said.
Because these consume-it-anywhere capabilities being pushed by the clouds will often continue to provide a service even when the cloud connection goes down, they might be especially useful for some edge deployments, particularly those with unreliable network connections.
"I think once you have as-a-service consistently everywhere on the same platform, then you very quickly get into new classes of applications," he said. "There's a bunch of really interesting edge applications where we're doing edge AI data processing, IoT sensor processing, and factories where we can run in those locations, but you also want to be able to coordinate between multiple edge locations, branch network or retail store network."
"This idea of cloud, I think, is becoming even more ubiquitous as we combine the power of having one technical approach with a common consumption model that we're seeing kind of evolve out of the public clouds."
Justin Turner, director of engineering at the Texas-based grocery chain H-E-B, was on the panel to talk about cloud native from a user's perspective. He talked about how the use of multiple clouds evolved at his organization.
"As we've grown our cloud competency at H-E-B and modernized more and more of our stack, what we found is that different teams have different needs, like they look at the cloud and determine what is offered that they need," he said. "That's led to us having a multi-cloud approach. As far as I know, we're not running any of the same workloads across clouds, it's more that we might process data into one cloud and use some feature functionality that makes sense for us."
He said that building out to multiple clouds is made easier by using a write once, run anywhere cloud service that takes the pain out of deploying containers across a hybrid infrastructure using multiple clouds.
"A lot of the challenge is just connecting those environments successfully and making sure that the systems that need to talk to each other do, and we do this while still connecting to an on-premises data center," he said. "That's where the hybrid cloud comes in, where you're looking at technologies like Anthos that allow us to bring some of the magic of the cloud into our data center."
Liz Rice, chairperson for CNCF's Technical Oversight Committee and chief open source officer at the cloud-native networking startup Isovalent, pointed to Kubernetes' declarative programming model, in which apps declare their desired end state without listing the steps needed to reach it, as a benefit for hybrid deployments.
"We're seeing more and more projects looking at things like provisioning bare metal servers, and infrastructure as code genuinely managing the infrastructure down to bare metal," she said. "We're seeing that kind of Kubernetes reconciliation loop, declarative definitions, expanding all the way up the stack, and I think that's really exciting. It's a really strong validation of that declarative approach."
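The reconciliation loop Rice describes can be illustrated with a minimal sketch. This is not Kubernetes code -- the `reconcile` function and the shape of the `desired` and `actual` state maps here are hypothetical simplifications -- but it shows the core idea of the declarative model: the user states only the end state, and a controller repeatedly computes the actions needed to drive reality toward it.

```python
# Minimal, illustrative sketch of a declarative reconciliation loop.
# The function name and state shapes are hypothetical, not a real Kubernetes API.

def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired state (e.g. {"web": 3} meaning 3 replicas of "web")
    against actual state, and return the actions that would close the gap."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("create", name, want - have))   # scale up
        elif have > want:
            actions.append(("delete", name, have - want))   # scale down
    for name, have in actual.items():
        if name not in desired:
            actions.append(("delete", name, have))          # remove orphans
    return actions

# The user declared 3 replicas; only 1 is running, so the loop creates 2 more.
print(reconcile({"web": 3}, {"web": 1}))  # [('create', 'web', 2)]
```

A real controller runs this comparison continuously, which is what makes the model resilient: after a network partition or node failure, the next pass of the loop converges the system back to the declared state without any imperative replay of steps.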
Not everyone on the panel painted a rosy picture of hybrid/multi-cloud, however. Chris Aniszczyk, CNCF's CTO, chimed in with a cautionary note.
"It's a mess," he said. "Sometimes you have a company that acquires a bunch of companies and they're using different stacks -- welcome to multi-cloud at that point -- or you're large enough where you have teams that don't even know what they're doing sometimes, so they're empowered to maybe use some other type of cloud. I think it's just truly the reality of the situation, unless your organization is very top-down and could standardize everything, but I've rarely seen that happen."
Cloud Native at the Edge
It's not surprising that the conversation eventually got around to edge computing, which is increasingly becoming a part of hybrid cloud solutions.
"Lots of projects tackling this space," Rice said. "There are lots of different approaches to how much you push to the edge and how you coordinate between your edge and your cluster, and how you deal with network failure and resiliency.
"I think the interesting thing about this is that Kubernetes, as the distributed operating system, seems to be turning out to actually be really quite appropriate for a lot of different environments, including edge," she continued. "It's a little bit like saying, 'How do we think Linux would play out?' Well, it turns out it's pretty much everywhere, and I think we'll see the same thing with Kubernetes."
Aniszczyk backed up Rice's comparison with Linux.
"People will stretch Kubernetes into interesting directions just like they did with Linux, stuffing it on phones and cars," he said. "The same thing will happen in Kubernetes. It's just kind of an attractive thing, where you have already one system for a type of developer workflow that you got used to, and you're like, 'Why can't I throw that on the edge?'
"Why create something else?" he asked. "I could either strip down Kubernetes to K3s style or smaller, or create some crazy proxy mechanism via Virtual Kubelet that will pretend that it's running on something different. I think there'll be lots of different approaches and innovation in this space to make Kubernetes a decent solution for edge-based compute."
Edge environments tend to be constrained and grapple with staffing and scheduling issues, so they present difficulties that don't exist in traditional data centers or public clouds, McGee said. He pointed out that some of the issues the edge brings to the table are challenging for Kubernetes as well.
"I think there are a couple of real challenges to adapt Kubernetes to work well in that environment," he said. "Some of it is about simplifying and shrinking it down -- projects like K3s and other related things are good examples of that -- to make it smaller, because one of the characteristics of edge is it's a more resource constrained environment."
"I do think edge will drive changes, because Kubernetes was designed in a largely cloud and large data center context," he added. "As we do more and more at the edge I think we'll find more and more things to improve in the platform to make it fit well there."
For the time being there doesn't appear to be anything close to a one-size-fits-all standard for edge deployments, because each edge location and use case presents its own set of issues.
"Remote management becomes part of the challenge," McGee said. "Like, do I have a Kube cluster at the edge? When my edge is like 20,000 things, do I have 20,000 Kube clusters? How do I manage 20,000 Kube clusters? There's some new management challenges to deal with in the edge.
"I think there's a fundamental application question too," he added. "I'll oversimplify, but is the edge long running containers or is the edge a more dynamic serverless kind of model? What is Kubernetes doing to adapt itself to run a more serverless style scale-to-zero, spin it up when you need to, kind of runtime? All those things are possible; there are projects around all of them to do remote worker nodes, to do small footprint, to adapt Kubernetes."