Although service mesh architectures only came on the scene four years ago with the release of Linkerd, the technology is increasingly seen as essential for organizations deploying containers, especially in hybrid cloud environments.
The current rise of service mesh architectures as part of container platforms is just another example of the transformative – some might say disruptive – effect that container technology has had in the modern data center.
Containers have made it easier for companies to deploy hybrid and multicloud solutions. At the same time, they're changing the way enterprises consume their infrastructure, which has driven the increasing use of managed container services. Indeed, it's the added complexity and traffic that microservices and serverless functions bring into play that make service mesh architectures attractive, even though meshes introduce complexity of their own.
What Are Service Mesh Architectures, Anyway?
A service mesh acts as a dedicated layer that sits between the network and the applications, and it performs two big functions: helping services find and connect to each other, and making sure that repeated attempts to reconnect after a connection failure don't overwhelm the system.
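In meshes such as Istio, both behaviors are expressed as declarative configuration rather than application code. A rough sketch of what that looks like (the service name `reviews` and the specific thresholds here are illustrative, not from the article):

```yaml
# Retry failed requests, but cap the attempts so retries can't pile up.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3              # give up after three tries
      perTryTimeout: 2s
      retryOn: connect-failure,5xx
---
# Circuit-break a backend that keeps failing, so callers stop hammering it.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5  # eject an instance after five straight errors
      interval: 30s
      baseEjectionTime: 30s
```

The mesh's proxy sidecars enforce these rules transparently, so the application itself needs no retry or backoff logic.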
Service mesh also makes it easy to consistently deploy Kubernetes in hybrid-cloud environments where IT pros are looking at on-premises data centers, multiple clouds and edge locations as a single infrastructure – something that isn't easy using traditional, software-defined networking technologies.
"If you go back a few years, I think everyone thought that SDN technology would result in this virtual network that spanned all these locations, but that's actually really hard for most people to operate," said Jason McGee, VP and CTO at IBM Cloud Platform. "I think what you'll see instead is a service mesh that spans those environments, where you have this kind of logical application-level connection on top of multiple distinct physical networks."
Since the launch of Linkerd, the service mesh field has become quite crowded. Amazon Web Services, Microsoft Azure and Google Cloud Platform all offer their own branded service mesh architectures, as does VMware with its proprietary Tanzu and HashiCorp with Consul. Other open source meshes (besides Linkerd) include Kuma and Istio.
Linkerd, which just became a graduated project at the Cloud Native Computing Foundation, is considered the easiest to use, although it's a containers-only solution that can't incorporate virtual machines or monolithic legacy apps running on bare metal, which is essential to many – if not most – enterprises. Those organizations will need a full-featured service mesh like Istio, an open source project started by Google, IBM and Lyft, which can handle traffic from those mixed environments.
Although it's considered complex and difficult to master, Istio is the service mesh architecture getting the most attention. It's included by default in Red Hat's OpenShift container platform, as well as being the default mesh in both Google's and IBM's clouds. Like other "full service" meshes, it works with the open source proxy server Envoy, which is the source of both its rich set of capabilities and its complexity.
"We are perceived to be complex because we're the most feature rich service mesh," said Lin Sun, director of open source at Istio-focused solo.io and a member of the Istio Technical Oversight Committee. "A lot of our users are adopting Istio just because their security team says, 'I want mutual TLS among all my services. I don't want to just protect the edge, I want to be able to do zero trust communication among all my services, and I want keys and certificates to rotate, maybe every 10 days or every 30 days or even every day, and I want that automation consistently.' A service mesh like Istio can solve this really easily, and without requiring any code change on your microservices."
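The mesh-wide mutual TLS Sun describes comes down to a single resource in Istio; certificate issuance and rotation are then handled automatically by the control plane. A minimal sketch:

```yaml
# Require mutual TLS for all service-to-service traffic.
# Applying this in the root namespace (istio-system by default)
# makes the policy mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject plaintext connections between workloads
```

With this in place, the sidecar proxies present and verify workload certificates on every connection; none of the microservices themselves handle keys or TLS handshakes.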
Sun said that most enterprises begin their journey to containers simply by lifting and shifting legacy monolithic applications to the cloud, and at that stage don't really need service meshes.
"In that case, they're running one big container and there's no connectivity issue," she said. "As they start looking to break that monolithic application into multiple containers, into microservices, to improve speed and agility, that's when they start looking into a technology like service mesh to help them solve connectivity issues among their microservices."
At that point, adopting a service mesh can make things easier for nearly everyone in an IT department. For example, a service mesh’s built-in security controls mean security staff no longer have to worry that a DevOps team will misconfigure a security setting on a container, because security settings can be configured universally by application type, lightening the workload for both security pros and DevOps teams.
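Those centrally managed settings typically take the form of mesh policy resources. In Istio, for instance, a security team could declare which workloads may call a given service, independent of how the containers themselves are configured. A hedged example (the namespace, labels and service account here are hypothetical):

```yaml
# Only the frontend's service account may call the payments service;
# all other callers are denied by this ALLOW policy.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-frontend
  namespace: prod
spec:
  selector:
    matchLabels:
      app: payments
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/prod/sa/frontend"]
```

Because the policy lives in the mesh rather than in each application's code or container image, it applies consistently no matter what an individual team deploys.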
Service Mesh Architectures Are Destined For Containers
The consensus among people working with this technology seems to be that service mesh architectures will be integrated into all container platforms, whether they're based on Kubernetes, Docker or something else.
"I think service mesh will become a standard component of the commercial Kubernetes offerings, so that it becomes just another functionality of the container platform," said Vladimir Galabov, head of the cloud and data center research practice at the research firm Omdia. "One of the challenges that I had when I first started tracking the market is that I was trying to look at the other components separate from the management components, separate from the service mesh, separate from security, but the reality is that they're very tightly coupled. So when you buy one you kind of are buying all of them at the same time."
IBM's McGee echoed that sentiment.
"I think right now we think of Kubernetes and we think of service mesh as two things," he said. "A few years from now I don't know that we'll think of it that way. I think it'll be pretty pervasive to have a service mesh concept built into the platform that you're using."
Brian Gracely, senior director of product strategy at Red Hat, said he expects numerous managed service mesh offerings in the future, a trend that has already started with Linkerd-focused Buoyant Cloud, the first managed service mesh offering.
"As service mesh becomes more popular and more widely used, just like we saw with Kubernetes it's very normal to expect to see managed versions of service mesh," he said. "People want the benefit of it and they'd like to offload managing the underlying pieces to someone else."
"Service mesh is still a space that's evolving," Gracely added. "There's a lot of innovation happening and there's still not necessarily one standard that everybody agrees upon. Allowing somebody else to help you is a natural progression, I think."