In some ways, edge computing offers tremendous opportunity for data center colocation providers, whose data centers can form the backbone of edge infrastructure. In other respects, though, edge presents a critical challenge for colo, because organizations may opt to design edge architectures that take colocation facilities out of the picture.
Fortunately for colo companies, the emergence of next-generation hybrid cloud frameworks like AWS Outposts and Azure Stack promises to make it easy to include colocated infrastructure within architectures that seamlessly extend public cloud services to the edge.
Here’s how these frameworks fit within edge computing strategies, and how they may help colo providers remain competitive as more and more organizations deploy workloads at the edge.
Edge Computing and Colocation
Edge computing is a broad term that refers to any type of architecture in which workloads -- or parts of them, at least -- reside physically close to the end users who access them.
“Close” in this context is relative. Building an edge architecture could involve hosting applications or data on network routers or other specialized devices running within your own local data center, a strategy known as device edge. Or, you could do edge computing simply by hosting workloads in a data center that is within a hundred miles or so of your end users, which would place the workloads closer than those hosted in a public cloud data center thousands of miles away.
The latter type of edge architecture -- which is sometimes called cloud edge, although that’s a loosely defined term -- is the one that offers the greatest opportunity for colocation providers. Because there are thousands of colocation data centers spread around the globe, colocation facilities offer an easy way to deploy servers close to whichever group of end users you want to support. Public cloud data centers are much fewer and farther between.
Hybrid Cloud, Colocated at the Edge
The major question for the colocation industry regarding edge architectures is the extent to which organizations will choose to adopt a cloud edge model that relies on colocation centers, versus a device edge configuration, in which colocation has a less obvious role to play.
From this perspective, hybrid cloud frameworks like Outposts and Azure Stack are a boon to colocation providers. These frameworks make it very simple to take a workload that is currently hosted in the public cloud and move it to colocated infrastructure. There, the workload can still be deployed and managed just as it was in the public cloud, while enjoying the advantage of greater geographic proximity to end users.
For companies that are already running workloads in the public cloud but want the lower latency and data privacy benefits afforded by edge architectures, hybrid cloud frameworks offer a very simple way to migrate to the edge with minimal changes to workload configuration.
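To make this concrete, consider AWS Outposts: because an Outpost exposes the same APIs as its parent AWS region, placing a workload on colocated hardware is mostly a matter of resource placement. The Terraform fragment below is a minimal sketch of that idea, assuming an Outpost has already been installed in a colocation facility; the VPC ID, Outpost ARN, and AMI ID are placeholders, not real resources.

```hcl
# Sketch: running an EC2 instance on an AWS Outpost installed in a
# colocation facility. All identifiers below are placeholders.

resource "aws_subnet" "colo_edge" {
  vpc_id            = "vpc-0123456789abcdef0" # existing VPC (placeholder)
  cidr_block        = "10.0.8.0/24"
  availability_zone = "us-east-1a" # the AZ the Outpost is anchored to
  # Associating the subnet with the Outpost is what places capacity
  # on the colocated hardware.
  outpost_arn = "arn:aws:outposts:us-east-1:111122223333:outpost/op-0example"
}

resource "aws_instance" "edge_workload" {
  ami           = "ami-0example" # placeholder AMI
  instance_type = "m5.xlarge"    # must be a type provisioned on the Outpost
  subnet_id     = aws_subnet.colo_edge.id # launching into the Outpost subnet
}
```

The notable point is what doesn't change: the instance is still created, monitored, and terminated through the standard AWS control plane, even though it runs on servers in a colocation data center close to end users.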
Migrating a workload that is currently hosted in the public cloud to a device edge architecture would be more challenging. You would need to figure out how to redeploy the workload on whichever edge devices comprise the frontier between your cloud infrastructure and your end users. Given that edge devices typically take the form of routers, multiplexers, or other specialized hardware, you can’t simply lift and shift an application hosted on a public cloud service like EC2 onto an edge device. You’d need to reconfigure the workload and the infrastructure architecture that supports it.
Nor can you, for that matter, easily take a conventional on-premises workload and redeploy it as part of a device edge architecture. If you have an application that is currently hosted on premises, and you want to move it to the edge in order to take advantage of a distributed architecture while still keeping the application close to your end users, it would be simpler in many cases to use a hybrid cloud framework and a colocation facility. With that approach, you could lift and shift most conventional types of applications and data -- those hosted on a virtual machine, for example -- onto a colocated server and manage them using public cloud compute and storage services.
A Bright Future for Colocation and the Edge
For many organizations, then, the road to edge architectures seems likely to run through a colocation facility and to involve hybrid cloud frameworks. Provided they are willing to accept the costs and hardware restrictions that come with hybrid cloud frameworks like Outposts and Azure Stack, companies that currently host their workloads on-premises or in the public cloud will have an easier time moving to a hybrid cloud edge architecture rooted in colocation centers than they will building a device edge.
That’s not to say that every organization will take this route, of course. Device edge models provide benefits that a hybrid cloud edge can’t, such as ultra-low latency and the ability to support specialized workloads that can’t be deployed with conventional cloud services. For some companies, these advantages will outweigh the challenges of moving to a device edge architecture.
Still, if I had to place a bet on what the majority of edge architectures of the future will look like -- assuming the edge revolution doesn’t turn out to be just a bunch of hype -- I’d put my money on colocation centers and hybrid cloud forming their foundation.