
Why Edge Computing Architecture Is Over-Hyped

In 10 years, ‘edge computing architecture’ may be on a list of trends that turned out to be fads.

Edge computing ranks high on the list of cloud trends du jour. We’re told that edge computing architecture is key to overcoming the Achilles’ heel of the cloud--network bottlenecks--and that it will unlock new opportunities for optimizing app performance, reinvigorating the IoT and more.

But I suspect that if 10 years from now you were to compile another list--one comprising the cloud trends that turned out to be fads--edge computing would be on it. That’s true partly because edge computing is an ambiguous term that doesn’t mean much, partly because people were doing edge computing long before it became (forgive me) edgy, and partly because computing on the edge is just not as technically innovative as it may seem.

To prove the point, let me explain in more detail why edge computing architecture is over-hyped.

What Is Edge Computing Architecture?

Although it can be hard to define edge computing in a very specific way, its meaning in a broad sense is discernible enough. Edge computing refers to cloud architectures where data is stored and/or processed on hardware that is geographically close to the devices that consume that data.

In other words, data in an edge computing architecture spends most of its life on the edge of the cloud instead of in the center, so to speak.

The big idea behind edge computing architecture is that when data is brought to the edge of the cloud--rather than being stored and processed on central servers that are thousands of miles from end users--performance and reliability increase. In theory, at least, edge computing shaves precious milliseconds off the time it takes to move data from the cloud to an end user’s device. It also decreases the risk of failure by shortening data transfer paths and reducing the number of places where something could go wrong. It may even deliver data security benefits.
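
To put those "precious milliseconds" in rough perspective, here is a back-of-the-envelope sketch (my own illustration, not a measurement) that estimates one-way propagation delay from distance, assuming signals travel through fiber at roughly two-thirds of the speed of light and ignoring routing hops, queuing and processing overhead:

    // Rough propagation-delay estimate; real latency is higher once routing
    // hops, TLS handshakes and server processing are added on top.
    const SPEED_OF_LIGHT_KM_PER_MS = 300; // ~300,000 km/s in a vacuum
    const FIBER_FACTOR = 2 / 3;           // signals in fiber travel at ~2/3 c

    function oneWayDelayMs(distanceKm: number): number {
      return distanceKm / (SPEED_OF_LIGHT_KM_PER_MS * FIBER_FACTOR);
    }

    // A central region ~3,000 km away vs. an edge site ~100 km away.
    console.log(oneWayDelayMs(3000).toFixed(1)); // ~15.0 ms each way
    console.log(oneWayDelayMs(100).toFixed(1));  // ~0.5 ms each way

Even doubled for a round trip, the difference is on the order of tens of milliseconds for continental distances, which is the scale of savings at stake in the argument below.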

The Problems with Edge Computing Architecture

In principle, edge computing sounds great. But in practice, it often doesn’t add as much value as you might think. Here are three reasons why:

1. Edge computing doesn’t mean anything in particular.

First, it’s often unclear where the edge of the cloud actually lies. By extension, edge computing lacks a specific meaning.

Some cloud servers will always be closer to a given set of users than others, and there is no agreed-upon metric that separates edge computing from simply running a cloud workload in a data center that happens to sit geographically close to those users.

If your users are mostly located in the United States and you host your workloads in AWS or Azure data centers that are also in the United States, does that mean you’re doing edge computing? Probably not, although you could argue that it does.

What if you use an on-premises data center to do some data processing that complements operations hosted in a public cloud? That could be considered an example of edge computing architecture, but it could just as easily be described as a plain hybrid cloud infrastructure.

My point here is that you could put the edge computing label on virtually any cloud architecture you want. That fact alone undercuts much of the value of the edge philosophy.

2. Edge computing is a new term for an old idea.

My second objection to the edge computing architecture hype is that, in a sense, every good data center architect has always been striving to implement edge computing, long before anyone was calling it that.

One of the tenets of cloud architecture 101 is that you should select cloud regions that are close to your users. That’s not an innovation that emerged out of the edge computing revolution. Companies have been doing that ever since the public clouds started offering different regions.
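
As a concrete illustration of how ordinary that practice is, here is a minimal sketch, assuming the AWS SDK for JavaScript v3 (the bucket and key names are hypothetical), in which pinning a client to a nearby region is a one-line decision rather than an edge-computing breakthrough:

    import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

    // Pin the client to a region close to the bulk of your users
    // (here, US East) instead of relying on whatever default applies.
    const s3 = new S3Client({ region: "us-east-1" });

    // Hypothetical bucket and key, for illustration only.
    async function fetchReport() {
      const response = await s3.send(
        new GetObjectCommand({ Bucket: "example-assets", Key: "report.json" })
      );
      return response.Body;
    }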

Similarly, companies have been using CDNs for decades to bring data closer to their users. And, yes, you could argue that edge computing architecture differs from CDNs because CDNs only serve data from locations close to users, whereas edge computing lets you process the data close to users, too. That’s true, but I’m not sure the difference is big enough to make edge computing architecture fundamentally distinct from CDN infrastructure.
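
For context, the main lever most teams pull to have a CDN serve content from locations near users is the caching policy on the origin response. Here is a minimal Node.js sketch (hypothetical route and payload, using the built-in http module) that marks a response as cacheable by whatever CDN sits in front of it:

    import { createServer } from "node:http";

    // Minimal origin server; a CDN in front of it can cache this response
    // at its edge locations for a day thanks to the Cache-Control header.
    createServer((req, res) => {
      res.setHeader("Cache-Control", "public, max-age=86400");
      res.setHeader("Content-Type", "application/json");
      res.end(JSON.stringify({ message: "served from origin, cached near the user" }));
    }).listen(8080);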

More to the point, processing data on the edge of a distributed network is by no means a new idea. Since the dawn of the Web, developers have been writing apps that split data processing between server-side and client-side responsibilities, in much the same way that edge computing shares processing responsibilities between central servers and local devices.

Thus, unless you think JavaScript (a language that became popular in the 1990s due in part to its ability to process data on users’ individual computers instead of on Web servers) is still revolutionary, it’s hard to see edge computing as being as innovative as some folks would claim.
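
To make that server-side/client-side split concrete, here is a minimal browser-side sketch (the endpoint and record shape are hypothetical) in which the server hands over raw records and the user's own machine does the filtering and aggregation, which is the same division of labor that edge computing formalizes at the infrastructure level:

    // Hypothetical endpoint and record shape, for illustration only.
    interface Reading {
      deviceId: string;
      celsius: number;
    }

    async function averageHotReadings(): Promise<number> {
      // The server simply hands over raw data...
      const res = await fetch("https://api.example.com/readings");
      const readings: Reading[] = await res.json();

      // ...and the user's own machine does the processing.
      const hot = readings.filter((r) => r.celsius > 30);
      if (hot.length === 0) return 0;
      return hot.reduce((sum, r) => sum + r.celsius, 0) / hot.length;
    }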

3. Edge computing doesn’t always live up to its promise.

My third objection to the edge-computing trend is that edge models don’t always deliver their promised benefits--at least, not in the most cost-efficient way.

It’s certainly true that data transfer constraints are often the weakest link in cloud architectures. It’s faster and easier to process and move data within a cloud than it is to get the data from central cloud servers onto end user devices when the only network connection you have available is the public internet.

You could address that problem by constructing lots of data centers on the edge of your cloud so that there is a shorter distance for data to travel. You could also pay more to host your workloads in multiple cloud regions that are close to your users. Both of these approaches fall under the umbrella of what you could call edge computing.

But if you want a more cost-efficient solution (not to mention one that is ultimately more scalable), you could just design your workloads in a way that would reduce the amount of data that needs to be transferred in the first place. Being smart about data compression and cloud ingress and egress goes a long way toward improving performance and reliability, even without a sophisticated edge infrastructure in place.
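
As one small example of shrinking the data before it ever crosses the wire, here is a minimal Node.js sketch using the built-in zlib module (the payload is made up for the illustration) that gzip-compresses a JSON body before it leaves the service:

    import { gzipSync } from "node:zlib";

    // A made-up payload standing in for whatever your service actually sends.
    const payload = JSON.stringify({
      readings: Array.from({ length: 10_000 }, (_, i) => ({ id: i, value: Math.random() })),
    });

    // Compress before the data crosses the cloud boundary; the receiver
    // decompresses with gunzipSync (or transparently via Content-Encoding: gzip).
    const compressed = gzipSync(payload);

    console.log(`raw: ${Buffer.byteLength(payload)} bytes, gzipped: ${compressed.length} bytes`);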

On top of this, I’m not sure that cutting out milliseconds’ worth of data transfer delays is really going to revolutionize many end user experiences. And even if it does, there are lots of other reasons why your users might suffer poor performance or reliability.

This is all to say that investing in a costly edge computing architecture may not always be worth it, because edge computing just isn’t as technically innovative, or as uniquely effective, as it may seem.

Conclusion

On balance, I should point out that I’m not entirely anti-edge. There are certainly scenarios where edge computing makes sense. But most of them involve very, very large infrastructures that include non-traditional devices, like IoT sensors. In those cases, where performance is measured in real time and uptime needs to be as close to 100% as possible, being able to reduce data transfer times by a few milliseconds or process data close to the devices that need it might be worth it.

But I think it would be a mistake, in many cases, for companies that are running traditional cloud-based workloads--like web apps or data processing--to jump on the edge-computing bandwagon. The technologies we’ve been using for years to add efficiency to distributed architectures, such as CDNs and applications that are programmed for internal efficiency, are already doing the job well enough.
