Why Mobile Edge Computing Architecture (Sometimes) Edges out the Cloud

The cloud is a good solution, except when it isn't. Here's how to tell when a mobile edge computing architecture makes more sense.

Craig Mathias

November 15, 2018


First, a personal bias: I believe that essentially all organizational computing will move to the cloud over time--and to the public cloud, at that. PCs will be reserved for content creators, with the vast majority of end users--content consumers--using mobile handsets, tablets, thin clients (like Chromebooks) and similar devices across increasingly powerful and available wireless networks. Most processing and storage will also be in the cloud. The cloud is, in fact, an ideal platform for content consumers (who, to be fair, will usually make at least a few changes and updates to their content), as it minimizes IT expense and local infrastructure, eases the implementation of collaborative solutions, improves reliability, and replaces capital-intensive client/server infrastructure with operating-expense-oriented IT services that scale nicely and decline in cost over time, thanks to a thriving competitive market. And the cloud is a perfect platform for the communications and collaboration at the heart of essentially every organization today.

The problem, however, is that even though centralizing data and processing it in the cloud clearly optimizes many applications, this approach doesn't work well in every case. As an example, consider editing and rendering video sequences. The editing itself might work well in the cloud, but all of the raw video involved would need to be uploaded and the finished edits downloaded. Moving large data objects between the cloud and the end user device, typically a PC, can really slow the creative process.
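To put rough numbers on that, here is a minimal back-of-envelope sketch; the footage size and uplink speed are illustrative assumptions, not measurements, but they show why the upload step alone can dominate a cloud-based editing workflow:

```python
# Back-of-envelope estimate of moving raw video to the cloud before editing.
# All figures are illustrative assumptions, not benchmarks.

RAW_FOOTAGE_GB = 500   # assumed raw 4K footage for a modest project
UPLINK_MBPS = 100      # assumed business broadband uplink

def upload_hours(size_gb: float, uplink_mbps: float) -> float:
    """Hours needed to push size_gb of data over an uplink of uplink_mbps."""
    bits = size_gb * 8 * 1000**3            # decimal GB -> bits
    seconds = bits / (uplink_mbps * 10**6)  # Mbps -> bits per second
    return seconds / 3600

print(f"{upload_hours(RAW_FOOTAGE_GB, UPLINK_MBPS):.1f} hours")  # ~11.1 hours
```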

It’s also important to point out that the real edge of the organizational network--end user computing and communications devices--continues to benefit from growing innate computational power, with even handsets now suitable for computing tasks that only a few years ago would have required a PC. (And, yes, some folks do indeed edit video on handsets!) The question at hand, then: How should organizational IT utilize computing at the edge, whether in the form of end user devices or of servers on an organization’s premises, located at the edge of the cloud?

I’m going to argue that the key element in the decision here is the I/O performance required by any given application running at the edge. Sure, even 1 Gbps links to the internet are available to many organizations today. That is impressive performance, indeed, but it's still well below the 6 Gbps of today’s SATA 3.0 connection, the interface currently on most storage devices like disks and SSDs. And that’s going to grow to 16 Gbps with SATA 3.2 and, while we’re at it, 40 Gbps with Thunderbolt 3.
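For a feel of what those link speeds mean in practice, here is a minimal sketch comparing how long the same data set takes to move over each one; the 50 GB working set is an assumed figure, and the math uses raw line rates rather than real-world throughput:

```python
# Rough transfer-time comparison for a single 50 GB data set over the
# link speeds cited above (line rates only, ignoring protocol overhead
# and real-world throughput limits).

LINKS_GBPS = {
    "1 Gbps internet link": 1,
    "SATA 3.0 (local disk/SSD)": 6,
    "SATA 3.2": 16,
    "Thunderbolt 3": 40,
}

DATA_SET_GB = 50  # illustrative assumption: a 50 GB working data set

for name, gbps in LINKS_GBPS.items():
    seconds = (DATA_SET_GB * 8) / gbps  # GB -> gigabits, then divide by Gbps
    print(f"{name:27s} ~{seconds / 60:5.1f} minutes")
```

The 1 Gbps WAN case comes out at roughly 6.7 minutes versus about a minute over SATA 3.0 and seconds over Thunderbolt 3, which is the gap the next paragraph describes.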

This means that data on storage local to the computational device will essentially always be available many times faster than data fetched from the cloud--even given increasing Wi-Fi and WAN (wired and 4G/5G) speeds. This isn’t an issue for applications running entirely in the cloud and simply reflecting the results of those computations to handsets and such, but for data-intensive local applications, the cloud will be described by many users as “slow.” And there’s an additional problem in that many IT managers are simply uncomfortable with mission-critical data being available only via the cloud, given the possibility of service outages and the ever-present potential for security compromise.

The solution? When clearly optimal, move computing and storage to the edge of the cloud--still on an organization’s premises--rather than to the cloud itself. Yes, this does look a lot like the client/server model that has dominated organizational computing since the 1980s, but note that we can still rely on the cloud--eventually transparently--for backup, failover, and integrity and availability as required. The options here include running applications (primarily those related to personal, not group, productivity) directly on end user devices; hybrid private/public cloud (originally assumed to be a transitional architecture while everything moved to the public cloud); and more creative approaches such as Cisco’s Open Hybrid Cloud, which combines multiple cloud strategies into a single unified offering. We expect to see many more capabilities provisioned around this do-anything, adaptable approach.

Beyond pure edge computing and hybrid cloud, there’s also the rapidly emerging field of fog computing, which is focused on dealing with the I/O bandwidth constraints we noted above. While a formal distinction from edge computing is not always entirely clear, fog computing is often mentioned in conjunction with IoT as a methodology for dealing with vast numbers of devices and large amounts of I/O. Conceptually, fog computing includes local cloud computing--again, in an attempt to manage the WAN bandwidth issue at the heart of the motivation for edge computing in the first place.

So, while many think of edge computing solely as processing performed on mobile devices--and that’s a perfectly valid definition--the value of local-server-based edge computing, hybrid cloud, and fog computing should not be underestimated. Again, with communications and collaboration at the heart of IT today, simply extending client/server to the cloud isn’t going to work very well. Increasingly, then, we’ll be provisioning more capabilities at the edge, with this model of computing dominating until 10+ Gbps WAN services become the norm--and that, as they say, may take a while.

About the Author

Craig Mathias

https://www.linkedin.com/in/craigmathias/
