An Introduction to Remote Direct Memory Access over Converged Ethernet

Remote Direct Memory Access over Converged Ethernet is moving into storage territory.

Brien Posey

May 6, 2020


Although once solely a server technology, Remote Direct Memory Access over Converged Ethernet (RoCE) has begun to make its way into the storage market.

To understand RoCE (pronounced rocky), and why it is beneficial to storage, it is necessary to know a little bit about Remote Direct Memory Access (RDMA). RDMA was introduced as a tool for improving application performance in the data center. As you are no doubt aware, many of today’s applications are clustered. Many data center applications are also multi-tiered. In either case, high-speed connectivity is required among the servers that are hosting the various application components.

This inter-server network connectivity can become a bottleneck for high-performance applications. If an application that is running on one server needs to send data to a component running on another server, the transfer would ordinarily involve the entire network stack. The transfer originates at the application layer and then works its way down through the remaining stack layers (presentation, session, transport, network, data link and physical). After the data arrives at the destination host, it must climb back up that machine's network stack before the receiving application can use it. Because each step in this process involves protocol processing and buffer copies, it tends to be relatively slow.
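To make the contrast concrete, here is a minimal sketch in C of that conventional path: a plain TCP send() whose data is handed to the kernel and carried through the transport, network and link layers on both machines. The address and port are placeholders rather than values from any particular environment.

/* Conventional path: every send() traverses the kernel's network stack
 * (TCP/IP processing, buffer copies, interrupts) on both hosts.
 * Minimal sketch; the address and port are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);           /* TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port = htons(9000);                         /* placeholder port */
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);    /* placeholder host */

    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char msg[] = "application payload";
    /* The data is copied into kernel socket buffers, segmented by TCP,
     * wrapped in IP and Ethernet headers, and the receiver's CPU must
     * unwind all of that before the peer application sees the bytes. */
    if (send(fd, msg, sizeof(msg), 0) < 0)
        perror("send");

    close(fd);
    return 0;
}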

RDMA is designed to simplify communications between applications that are running on different hosts, and it has become especially popular in clustering scenarios. RDMA uses a technique called zero copy to allow an application to access a remote computer's memory directly: an application can read data from, or write data to, a remote computer's memory. The transfer bypasses the operating system's network stack, and the remote computer's CPU is not involved, which allows data to move between two computers much more quickly than would otherwise be possible.
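For a rough sense of how the zero-copy model looks in practice, the sketch below uses the Linux libibverbs API (part of rdma-core) to register an ordinary buffer for remote access, which is the first step of any RDMA transfer. It stops short of the queue-pair setup and the out-of-band exchange of the buffer address and rkey that a complete RDMA read or write would also require.

/* Registering a buffer for RDMA access with libibverbs (rdma-core).
 * A peer that learns this buffer's address and rkey (exchanged out of
 * band) can read or write it directly through the NIC, with no CPU or
 * kernel network-stack involvement on this host.
 * Sketch only: queue-pair setup and the data transfer itself are omitted. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);        /* protection domain */

    size_t len = 4096;
    void *buf = malloc(len);                      /* ordinary application memory */

    /* Pin the buffer and hand it to the NIC; the returned rkey is what a
     * remote peer presents when it targets this memory with RDMA READ/WRITE. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("buffer %p registered, rkey=0x%x\n", buf, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}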

RDMA works by offloading the transfer to a specialized network adapter designed for memory-to-memory transfers. Because the transfers are handled by dedicated hardware, other system resources such as the CPU and conventional network adapters are freed up for other work, thereby providing better overall application performance.

Remote Direct Memory Access over Converged Ethernet can be thought of as an extension of RDMA. Unlike traditional RDMA, which typically runs over InfiniBand, RoCE carries RDMA traffic over Ethernet networks. The technology has actually been around for a while, but some of the early implementations didn't gain widespread acceptance because they were extremely limited. The initial RoCE specification, for example, required a lossless network, wasn't routable and worked only over short distances within a single Ethernet broadcast domain. Modern RoCE solutions, most notably RoCE v2, which encapsulates RDMA traffic in UDP/IP and is therefore routable, largely overcome these and other limitations.

While it is tempting to think of RDMA and Remote Direct Memory Access over Ethernet solely in terms of networking, both are beginning to play a role in storage, as well. In fact, some storage array vendors have begun integrating native RoCE support into their appliances. Nexsan, for instance, recently announced that it is now natively enabling 40 gigabit RoCE connectivity between its Assureon server and Assureon Edge servers. This will allow data to be retrieved from storage with minimal CPU involvement. The company estimates that this will provide a 2X improvement in performance.

There are several possible use cases for Remote Direct Memory Access over Converged Ethernet with regard to storage, but the most obvious involves pairing RoCE with Non-Volatile Dual In-Line Memory Modules (NVDIMM). As a type of non-volatile memory, NVDIMM can act as high-performance storage. As previously explained, RDMA and RoCE are designed to perform memory-to-memory transfers between computers. So, with that in mind, imagine that a storage appliance has been outfitted with NVDIMM storage. With the proper hardware and software support, an application could conceivably use RoCE to access the appliance’s NVDIMM storage. 

Normally, when NVDIMM storage is accessed remotely, there is a strong possibility that the storage connectivity will become a bottleneck, preventing the NVDIMM storage from performing to its full potential. The key to improving performance is therefore to use the fastest possible storage connectivity and to do whatever you can to reduce latency. Because RoCE bypasses the server's CPU and its networking stack, a RoCE connection will experience significantly less latency than a conventional TCP/IP connection over comparable Ethernet hardware, thereby maximizing performance.

About the Author

Brien Posey

Brien Posey is a bestselling technology author, a speaker, and a 20X Microsoft MVP. In addition to his ongoing work in IT, Posey has spent the last several years training as a commercial astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space.

https://brienposey.com/
