IT Innovators: How Do You Get 50-Gigabit Ethernet Traffic to a Virtual Machine?

How do I get 50 GbE traffic to a virtual machine (VM)? That's a really good question, especially considering the stress that growing data center traffic is placing on Ethernet links these days. The stress comes from a variety of sources: the increasing use of cloud computing, Big Data, and the rise of mega data centers, to name a few. The net effect, however, is the same: today's network administrators are left scrambling to meet capacity requirements. Ultimately it's a question of networking performance, and for modern data centers it's a huge challenge.

The 50 GbE specification aims to help by enabling cost-efficient 50-Gb/s signaling between a data center's server network interface controller (NIC) and top-of-rack (ToR) switch. That's good news for IT operators, as it means they can more easily interconnect large numbers of servers and storage devices to meet growing bandwidth demands. But with many modern data centers going virtual thanks to the adoption of VMs, it also presents a few challenges. And that brings us back to the original question: How does one get 50 GbE traffic to a VM?

The short answer is through Software Defined Networking (SDN), preferably an SDN solution based on a host model, which allows for the utmost flexibility and scale. With a host-based model, each entity in the network platform is programmable. Consider Windows Server 2016, for example, which is built on a host SDN model. It features programmable rule/flow tables that perform per-packet and per-connection operations, allowing the data plane to enforce policies efficiently and to scale to 40 GbE+ traffic with all the underlying offloads.

Of course, using an SDN solution based on a host model is only part of the answer. The solution must also employ new and improved technologies that let workloads actually consume the additional bandwidth. Prime examples are Virtual Machine Multi-Queue (VMMQ) and Converged Remote Direct Memory Access (RDMA), both found in Windows Server 2016.

VMMQ offloads virtual network processing to physical adapter hardware, significantly increasing overall throughput and reducing the network processing burden on host servers. Converged RDMA is designed to reduce cost while simplifying network infrastructure requirements. It eliminates the need to maintain separate physical networks for general networking and RDMA-based storage; instead, RDMA-based storage and the SDN fabric converge onto the same underlying NIC. As a result, RDMA can connect compute nodes to storage nodes using the same NICs that service the rest of the network traffic, both storage and regular tenant traffic.
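As a concrete illustration, Windows Server 2016 exposes these features through PowerShell. The fragment below is a sketch, not a deployment guide: the adapter name ("pNIC1") and VM name ("Tenant01") are placeholders, and exact behavior depends on the NIC hardware and driver support.

```powershell
# Sketch only: "pNIC1" and "Tenant01" are placeholder names.

# Enable RDMA on the physical NIC that also carries tenant traffic
Enable-NetAdapterRdma -Name "pNIC1"
Get-NetAdapterRdma -Name "pNIC1"     # verify RDMA is enabled

# Enable VMMQ on a VM's virtual network adapter
Set-VMNetworkAdapter -VMName "Tenant01" -VmmqEnabled $true
```

In a converged setup, that same physical NIC backs both the SDN virtual switch and the RDMA-based storage traffic, which is the cost and cabling savings the article describes.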

It's technology innovations such as these, made possible by continued investment from companies providing SDN solutions, that today make it possible for you to get 50 GbE traffic to a VM. If you have any thoughts on this topic, drop me a line at [email protected]. In the meantime, for a look at past and future blogs on a whole range of IT-related topics, check out this page.

This blog is sponsored by Microsoft.

Cheryl J. Ajluni is a freelance writer and editor based in California. She is the former Editor-in-Chief of Wireless Systems Design and served as the EDA/Advanced Technology editor for Electronic Design for over 
