Understand Bandwidth Usage with 10Gbps Network Adapters

Q: I have two Hyper-V hosts with 10Gbps network connectivity between them and use the link for live migration. If I use a virtual network adapter in the management OS for live migration, I don't get 10Gbps of bandwidth, but if I use the physical network adapter directly, I do. Why?

A: There are several parts to this Hyper-V question. I'll explain by walking through my lab environment, which I configured to replicate the issue.

In my lab, I have two Hyper-V hosts, each with a 10Gbps network adapter. Initially I configured the 10Gbps adapters with static IP addresses and configured that network to be used for live migration. No Hyper-V virtual switch was involved at this point.
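
As a rough sketch (the adapter alias, address, and subnet here are placeholders rather than my exact lab values), that initial configuration looks something like this in PowerShell:

# Assign a static IP address to the dedicated 10Gbps adapter
New-NetIPAddress -InterfaceAlias "10GbE-LM" -IPAddress 192.168.100.1 -PrefixLength 24

# Enable live migration and restrict it to that subnet
Enable-VMMigration
Add-VMMigrationNetwork 192.168.100.0/24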

A single virtual machine (VM) live migration wouldn't consume the 10Gbps link--it used perhaps 4Gbps at most (the reason will become clear later). However, with three or four simultaneous live migrations (make sure both Hyper-V hosts are set to allow at least four simultaneous live migrations), all 10Gbps of bandwidth was used.
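
Raising the concurrent live migration limit is a one-line change on each host; something along these lines (HV01 and HV02 are example host names):

# Allow at least four simultaneous live migrations on both hosts
Set-VMHost -ComputerName HV01, HV02 -MaximumVirtualMachineMigrations 4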

Next I created a Hyper-V virtual switch, then created some virtual network adapters for use by the management OS (I explain this scenario in my YouTube video).
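
In rough terms (the switch, adapter, and address names are examples, not the exact ones from my lab), that setup looks like this:

# Bind the 10Gbps adapter to an external virtual switch
New-VMSwitch -Name "External10G" -NetAdapterName "10GbE-LM" -AllowManagementOS $false

# Add a management OS virtual network adapter for live migration and give it an IP address
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "External10G"
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.100.1 -PrefixLength 24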

I used one of these virtual network adapters for live migration. However, I got only around 4.5Gbps for the same simultaneous live migrations that previously used the full 10Gbps. Let's look at why.

Network communication at 10Gbps involves enough processor work to saturate a single processor core and limit bandwidth. Because I was running multiple simultaneous live migrations over the native NIC, Receive Side Scaling (RSS) distributed the network load over multiple processor cores on the live migration target, so a single core was no longer the limiting factor and the full bandwidth could be realized.
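
You can check whether RSS is available and turned on for the physical adapter with the NetAdapter cmdlets (the adapter name is again a placeholder):

# Confirm RSS is supported and enabled on the physical 10Gbps adapter
Get-NetAdapterRss -Name "10GbE-LM"

# Turn it back on if it has been disabled
Enable-NetAdapterRss -Name "10GbE-LM"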

A single live migration used only around 4Gbps--it was maxing out a single core, because the load couldn't be distributed with RSS: a live migration is a single stream of traffic, and its packet order must be preserved.

When I used the virtual network adapter on the Hyper-V virtual switch instead, RSS was automatically disabled (RSS can't be used with the Hyper-V virtual switch, because RSS and VMQ are mutually exclusive), and the same simultaneous live migrations were once again limited to around 4Gbps (the exact speed depends on the speed of your processor cores).
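
You can see that state change for yourself once the adapter is bound to a virtual switch (adapter name is a placeholder):

# RSS reports as disabled and VMQ takes over once the adapter is bound to a virtual switch
Get-NetAdapterRss -Name "10GbE-LM" | Format-List Name, Enabled
Get-NetAdapterVmq -Name "10GbE-LM" | Format-List Name, Enabled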

The limiting factor is actually the receiving host: because RSS can't be used on the receiving host (its live migration endpoint is now a virtual NIC on a Hyper-V virtual switch), all traffic is processed by a single processor core.

This is where the bandwidth bottleneck is reintroduced, because that single core maxes out at 100 percent. Additionally, a VMQ from the network adapter might have been assigned to the virtual network adapter, which also limits processing to a single core--but even without VMQ, the traffic is still handled by one core.
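
To check whether a VMQ has been assigned to the virtual network adapter, and which processor services it, the VMQ queue cmdlet shows the mapping (adapter name is a placeholder):

# List the VMQ queues on the adapter and the processor each one is serviced by
Get-NetAdapterVmqQueue -Name "10GbE-LM"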

The outbound traffic from the sending Hyper-V host is distributed over the available cores. The receiving host can be seen in the screenshot below, along with its single maxed-out processor core.
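
If you'd rather watch this from PowerShell than from Task Manager, sampling the per-core counters on the receiving host during a migration shows the same picture:

# Sample per-core CPU usage every two seconds while the live migration runs (Ctrl+C to stop)
Get-Counter '\Processor(*)\% Processor Time' -SampleInterval 2 -Continuous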



The only way to use the full 10Gbps when a management OS virtual network adapter on a Hyper-V virtual switch is used for live migration is to run multiple simultaneous live migrations to different Hyper-V hosts, because each target host then uses its own processor core to receive the traffic.

Note that if the source uses a virtual network adapter in the management OS and the destination uses a physical network adapter, you would still get the full 10Gbps--remember, it's the receiving host that needs RSS to spread the inbound load, not the sender.

This isn't very practical, though, as you'll likely need to perform live migrations in both directions and want the full 10Gbps each way.

The flip side of this argument is that 3-4Gbps is still very good live migration performance. Often that 10Gbps link carries other types of traffic as well, such as cluster or VM traffic, so 3-4Gbps might be perfectly acceptable for your needs.
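
If the link is shared like that, you can at least guarantee live migration a slice of the bandwidth with QoS weights, provided the virtual switch was created with -MinimumBandwidthMode Weight; the adapter name and weight here are just examples:

# Reserve roughly 40 percent of the switch bandwidth for the live migration virtual adapter
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40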

Even Windows Server 2012 R2 doesn't help for management OS virtual network adapters. Windows Server 2012 R2 introduces virtual RSS (vRSS), which allows RSS to be used in combination with VMQ, so a virtual network adapter can now have its traffic processed by multiple processor cores. However, vRSS applies only to virtual network adapters used by VMs, not to those used by the management partition. Maybe in the next version!
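
For completeness, with Windows Server 2012 R2 vRSS is something you turn on from inside the guest by enabling RSS on the VM's network adapter (run within the VM; the adapter name is whatever the guest OS shows):

# Inside the guest OS: enable RSS on the VM's virtual network adapter so vRSS can spread the load
Enable-NetAdapterRss -Name "Ethernet"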
