
Understanding Windows Server 2012 Hyper-V Networking Changes

Microsoft Windows Server 2012 has been optimized for cloud solutions

No man is an island. Nor should your virtual machines (VMs) be, if you want them to do anything useful.

Networking is the central nervous system of the data center, allowing communication between all the various parts of your environment. As demands on the infrastructure increase and more companies move to virtualization, the need for site resiliency increases as well. Networking must constantly evolve to meet these increasing demands.

Fortunately, Windows Server 2012 introduces a slew of new technologies. These technologies enable Windows Server systems and virtual environments to meet all manner of new requirements and scenarios, including private and public cloud implementations.

Often, this type of scenario involves a single infrastructure that's shared by different business units or even different organizations. Server 2012 (formerly code-named Server 8) has been optimized for cloud solutions. This will become apparent when I walk through the technologies for networking alone: Nearly all networking areas have been enhanced in some way for Server 8.

In this article, I'll cover three primary areas of Server 8 Hyper-V network features:

  • Network virtualization
  • Hyper-V virtual switch extensibility
  • Enhancements to support new network hardware capabilities and improved Quality of Service (QoS)

Other great capabilities include a new site-to-site VPN solution; huge enhancements to the Server Message Block (SMB) protocol, enabling VMs to run from a Server 8 file share; native NIC teaming; and consistent device naming. But I want to focus on the major network technologies that most affect virtualization.

Network Virtualization

Virtualization has always striven to abstract one resource layer from another, giving improved functionality and portability. But networking hasn't embraced this goal, and VMs are tied to the networking configuration on the host that runs them. Microsoft System Center Virtual Machine Manager (VMM) 2012 tries to link VMs to physical networks through its logical networks feature, which lets you create logical networks such as Development, Production, and Backup. You can then create IP subnets and virtual LANs (VLANs) for each physical location that has a connection to a logical network. This capability lets you create VMs that automatically connect to the Production network, for example; VMM works out the actual Hyper-V switch that should be used and the IP scheme and VLAN tag, based on the actual location to which the VM is deployed.

This feature is great. But it still doesn't help in scenarios in which I might be hosting multiple tenants that require their own IP schemes, or even one tenant that requires VMs to move between different locations or between private and public clouds, without changing IP addresses or policies that relate to the network. Typically, public cloud providers require clients to use the hosted IP scheme, which is an issue for flexible migration between on-premises and off-premises hosting.

Both these scenarios require the network to be virtualized, and the virtual network must believe that it wholly owns the network fabric, in the same way that a VM believes it owns the hardware on which it runs. VMs don't see other VMs, and virtual networks shouldn't see or care about other virtual networks on the same physical fabric, even when they have overlapping IP schemes. Network isolation is a crucial part of network virtualization, especially when you consider hosted scenarios. If I'm hosting Pepsi and Coca-Cola on the same physical infrastructure, I need to be very sure that they can't see each other's virtual networks. They need complete network isolation.

This virtual network capability is enabled through the use of two IP addresses for each VM, plus a virtual subnet identifier that indicates the virtual network to which a particular VM belongs. The first IP address is the standard address that's configured within the VM and is referred to as the customer address (using IEEE terms). The second IP address is the address that the VM's traffic actually uses on the physical network and is known as the provider address.

In the example that Figure 1 shows, we have one physical fabric. Running on that fabric are two separate organizations: red and blue. Each organization has its own IP scheme, which can overlap, and the virtual networks can span multiple physical locations. Each VM that is part of the virtual red or blue network has its own customer address. A separate provider address is used to send the actual IP traffic over the physical fabric.

Figure 1: Virtual networking example

You can see that the physical fabric has the network and compute resources and that multiple VMs run across the hosts and sites. The color of each VM corresponds to its virtual network (red or blue). Even though the VMs are distributed across hosts and locations, each virtual network is completely isolated from the other virtual networks and has its own IP scheme.
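
To give a feel for how a VM gets tied to one of these virtual networks, here's a minimal PowerShell sketch, assuming the Server 2012 Hyper-V cmdlets; the VM names and virtual subnet IDs are placeholder values:

# Place the red organization's VMs in one virtual subnet...
Set-VMNetworkAdapter -VMName "Red-VM01" -VirtualSubnetId 5001
Set-VMNetworkAdapter -VMName "Red-VM02" -VirtualSubnetId 5001

# ...and the blue organization's VMs in another, even if the IP schemes overlap.
Set-VMNetworkAdapter -VMName "Blue-VM01" -VirtualSubnetId 6001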

Two solutions, IP rewrite and Generic Routing Encapsulation (GRE), enable network virtualization in Server 8. Both solutions allow completely separate virtual networks with their own IP schemes (which can overlap) to run over one shared fabric.

IP rewrite. The first option is IP rewrite, which does exactly what the name suggests. Each VM has two IP addresses: a customer address, which is configured within the VM, and a provider address, which is used for the actual packet transmission over the network. The Hyper-V switch inspects the traffic that the VM sends out, uses the virtual subnet ID to identify the correct virtual network, and rewrites the source and target IP addresses from the customer addresses to the corresponding provider addresses. This approach requires many IP addresses from the provider address pool because every VM needs its own provider address. The good news is that because the IP packet isn't modified apart from the addresses, hardware offloads such as virtual machine queue (VMQ), checksum, and receive-side scaling (RSS) continue to function. IP rewrite adds very little overhead to the network process and gives very high performance.

Figure 2 shows the IP rewrite process, along with the mapping table that the Hyper-V host maintains. The host keeps a customer-to-provider address mapping, and each VM has its own unique provider address. The source and destination IP addresses of the original packet are changed as the packet passes through the Hyper-V switch. The arrows in the figure show the flow of IP traffic.

Figure 2: IP rewrite process
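
As a rough sketch of how the mapping table that Figure 2 shows might be populated, assuming the network virtualization (NetWNV) cmdlets in Server 2012: each VM gets a lookup record that pairs its customer address with a dedicated provider address. The addresses, MAC, and virtual subnet ID below are placeholder values.

# One record per VM: the customer address maps to a unique provider address.
# The TranslationMethodNat rule selects IP rewrite rather than encapsulation.
New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.11" -ProviderAddress "192.168.4.22" `
    -VirtualSubnetID 5001 -MACAddress "00155D012A01" -Rule "TranslationMethodNat"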

GRE. The second option is GRE, an Internet Engineering Task Force (IETF) standard. GRE wraps the originating packet, which uses the customer addresses, inside a packet that can be routed on the physical network by using the provider address and that includes the actual virtual subnet ID. Because the virtual subnet ID is included in the wrapper packet, VMs don't require their own provider addresses. The receiving host can identify the targeted VM based on the target customer address within the original packet and the virtual subnet ID in the wrapper packet. All the Hyper-V host that runs the originating VM needs to know is which Hyper-V host is running the target VM; it can then send the packet over the network.

The use of a shared provider address means that far fewer IP addresses from the provider IP pools are needed. This is good news for IP management and the network infrastructure. However, there is a downside, at least as of this writing. Because the original packet is wrapped inside the GRE packet, any kind of NIC offloading will break. The offloads won't understand the new packet format. The good news is that many major hardware manufacturers are in the process of adding support for GRE to all their network equipment, enabling offloading even when GRE is used.

Figure 3 shows the GRE process. The Hyper-V host still maintains the customer-to-provider address mapping, but this time the provider address is shared per Hyper-V host virtual switch. The original packet is unchanged; instead, it's wrapped in the GRE packet as it passes through the Hyper-V switch, with the correct source and destination provider addresses added, along with the virtual subnet ID.

Figure 3: GRE
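
A comparable sketch for GRE, again assuming the Server 2012 network virtualization cmdlets and placeholder values, differs mainly in the rule and in the fact that many VMs share one provider address per host:

# The TranslationMethodEncap rule selects GRE encapsulation. Note that both
# records point at the same provider address, which belongs to the host.
New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.11" -ProviderAddress "192.168.4.10" `
    -VirtualSubnetID 5001 -MACAddress "00155D012A01" -Rule "TranslationMethodEncap"
New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.12" -ProviderAddress "192.168.4.10" `
    -VirtualSubnetID 5001 -MACAddress "00155D012A02" -Rule "TranslationMethodEncap"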

In both technologies, virtualization policies are used between all the Hyper-V hosts that participate in a specific virtual network. These policies enable the routing of the customer address across the physical fabric and track the customer-to-provider address mapping. The virtualization policies can also define the virtual networks that are allowed to communicate with other virtual networks. The virtualization policies can be configured by using Windows PowerShell, which is a common direction for Server 8. This makes sense: When you consider massive scale and automation, the current GUI really isn't sufficient. The challenge when using native PowerShell commands is the synchronous orchestration of the virtual-network configuration across all participating Hyper-V hosts.
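
To give a feel for what those policies contain, here's a minimal per-host sketch, assuming the Server 2012 network virtualization cmdlets; the interface index, routing-domain GUID, and prefixes are placeholder values, and the same records must be kept consistent on every participating host.

# Assign a provider address to the physical NIC that carries the virtualized traffic.
New-NetVirtualizationProviderAddress -InterfaceIndex 12 -ProviderAddress "192.168.4.10" -PrefixLength 24

# Describe how the customer address space routes within the virtual network.
New-NetVirtualizationCustomerRoute -RoutingDomainID "{7C1B6A14-0000-0000-0000-000000005001}" `
    -VirtualSubnetID 5001 -DestinationPrefix "10.1.1.0/24" -NextHop "0.0.0.0"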

Both options sound great, but which one should you use? Longer term, GRE should be the network virtualization technology of choice, provided that your network hardware supports it. That hardware support matters because without it, GRE breaks offloading and the work must be performed in software, which is very slow. Also, because of the reduced provider address requirements, GRE places fewer burdens on the network infrastructure. Until your networking equipment supports GRE, however, you should use IP rewrite, which requires no changes on the network infrastructure equipment.

Extensible Hyper-V Virtual Switch

One frequent request from clients has been the ability to add functionality to the Hyper-V switch, such as enhanced packet-filtering capabilities, firewall and intrusion detection at the switch level, switch forwarding, and utilities to help sniff data on the network. Windows already has rich capabilities around APIs and interfaces, specifically Network Driver Interface Specification (NDIS) filter drivers and Windows Filtering Platform (WFP) callout drivers, that let third parties integrate with the OS. The Hyper-V extensible switch uses the same interfaces that partners are already using, making it easy for vendors to adapt their solutions to integrate directly into the Server 8 Hyper-V extensible switch. There are four specific types of extension for the Hyper-V switch, as Table 1 shows.

Notice that these extensions don't completely replace the Hyper-V switch. Rather, they enhance it, enabling organizations to be specific about the layers of additional functionality that are required within the environment, without needing to perform a complete switch replacement. Because the extensions are embedded within the Hyper-V switch, the capabilities apply to all traffic, including VM-to-VM traffic on the same Hyper-V host and traffic that traverses the physical network fabric. The extensions fully support live migration and can be managed by using GUI tools, Windows Management Instrumentation (WMI) scripting, and PowerShell cmdlets, providing a consistent management feel across the extensions and core Hyper-V capabilities. The extensions for the Hyper-V switch are certifiable under the Windows 8 certification program, helping the extensions to meet an expected level of quality.
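
As a small illustration of that management experience, the following PowerShell sketch lists and toggles extensions on a switch; the switch name is a placeholder, and the extension names shown are the in-box ones you'd typically expect on a Server 8 host.

# List the extensions bound to a given virtual switch and their state.
Get-VMSwitchExtension -VMSwitchName "ExternalSwitch"

# Turn an individual extension on or off without replacing the switch itself.
Enable-VMSwitchExtension -VMSwitchName "ExternalSwitch" -Name "Microsoft Windows Filtering Platform"
Disable-VMSwitchExtension -VMSwitchName "ExternalSwitch" -Name "Microsoft NDIS Capture"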

Getting the Most From Your Hardware

Software enhancements can go only so far. At a certain point, hardware needs to change to provide new capabilities and performance levels. Thankfully, in recent years there have been many hardware enhancements to networking, mainly in the 10Gb world. Server 8 can leverage 10Gb Ethernet to take advantage of these enhancements.

QoS. When you look at a cloud scenario that functions as both public and private and that can have multiple tenants, meeting service level agreements (SLAs) with different tenants, including network bandwidth availability, becomes extremely important. One VM mustn't consume all the network bandwidth, starving other VMs. In today's converged fabrics, in which network and storage traffic share a physical cable, it's also vital to keep one type of traffic from consuming more bandwidth than is desired.

Server 8 includes a Hyper-V QoS capability. This capability uses PowerShell to make it easy to set weights for VMs, in addition to setting minimum bandwidth allocations. These settings help VMs get the required amount of bandwidth in times of contention. When there's no contention, VMs can consume whatever bandwidth is available, so that they're as high-performing and responsive as possible.
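
A minimal sketch of that configuration, assuming the Server 2012 Hyper-V cmdlets; the switch, adapter, and VM names and the weight values are placeholders:

# Create a virtual switch whose minimum-bandwidth reservations are weight-based.
New-VMSwitch -Name "TenantSwitch" -NetAdapterName "10GbE-1" -MinimumBandwidthMode Weight

# During contention, Tenant A's VM gets five times the share of Tenant B's VM;
# when there's no contention, both can use whatever bandwidth is free.
Set-VMNetworkAdapter -VMName "Tenant-A-VM" -MinimumBandwidthWeight 50
Set-VMNetworkAdapter -VMName "Tenant-B-VM" -MinimumBandwidthWeight 10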

This software QoS is focused at the virtual switch port level. Hardware QoS is also available, by using a new capability in many of today's network infrastructures: Data Center Bridging (DCB). DCB allows classification of all the network traffic that's being sent over the physical NIC, whether the traffic is from the Hyper-V host or a VM. In Server 8, the traffic can be divided into eight buckets by using classifications. For example, one bucket could be for iSCSI traffic, another for SMB, and a third for general IP traffic. For each bucket, DCB can configure how much bandwidth is allocated, so that no single type of traffic consumes all available bandwidth.
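
Here's a hedged sketch of what that classification might look like with the Server 2012 QoS cmdlets; the adapter name, priority values, and percentages are illustrative only.

# Tag SMB and iSCSI traffic with 802.1p priority values...
New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3
New-NetQosPolicy -Name "iSCSI" -iSCSI -PriorityValue8021Action 4

# ...then reserve bandwidth for each bucket; the remainder is left for general IP traffic.
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 40 -Algorithm ETS
New-NetQosTrafficClass -Name "iSCSI" -Priority 4 -BandwidthPercentage 30 -Algorithm ETS

# Apply DCB settings on the converged physical adapter.
Enable-NetAdapterQos -Name "10GbE-1"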

When you consider software QoS and hardware QoS with DCB, the big difference is that software QoS occurs at a VM level and works through the Hyper-V switch, whereas hardware QoS is VM-independent and works across all the types of traffic going over the network. Therefore, hardware QoS enables guaranteed service levels for different types of traffic across a single fabric.

Single Root I/O Virtualization. Another great enhancement that takes advantage of NIC improvements is Single Root I/O Virtualization (SR-IOV). SR-IOV allows one PCI Express network device to present itself as multiple devices to VMs. This means that a physical NIC can actually present multiple virtual NICs, which in SR-IOV terms are called virtual functions (VFs). Each VF is of the same type as the physical card and is presented directly to a specific VM. The communication between the VM and the VF completely bypasses the Hyper-V switch, because the VM uses Direct Memory Access (DMA) to communicate with the VF. Therefore, the communication between the VM and the VF is very fast and has very low latency. Neither the VMBus nor the Hyper-V switch is involved in the network flow from the physical NIC to the VM, as Figure 4 shows. Bypassing the Hyper-V switch improves network performance, but it also means that any features exposed through the virtual switch (such as switching, ACL checking, QoS, DHCP Guard, and third-party extensions) no longer apply to traffic that uses SR-IOV.

Figure 4: Traffic flow with traditional Hyper-V networking and with SR-IOV

With traditional Hyper-V networking, all traffic flows between the physical NIC and the VM through the Hyper-V Layer 2 virtual switch. With SR-IOV, the virtual switch is completely bypassed, as Figure 4 shows.

The first time I heard about SR-IOV, I thought, "The whole point of virtualization is to abstract the virtual instance from the underlying hardware for maximum portability. Won't SR-IOV break my mobility because the VM is directly talking to hardware, and therefore I won't be able to Live Migrate to a host that doesn't support SR-IOV?" Hyper-V takes care of this issue. Behind the scenes, the Network Virtualization Service Client (NetVSC) in the VM creates two paths for the VM network adapter (also in the VM). One path is via SR-IOV, and one uses the traditional VMBus path, which uses the Hyper-V switch. When the VM is running on a host with SR-IOV, the SR-IOV path is used and the VMBus is closed. But if the VM is moved to a host without SR-IOV, then NetVSC closes the SR-IOV path and opens the VMBus path, which is transparent to the VM. This means that you don't lose any mobility, even when using SR-IOV.

SR-IOV requires both the server motherboard and the network adapter to support SR-IOV. In addition, the OS must support SR-IOV, which Server 8 does.
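
Assuming supported hardware, the Hyper-V side of the configuration is small; here's a sketch with placeholder switch, adapter, and VM names.

# SR-IOV must be enabled when the external switch is created; it can't be added later.
New-VMSwitch -Name "SriovSwitch" -NetAdapterName "10GbE-1" -EnableIov $true

# Request a virtual function for the VM's network adapter (a weight of 0 disables SR-IOV).
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100

# Confirm the setting on the VM's adapter.
Get-VMNetworkAdapter -VMName "VM01" | Select-Object VMName, IovWeight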

Dynamic VMQ

The final enhancement that I want to discuss is dynamic virtual machine queue (VMQ). VMQ was actually introduced in Windows Server 2008 R2. VMQ allows separate queues to exist on the network adapter, with each queue being mapped to a specific VM. VMQ removes some of the switching work from the Hyper-V switch: If the data is in a particular queue, then the switch knows that the data is meant for a specific VM. The difference between VMQ and SR-IOV is that with VMQ, the traffic still passes through the Hyper-V switch; the NIC presents separate traffic queues rather than entire virtual devices. In Server 2008 R2, the assignment of a VMQ to a VM is static. Typically, the assignment is first-come, first-served, as each NIC supports only a certain number of VMQs.

In Server 8, this assignment is dynamic, so the Hyper-V switch constantly monitors the network streams for each VM. If a VM that was very quiet suddenly becomes busy, then that VM is allocated a VMQ. If no VMQs are available, then the VMQ is taken from a VM that might previously have been busy but is now quiet. Again, network performance benefits.
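
VMQ needs no special configuration to benefit from the dynamic behavior, but you can inspect and tune it; a brief sketch with a placeholder VM name follows.

# See which physical adapters support VMQ and how many receive queues each exposes.
Get-NetAdapterVmq

# VmqWeight defaults to 100; setting it to 0 turns VMQ off for that virtual adapter.
Set-VMNetworkAdapter -VMName "VM01" -VmqWeight 100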

Cloud-Optimized OS

With Server 8 being Microsoft's cloud-optimized OS and supporting private and public clouds, the changes to networking enable flexibility in VM deployments and mobility while taking advantage of the most recent network hardware improvements. As I stated at the beginning of this article, the OS includes other networking enhancements as well. I'll cover those enhancements, which apply to both Hyper-V and non-virtualized scenarios, in future articles.
