Windows Server 2012 and Hyper-V are fundamental building blocks of Microsoft's private cloud strategy. The most recent Microsoft server OS comes with the Hyper-V 3.0 hypervisor, which features many changes. Hyper-V 3.0 might become the first Microsoft virtualization platform to truly challenge VMware vSphere.
Among the Hyper-V changes are several new security features. Network Virtualization is Microsoft's first step in the Software-Defined Networking (SDN) space. In SDN, the control of network traffic is managed by software that runs outside the physical network hardware, allowing more flexible network management and configuration. From a security point of view, SDN and Network Virtualization enable organizations and cloud providers to better isolate virtual machines (VMs) on the network level.
There are also many smaller—though no less important—security-related changes in Hyper-V 3.0. Good examples are the new extensible virtual network switch, the new Hyper-V Administrators group, and enhanced Windows BitLocker Drive Encryption support.
Defining Network Virtualization
With Network Virtualization, Microsoft extends its VM isolation capabilities from the host to the network layer. Isolation is crucial in multi-tenant cloud solutions, in which the applications and services of different organizations or organizational departments are hosted on the same physical server and network infrastructure. For example, a cloud provider that hosts services for both Apple and Samsung certainly wouldn't want Apple to sneak into Samsung's VMs or network, or vice versa.
Similar to the way that server virtualization allows you to set up multiple isolated VMs on a single host, Hyper-V 3.0 Network Virtualization allows you to run multiple isolated virtual networks on the same physical network. Network Virtualization leverages a software-based abstraction layer that sits on top of the physical network and is based on the concept of virtual subnets. A virtual subnet represents a broadcast boundary that ensures that only VMs on the same virtual subnet can communicate with one another. As such, virtual subnets allow administrators to set up different isolated broadcast domains between VMs.
Although Hyper-V has supported the use of virtual LANs (VLANs) for the creation of isolated virtual networks since Windows Server 2008, VLANs have limited scalability and flexibility. VLANs can support only a limited number of isolated tenant networks. This limit exists primarily because switches typically don't support more than 1,000 VLAN IDs out of the theoretical limit of 4,096. According to Microsoft, Network Virtualization can support more than 16 million virtual networks.
Furthermore, VLANs lack flexibility. They are poorly suited for dynamic cloud environments, in which tenant VMs regularly join and leave the data center and migrate across physical servers for load-balancing or capacity-management purposes. VLAN management is complex and requires reconfigurations on the switch level when a VM is moved to another host. Server 2012 also adds support for private VLANs (PVLANs), through an extension on the level of the Hyper-V virtual switch. Although PVLANs can increase the level of isolation between VMs, they don't counter the problem of complex VLAN management across virtual and physical networking devices. This problem can be addressed only through the use of Network Virtualization.
Under the Network Virtualization Hood
With Network Virtualization, each VM is assigned two IP addresses. One IP address is visible to the VM and is relevant only in the context of a given tenant virtual or software-defined network. This address is called the Customer Address (CA). The second IP address is relevant only in the context of the physical network. This address is called the Provider Address (PA). The decoupling of CA and PA brings several benefits.
First, customers can easily move VMs between data centers. Thanks to the new abstraction layer, you can move a VM to the data center of another cloud service provider, without reconfiguring the VM's IP and network configuration and without changing all the IP address–based policies in the organization. You also don't need to worry anymore about the IP configuration of other tenants' VMs that are hosted in the same data center. When Network Virtualization is enabled, VMs with identical IP addresses can coexist on the same Hyper-V host and even on the same network, without IP address conflicts.
Network Virtualization also allows the live migration of VMs between physical servers on different subnets, without service interruption. Because each VM has two IP addresses, the PA can be changed without affecting the CA. A user or application that talks to the VM by using the CA will not experience interruptions and will be unaware that the VM has physically moved to a different subnet.
Besides CAs and PAs, Network Virtualization uses a third important component: the Virtual Subnet ID (VSID). Each Network Virtualization virtual subnet is uniquely identified using a VSID. VSIDs allow Hyper-V hosts to tag traffic from different virtual subnets and to differentiate the traffic of VMs that have the same CA. The Network Virtualization software logic encapsulates the VSID, CA, and PA into each network packet.
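On a Server 2012 host, the Network Virtualization filter driver is bound to the physical NIC and a VSID is then tagged onto a VM's virtual network adapter with the Hyper-V PowerShell module. A minimal sketch, in which the adapter name, VM name, and VSID value are placeholders:

```powershell
# Bind the Network Virtualization filter driver (ms_netwnv)
# to the host's physical network adapter.
Enable-NetAdapterBinding -Name "Ethernet" -ComponentID ms_netwnv

# Tag the VM's virtual network adapter with a virtual subnet ID.
Get-VMNetworkAdapter -VMName "Tenant1-VM1" |
    Set-VMNetworkAdapter -VirtualSubnetID 5001
```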
To allow specific tenant networks to span multiple virtual subnets (and thus IP subnets), VSIDs can also be grouped into a single customer network that is then uniquely identified by using a Routing Domain ID (RDID). In these situations, Network Virtualization's isolation will be enforced on the level of the defined customer networks. This is another difference between Network Virtualization virtual networks and traditional VLANs: VLANs can be linked only to a single IP subnet.
Network Virtualization requires only a Server 2012 Hyper-V host. With Network Virtualization the guest OS in the VM is totally unaware that its IP address is being virtualized. From the VM's perspective, all communication occurs using its CA. This also means that a VM that is part of a Network Virtualization–based network can run any OS: not only Windows 8 and Server 2012, but also older Windows versions and other OSs.
Figure 1 illustrates how Network Virtualization is implemented under the hood. Basically, Network Virtualization is implemented as a new network driver (called ms_netwnv) that can be bound to physical network adapter cards on each Server 2012 physical server and virtual server. The new Hyper-V virtual switch, which I'll come back to later in this article, calls on this network driver to encapsulate and de-encapsulate Network Virtualization network packets.
Transporting and Routing Network Virtualization
To transport and route IP packets with virtualized CAs across the physical network, Network Virtualization can use two mechanisms. The first mechanism is based on the Generic Routing Encapsulation (GRE) tunneling protocol that's defined in Request for Comments (RFCs) 2784 and 2890. In this context, GRE is used to encapsulate the network packets that are generated by a VM (with a CA) into packets that are generated by the host (with a PA). Together with other cloud industry players (e.g., HP, Intel, Emulex, Dell), Microsoft has submitted a draft to the Internet Engineering Task Force (IETF) to make the Network Virtualization variation of GRE (called NVGRE) a standard.
The second mechanism, IP address rewrite, can be compared with Network Address Translation (NAT). This mechanism rewrites packets with virtualized CAs to packets with PAs, which can be routed across the physical network.
At the time of writing, IP address rewrite is better suited for VMs with high throughput requirements (e.g., 10Gbps) because it can leverage network adapter card hardware-level offload mechanisms such as large send offload (LSO) and virtual machine queue (VMQ). A big disadvantage of IP address rewrite is that it requires one unique PA for each VM CA. Otherwise, differentiating and routing the network packets from and to VMs that belong to different tenants with overlapping IP addresses would not be possible.
Because GRE requires only one PA per host, Microsoft recommends using NVGRE over IP address rewrite for Network Virtualization. NVGRE can be implemented without making changes to the physical network switch architecture. NVGRE tunnels are terminated on the Hyper-V hosts, which handle all the encapsulation and de-encapsulation of GRE network traffic.
One disadvantage of GRE is that it can't leverage the network adapter card hardware-level offload mechanisms. Therefore, if you plan to use NVGRE to virtualize high network throughputs, I advise you to wait for the availability of network adapter cards that support NVGRE offloading. Microsoft expects vendors to release such cards for Server 2012 later this year. When using GRE, also watch out for existing firewalls that might have default GRE blocking rules. Always make sure that such firewalls are reconfigured to allow GRE (IP Protocol 47) tunnel traffic.
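On Windows-based firewalls in the traffic path, allowing the tunnel traffic comes down to a rule for IP protocol 47. A sketch using the built-in Windows Firewall cmdlets (the rule names are examples):

```powershell
# GRE is IP protocol number 47 -- a protocol, not a TCP/UDP port.
New-NetFirewallRule -DisplayName "Allow NVGRE inbound" `
    -Direction Inbound -Protocol 47 -Action Allow
New-NetFirewallRule -DisplayName "Allow NVGRE outbound" `
    -Direction Outbound -Protocol 47 -Action Allow
```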
Implementing Network Virtualization
The process for implementing and configuring Hyper-V 3.0 Network Virtualization differs from the process for setting up VLANs in Hyper-V. You can configure VLANs from the Hyper-V Manager in the VM network adapter settings. Network Virtualization configuration isn't part of a VM's configuration and can't be done from the Hyper-V Manager. This is because Network Virtualization is based on specific policies that are enforced on the virtual-switch level of a Hyper-V host.
To define Network Virtualization policies locally on the host, you must use Windows PowerShell scripts. To define the policies centrally, you can use the Microsoft System Center Virtual Machine Manager (VMM) Service Pack 1 (SP1) GUI. VMM is Microsoft's unified management solution for VMs. In larger Network Virtualization environments, I strongly recommend that you leverage VMM. VMM can run the correct PowerShell cmdlets on your behalf and enforce the Network Virtualization policies on the Hyper-V host through local System Center host agents. An important limitation at the time of this writing is that the VMM GUI can be used only to define IP address rewrite policies. To define NVGRE policies, you must use PowerShell scripts. To get you started configuring Network Virtualization by using PowerShell, Microsoft provides a sample PowerShell script in its Simple Hyper-V Network Virtualization Demo.
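The heart of such a script is the mapping between CAs and PAs. The following fragment, patterned on Microsoft's demo script, defines a PA on the host and a lookup record that maps a VM's CA to it; the addresses, interface index, MAC address, VSID, and customer GUID are all illustrative placeholders:

```powershell
# Routing domain (customer network) ID -- a placeholder GUID.
$cid = "{11111111-2222-3333-4444-000000000000}"

# Assign a provider address (PA) to the host's physical interface.
New-NetVirtualizationProviderAddress -InterfaceIndex 12 `
    -ProviderAddress "192.168.4.22" -PrefixLength 24

# Map the VM's customer address (CA) to the host's PA,
# using NVGRE encapsulation for this virtual subnet.
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
    -ProviderAddress "192.168.4.22" -VirtualSubnetID 5001 `
    -MACAddress "101010101105" -Rule "TranslationMethodEncap" `
    -CustomerID $cid
```

A matching lookup record must exist on every host that participates in the virtual network, which is exactly the policy distribution work that VMM automates.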
When you want to use NVGRE, you should also plan for Network Virtualization gateway functionality. A Network Virtualization gateway is needed to enable a VM on a virtual network to communicate outside of that virtual network. A Network Virtualization gateway understands and knows the Network Virtualization address-mapping policies. The gateway can translate network packets that are encapsulated with NVGRE to non-encapsulated packets, and vice versa.
Network Virtualization gateways come in different form factors. They can be built on a VPN gateway that creates a VPN connection to link two virtualized networks across a physical network. For this purpose, you can use a Server 2012 server that's running RRAS. Network Virtualization gateway functionality can also be provided by a dedicated networking device (e.g., a switch, a network appliance) that acts as a routing gateway for Network Virtualization. At the time of this writing, Network Virtualization gateway appliances are provided or being planned by F5 Networks and nAppliance Networks. To configure the Network Virtualization gateways, you can use PowerShell. To get started, see the sample script in the Microsoft article "Simple Hyper-V Network Virtualization Script with Gateway."
Luckily, VMM SP1 also includes extensions to centrally and automatically manage Network Virtualization Gateway policies. When a VM is created or updated, VMM can automatically update the routing topology of each Network Virtualization gateway device. For more information about Network Virtualization gateways, see the Microsoft article "Hyper-V Network Virtualization Gateway Architectural Guide."
For much more information about Network Virtualization in general, see the Microsoft article "Microsoft Windows Server 2012 Hyper-V Network Virtualization Survival Guide." A nice summary of the steps that you must follow to set up Network Virtualization is also available in the Microsoft TechNet blog post "Step-by-Step: Hyper-V Network Virtualization."
Extensible Virtual Switch
In Hyper-V 3.0, Microsoft also made significant changes to its virtual switch architecture—now referred to as the extensible switch. This Layer 2 virtual network switch can be configured on each Hyper-V host and is at the intersection of network traffic between VMs, between VMs and the Hyper-V host, and with external machines. The extensible switch is the Hyper-V component that enforces Network Virtualization policies and includes other interesting, security-related features.
Microsoft partners can build extensible switch extensions that use the Windows Filtering Platform (WFP) or Windows network device interface specification (NDIS) filters to monitor or modify network packets, authorize connections, or filter traffic. Thanks to this new architecture, the extensible switch can enforce security and isolation policies when connecting VMs to the network. A virtual switch is the ideal place to scan the inter-VM traffic on a host for malware. Classic malware and intrusion prevention solutions typically cannot scan this traffic and can examine only physical host-to-host traffic. A few Microsoft partners that have already jumped on the extensible-switch bandwagon are sFlow, which provides a monitoring extension; 5nine, which provides a virtual firewall extension; and NEC, which provides an OpenFlow-based monitoring extension. OpenFlow is an important open standard protocol for SDN.
The new extensible switch also supports the definition of PVLANs. Administrators can create PVLANs by assigning a VM to a primary VLAN and then to one or more secondary VLANs. Depending on the secondary VLAN mode, a VM can communicate only with VMs in the same secondary VLAN (community mode), with no other VMs in the same PVLAN (isolated mode), or with any VM in the same primary VLAN, regardless of secondary VLAN ID (promiscuous mode). As I pointed out earlier, PVLANs don't counter the complexity of VLAN management across virtual and physical networking devices. These problems can be addressed only through the use of Network Virtualization.
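A PVLAN assignment can be made per VM network adapter from PowerShell. A sketch in which the VM name and VLAN IDs are examples:

```powershell
# Place the VM's adapter in community mode on primary VLAN 100,
# secondary VLAN 200: it can talk to other VMs in secondary VLAN 200
# and to promiscuous ports on primary VLAN 100.
Set-VMNetworkAdapterVlan -VMName "Tenant1-VM1" -Community `
    -PrimaryVlanId 100 -SecondaryVlanId 200
```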
Another powerful extensible switch feature is the ACL-based isolation policies that can be defined and enforced on the extensible switch virtual ports. These policies are basically lists of Allow and Deny rules that ensure that VMs are isolated and can't communicate with other VMs based on their IP or MAC addresses. These extensible switch isolation policies can also be defined from VMM.
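Port ACLs are added per VM network adapter as Allow/Deny rules keyed on remote IP or MAC addresses. A sketch with example addresses and VM name:

```powershell
# Deny all traffic between this VM and the 10.0.0.0/8 range...
Add-VMNetworkAdapterAcl -VMName "Tenant1-VM1" `
    -RemoteIPAddress 10.0.0.0/8 -Direction Both -Action Deny

# ...then allow one specific peer (more specific rules win).
Add-VMNetworkAdapterAcl -VMName "Tenant1-VM1" `
    -RemoteIPAddress 10.0.0.10 -Direction Both -Action Allow
```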
Finally, the extensible switch supports new security features such as DHCP Guard, IPsec Task Offload, and protection against Address Resolution Protocol (ARP) poisoning. DHCP Guard can be used to prevent VMs from acting as a DHCP server. DHCP Guard works by dropping any packets that a VM attempts to send that would indicate that it's a DHCP server. DHCP Guard is a property that can be configured for each VM network adapter. You can do so from the Hyper-V Manager, in the Advanced Features section of the network adapter settings in the VM properties, as Figure 2 shows. I recommend that you enable this setting during the creation of your VM golden image.
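DHCP Guard can also be switched on from PowerShell rather than from the Hyper-V Manager GUI; the VM name is a placeholder:

```powershell
# Drop DHCP server offers originating from this VM's adapters.
Set-VMNetworkAdapter -VMName "Tenant1-VM1" -DhcpGuard On
```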
IPsec Task Offload allows Hyper-V to offload IPsec-related processing to a network adapter. This is possible only when the network adapter supports the feature. IPsec Task Offload can reduce the Hyper-V host-processor performance hit that's associated with the use of IPsec encryption algorithms. You can also enable this setting from Hyper-V Manager: Use the Hardware Acceleration section of the network adapter settings in the VM properties, as Figure 3 shows.
The extensible switch includes protection against ARP poisoning to ensure that malicious VMs can't launch an ARP poisoning–based man-in-the-middle attack. In such an attack, a malicious machine uses fake ARP messages to associate its MAC address with IP addresses that it doesn't own. This can make unsuspecting machines send messages to the malicious machine instead of the intended destination. To enable ARP poisoning protection, you must leave the Enable MAC address spoofing check box in its default clear state.
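Both of these per-adapter settings have PowerShell equivalents as well. A sketch in which the VM name and security-association count are examples:

```powershell
# Let the adapter offload up to 512 IPsec security associations to
# capable hardware, and keep MAC address spoofing in its default Off
# state so the ARP poisoning protection stays effective.
Set-VMNetworkAdapter -VMName "Tenant1-VM1" `
    -IPsecOffloadMaximumSecurityAssociation 512 `
    -MacAddressSpoofing Off
```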
To make it easier to configure the extensible switch and its extensions, Microsoft provides PowerShell cmdlets. You can use these to create automated scripts for extensible switch configuration, monitoring, or troubleshooting.
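For example, enumerating and enabling switch extensions is a two-cmdlet affair; the switch name below is an example, and the extension name assumes the built-in WFP filter:

```powershell
# List the extensions bound to a virtual switch...
Get-VMSwitchExtension -VMSwitchName "External Switch"

# ...and enable one of them by name.
Enable-VMSwitchExtension -VMSwitchName "External Switch" `
    -Name "Microsoft Windows Filtering Platform"
```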
Simplified Delegation, BitLocker Extension
Hyper-V 3.0 supports simplified administrative delegation through the introduction of the Hyper-V Administrators local security group. Members of this new group have complete access to all Hyper-V features. Administrators should use this group to control access to Hyper-V, instead of adding users to the local Administrators group. The new group is also a partial replacement for the Windows Authorization Manager (AzMan), which was previously the only available solution for setting up administrative delegation in Hyper-V. Administrators can continue to use AzMan for delegation scenarios that need more granularity and that go beyond assigning the complete Hyper-V Administrator role.
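Populating the group is the same as for any local security group; the domain account below is an example:

```powershell
# Delegate Hyper-V management without granting local admin rights.
net localgroup "Hyper-V Administrators" CONTOSO\jdoe /add
```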
Finally I want to point out that in Server 2012 you can also take advantage of the new BitLocker volume-level encryption features to protect the confidentiality and integrity of VM images that might be stored in less physically secure locations. Server 2012 BitLocker has been extended to support the encryption of OS and data volumes on Windows failover cluster disks, including cluster shared volumes. See "BitLocker Changes in Windows 8" for more information.
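For a clustered disk, the key extra step is an Active Directory SID-based protector for the cluster name object, so every node can unlock the volume. A sketch using the manage-bde tool, with example paths and account names:

```powershell
# Turn on BitLocker for a cluster shared volume with a recovery password...
manage-bde -on C:\ClusterStorage\Volume1 -RecoveryPassword

# ...and add a SID-based protector for the cluster name object,
# so any node in the cluster can unlock the volume.
manage-bde -protectors -add C:\ClusterStorage\Volume1 -sid CONTOSO\HVCLUSTER$
```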
Important vSphere Security Differentiators
The new security features are important differentiators when positioning Hyper-V against its close competitor, VMware vSphere. A good example is the new Hyper-V extensible switch. VMware also offers a virtual network switch, but it's available only in the high-end Enterprise Plus edition of vSphere, which obviously comes at an extra cost. The vSphere switch isn't open or extensible, nor does it come with some of the advanced security features (e.g., DHCP Guard, virtual port ACLs) of the Hyper-V switch. Such features can be added to vSphere only through the purchase of additional software, such as the App component of the VMware vCloud Networking and Security (vCNS) suite.
Similar observations can be made for the Hyper-V Network Virtualization feature. To obtain similar functionality in a vSphere environment, customers must call on the vCNS suite, which supports a technology known as VXLAN. VXLAN also requires the vSphere Distributed Switch (VDS), which comes only with the high-end edition of vSphere. (VMware is expected to ramp up in the SDN space soon, through its recent acquisition of Nicira.)
In summary, Hyper-V 3.0 comes with powerful new security features that also benefit the overall flexibility and manageability of Hyper-V–based multi-tenant clouds. Now is the time to learn more about the new Hyper-V. Also make sure that you familiarize yourself with the new capabilities in VMM SP1, which is an indispensable tool for managing larger Hyper-V deployments.