
Windows Server 2012 R2 Hyper-V: What's New, Part 2

Additional new features include storage and network enhancements, network virtualization gateway, and Linux-specific improvements

In part 1 of this article series, "Windows Server 2012 R2 Hyper-V," I focused on several core enhancements to Hyper-V, including Generation 2 virtual machines (VMs), a new activation option, live cloning of VMs, enhanced session mode, and mobility and replication enhancements. In this article I focus on several additional new features, including storage and network enhancements, network virtualization gateway, and finally (perhaps strangely, if you aren't familiar with Hyper-V) improvements that are specific to Linux VMs.

Dynamically Resizing VHDX

Windows Server 2012 introduced version 2 of its Virtual Hard Disk (VHD) format—VHDX—which not only provides performance improvements over VHD, making dynamic mode the default, but also greatly increases the maximum size of a single VHDX file, from 2TB to 64TB. This means size is no longer a reason to avoid VHDs or an excuse to use pass-through storage. As a point of interest, at every presentation I give on Windows Server and Hyper-V, I ask the audience what the largest NTFS volume they have is; so far the largest I've heard of is a 20TB NTFS volume (the second largest is 14TB). This means that the largest volumes I've ever encountered could be contained many times over in a single VHDX file. With Windows Server 2012's CHKDSK changes, which reduce downtime during a fix to 8 seconds at most, I think we'll start to see increasingly large NTFS volumes; however, 64TB should be sufficient for a long time.
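To put the new ceiling in context, a large dynamic VHDX can be created directly with the Hyper-V module's New-VHD cmdlet. The following is a minimal sketch; the path is purely illustrative.

# Create a 64TB dynamically expanding VHDX
New-VHD -Path "D:\VHDs\BigData.vhdx" -SizeBytes 64TB -Dynamic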

Related: Top 10 Windows Server 2012 R2 Hyper-V New Features

Even with the huge advancements in dynamic VHDX performance, many organizations prefer to use fixed-size VHDX files, in which all space is allocated at creation. The reasons vary but include removing the risk of running out of physical space on the underlying storage (dynamic disks grow, and proper monitoring isn't always in place), possible increased fragmentation when using dynamic disks, and the small performance hit a dynamic disk incurs as it expands. These same organizations wanted the ability to resize a VHDX file when required, making it larger—but in Windows Server 2012 this meant shutting down the VM that was using the VHDX file. This limitation goes away in Windows Server 2012 R2 Hyper-V with the introduction of online resize of VHDX files, which allows a VHDX file's size not only to be increased but also to be decreased, based on the amount of unallocated space within the file.

The ability to dynamically resize a VHDX file requires that the VHDX file be attached to the VM's SCSI controller. For a Generation 1 VM, this means it can't be the system/boot drive but rather an additional disk; for a Generation 2 VM, which uses only the SCSI controller, any attached VHDX file can be resized. The actual resize operation can be performed using the graphical tools or via PowerShell. Once a VHDX file has been expanded, the additional unallocated space becomes visible in Disk Manager within the VM, at which point new volumes can be created in the new space or existing volumes extended. If a VHDX contains unallocated space (i.e., space that isn't consumed by volumes), the file can be dynamically shrunk down to that size. To shrink a VHDX file further, you must first create additional unallocated space within it by shrinking or deleting volumes in the VM.
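From PowerShell, the resize is performed with the Hyper-V module's Resize-VHD cmdlet. The following is a minimal sketch; the path and sizes are illustrative, and the disk is assumed to be attached to a running VM's SCSI controller.

# Grow a data disk while the VM is running
Resize-VHD -Path "D:\VMs\Data1.vhdx" -SizeBytes 500GB

# Shrink the same disk as far as its unallocated space allows
Resize-VHD -Path "D:\VMs\Data1.vhdx" -ToMinimumSize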

I walk through dynamically resizing VHDX files in the video "Dynamic VHDX Resize in Windows Server 2012 R2 Hyper-V." Note that dynamic resize of a VHDX works with Linux guests in addition to Windows guests.

Shared VHDX

With virtualization becoming the default for most workloads in the majority of organizations, guest clusters are frequently required. A guest cluster is a cluster created among VMs rather than physical servers, and it's a fully supported scenario. Some guest clusters require access to shared storage; historically, there have been three ways of providing it:

  • Using iSCSI via the iSCSI initiator that's built in to the operating system (Windows Server 2008 R2 Hyper-V)
  • Using virtual Fibre Channel to access Fibre Channel–connected LUNs (Windows Server 2012 Hyper-V)
  • Using SMB 3.0 file shares (Windows Server 2012)

The problem with all of these approaches is that the VM directly accesses, and is aware of, the underlying storage fabric, which isn't ideal in many situations—especially for hosting organizations. It also breaks the abstraction between the VM and physical resources, and it limits the ability of VM users to self-serve, because they don't have the rights (or, likely, the knowledge) to create LUNs on SANs for use with iSCSI or virtual Fibre Channel.

Windows Server 2012 R2 introduces shared VHDX, which, as its name implies, enables a VHDX file to be shared between multiple VMs that see it as shared SAS storage, meaning it can be used as clustered storage within the VMs. Shared VHDX is available to both Generation 1 and Generation 2 VMs, but the disk must be connected via the SCSI controller on each VM. In addition, the shared VHDX file must be stored on Cluster Shared Volumes (CSV) or accessed via SMB 3.0 storage on a Scale-Out File Server, which itself uses CSV to store the files. This is because the sharing code is actually part of CSV rather than the regular file system. If you want to try shared VHDX without CSV, you can force the CSV driver to load, but this approach won't survive a reboot and isn't supported. I outline the process in the FAQ "Manually Load Shared VHDX Driver in Windows Server 2012 R2."

Configuring a VHDX file to be shared is very simple. As Figure 1 shows, you simply select the Enable virtual hard disk sharing option in the Advanced Features section of the VHD properties. In PowerShell, you add the -ShareVirtualDisk parameter to the Add-VMHardDiskDrive cmdlet, as the sketch after Figure 1 shows. I walk through the various types of guest cluster storage in the video "Guest Cluster Storage Options in 2012 R2 Hyper-V."

Figure 1: Enabling Shared VHDX
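As a hedged example, the following sketch attaches the same VHDX file, stored on a CSV volume, to two guest cluster nodes; the VM names and path are illustrative.

Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -ShareVirtualDisk
Add-VMHardDiskDrive -VMName "GuestNode2" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -ShareVirtualDisk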

Note that if you use shared VHDX you can no longer create checkpoints of the VM, back up at the Hyper-V host level, use storage QoS (which I cover next), or use Hyper-V Replica. None of these actions work if multiple VMs are connected to the same storage.

Storage QoS and Resource Metering

With more workloads than ever being virtualized, and with single deployments now serving different users, different business groups, and in some cases different tenants, it's important to ensure that each user gets a fair share of resources—or at least the amount of resources they paid for. Processor, memory, and network resources have long had Quality of Service (QoS) controls available; in Windows Server 2012 R2, storage does as well.

You can enable QoS and specify minimum and maximum I/O operations per second (IOPS) values in a VHD's Advanced Features view. These two values behave very differently. The maximum IOPS value is a hard cap that the VHD cannot exceed. The minimum IOPS value isn't guaranteed, because the Hyper-V host might not be the exclusive user of the storage and other workloads might compete for it, making it impossible for Hyper-V to guarantee a certain number of IOPS. Instead, if the number of IOPS available to a VHD drops below the minimum IOPS value, an event log entry is generated, along with a Windows Management Instrumentation (WMI) event, notifying the Hyper-V administrator that the minimum IOPS value isn't being met so that action can be taken.
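The same settings can also be applied from PowerShell with Set-VMHardDiskDrive, assuming its MinimumIOPS and MaximumIOPS parameters. This is a sketch; the VM name and disk location are illustrative.

# Cap the disk at 1,000 IOPS and raise an event if it can't get at least 300
Set-VMHardDiskDrive -VMName "SQLVM01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 -MinimumIOPS 300 -MaximumIOPS 1000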

Windows Server 2012 R2 enhances resource metering, which gathers metrics about VMs, to add information about storage, such as the following (a short PowerShell sketch for retrieving these metrics appears after the list):

  • Average IOPS
  • Average latency
  • Data written
  • Data read
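A minimal sketch for collecting these metrics, assuming a VM named VM01: resource metering is switched on with Enable-VMResourceMetering, and Measure-VM then reports the accumulated values, including the storage figures listed above.

Enable-VMResourceMetering -VMName "VM01"
# ...let the VM run for a while, then report everything that was collected
Measure-VM -VMName "VM01" | Format-List *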

Deduplication Support for VDI Scenarios

Windows Server 2012 introduced a native block-level deduplication technology. However, it didn't work on exclusively locked files, such as VHDs used by running Hyper-V VMs. This limitation is removed in Windows Server 2012 R2, which means in-use VHDs can now be deduplicated. The only workload currently supported for deduplication is VMs used as part of a Virtual Desktop Infrastructure (VDI) deployment (i.e., running a client operating system). Although there's no block on deduplicating VHD files used by VMs running a server operating system, doing so isn't supported. Think carefully before deduplicating unsupported workload types: some server workloads optimize disk placement, and if another process deduplicates the blocks of the disk unbeknownst to that workload, performance might suffer—or worse.
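As a sketch of enabling deduplication on a volume holding VDI virtual hard disks (the drive letter is illustrative, and I'm assuming the HyperV usage type is the one intended for VDI storage):

# Add the deduplication role service, then enable dedup on the VDI storage volume
Install-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType HyperV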

Virtual Receive Side Scaling

10Gbps network adapters are becoming more prevalent, and there are instances when specific VMs need to leverage that kind of bandwidth. A VM typically accesses the external network through network adapters that are added to the VM and bound to an external virtual switch, which itself is bound to a physical network adapter or a team of network adapters. Network communications require a large amount of processor resources. A single processor core can easily handle the load of a 1Gbps network connection; however, a single core becomes a bottleneck on a 10Gbps connection, typically producing speeds between 3Gbps and 4Gbps.

The solution to this single-core bottleneck on a physical system is Receive Side Scaling (RSS). RSS works by running incoming traffic through an algorithm that separates the different streams of traffic, which can then be processed by different processor cores to enable full bandwidth usage.

Virtual RSS (vRSS) provides the same capability for VMs, which can now use multiple vCPUs to process incoming network traffic to remove previous speed limitations. Note that virtual network adapters created on the host Hyper-V server can't use vRSS and are limited to the bandwidth possible through a single processor core.
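vRSS is enabled inside the guest, on the VM's network adapter, rather than on the host. A minimal sketch, assuming the guest adapter is named Ethernet:

# Run inside the VM: spread receive processing across multiple vCPUs
Enable-NetAdapterRss -Name "Ethernet"
# Confirm the adapter's RSS settings
Get-NetAdapterRss -Name "Ethernet"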

SMB Bandwidth Management

Although SMB was previously used mainly as a means to access file shares for storing documents, SMB 3.0 is a true enterprise-class file-based protocol that can be used for storing VMs, SQL Server databases, and more. In addition, Windows Server 2012 R2 live migration can use the SMB protocol to take advantage of Remote Direct Memory Access (RDMA)–capable network adapters via SMB Direct. This means the single SMB protocol can now carry several very different types of traffic, and an existing network QoS policy would apply to all SMB traffic equally.

A new granular SMB bandwidth management feature has been added in Windows Server 2012 R2 that allows separate QoS policies to be applied to different types of SMB traffic. The types of SMB traffic that can have policies applied are Live Migration, Virtual Machine, and default (everything else).

To utilize SMB bandwidth management, you must first add the SMB Bandwidth Limit feature (FS-SMBBW). Then you can use the Set-SMBBandwidthLimit PowerShell cmdlet to configure the limits for the various types of SMB traffic, as follows:

Set-SMBBandwidthLimit -Category LiveMigration -BytesPerSecond 4GB
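The feature install and a quick check of the configured limits can also be done from PowerShell; a brief sketch:

# Install the SMB Bandwidth Limit feature, then review the configured limits
Install-WindowsFeature FS-SMBBW
Get-SmbBandwidthLimit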

Multi-Tenant Hyper-V Network Virtualization Gateway

Network virtualization and Software-Defined Networking (SDN) in general are hot topics for many organizations; the ability to abstract the networking viewed by VMs from the physical network fabric is a compelling feature, but many organizations have struggled to get started with the technology. Although Windows Server 2012 offered a full-featured network virtualization solution that allowed many virtual networks to be defined using completely different and even overlapping IP schemes from the physical network fabric, there was no built-in way to actually connect these virtual networks to other networks or even the Internet. The solution was to use third-party routers to link virtual networks to other networks.

Windows Server 2012 R2 introduces Hyper-V Network Virtualization (HNV) Gateway, which provides a software-based gateway solution. Three types of gateway functionality are available:

  • Forwarding Gateway. A Forwarding Gateway can be used if the IP scheme on the virtual network is essentially an extension of your existing IP scheme and would be routable on the network. The gateway simply forwards packets between the physical network fabric and the virtual network. An HNV Gateway in forwarding mode supports only a single virtual network. Therefore, if you need forwarding for 10 virtual networks, you need 10 separate HNV Gateways.
  • Network Address Translation (NAT) Gateway. If the IP schemes used on the virtual networks wouldn't be routable on the physical fabric, or if you have overlapping IP schemes between virtual networks, then NAT must be used between the virtual network and the physical fabric. An HNV Gateway in NAT mode can support as many as 50 virtual networks.
  • Site-to-site (S2S) Gateway. An S2S Gateway provides a connection from a virtual network to another network using a VPN connection. In most enterprise environments, IP connectivity already exists between locations, so this wouldn't be required. However, consider a hosting scenario in which a tenant wants to talk to its on-premises network. The HNV S2S Gateway could be used to connect the tenant's virtual network to its physical network. A single gateway can support 200 VPN tunnels, and a single virtual network can have multiple S2S connections to provide redundancy in case of connection failure. If I wanted to connect an on-premises virtual network and a Windows Azure virtual network, I would use the S2S Gateway option.

Although the gateway functionality is built in to Windows Server 2012 R2, to actually implement this functionality you must use System Center Virtual Machine Manager (VMM) 2012 R2—and realistically you need VMM 2012 R2 to use network virtualization at all. It's possible to use PowerShell to configure and maintain a network virtualization environment, but the overhead in updating policy tables as VMs move between hosts and are added/deleted is terrible. I therefore don't recommend this approach.

Enhanced Linux Support with Dynamic Memory and File-Consistent Backup

Windows Server 2012 made a huge investment in Linux. Linux became a first-class operating system for Hyper-V, with nearly all Hyper-V features available to Linux users. In addition, Microsoft invested a lot of resources in the Linux core, which resulted in Hyper-V Integration Services becoming a core part of Linux rather than a separate download. However, some key features that were available to Windows users still weren't available to Linux users. Windows Server 2012 R2 reduces this gap (although a few holes still exist). Key enhancements for Linux include the following:

  • Improved video and mouse support, removing the double cursor that was previously common.
  • Dynamic Memory support, which allows memory to be hot-added to and removed from Linux VMs in a similar manner to how it works for Windows guests (a short PowerShell sketch follows this list).
  • Online backup to provide a file-consistent backup experience. This still differs from Windows, which provides an application-consistent backup capability through the Volume Shadow Copy Service (VSS) to ensure that applications have written their data to disk at the time of a backup. Linux doesn't have a concept similar to VSS, which is why only file-consistent backup is possible, enabled through a file-system snapshot driver. No special action is required; you simply back up a Linux VM from the Hyper-V host through Windows Backup or a backup application such as System Center 2012 R2 Data Protection Manager.
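Here's a minimal sketch of enabling Dynamic Memory for a Linux VM from the host, assuming a VM named LinuxVM and a distribution recent enough to include the Hyper-V Dynamic Memory driver:

# Run on the Hyper-V host while the VM is powered off
Set-VMMemory -VMName "LinuxVM" -DynamicMemoryEnabled $true -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB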

Hyper-V isn't just for Windows workloads—it's also a great choice for Linux workloads. System Center 2012 R2 also provides support for Linux in most of its components, including Configuration Manager, Operations Manager, Virtual Machine Manager, Data Protection Manager, and Orchestrator.

One of the Leading Hypervisors

Windows Server 2012 R2 Hyper-V includes numerous new features. When you consider the changes in Hyper-V from Windows Server 2008 to Windows Server 2012 R2, the leaps in scalability and functionality are amazing for such a relatively short time period. It's no wonder that Hyper-V is one of the two leading x86 hypervisors.

Don't forget that Microsoft also offers Hyper-V Server 2012 R2, a free edition of Hyper-V. The free edition is a great choice when running a pure Linux or client operating system workload such as VDI, because such workloads don't need the Windows Server guest operating system licenses that are included with Windows Server 2012 R2 Standard and Datacenter.
