Windows 2008 R2 SP1 and Hyper-V: More than a Bunch of Fixes

See the sidebar to this article for more about virtualization RAM technologies.

When you think service pack, you probably think of minor updates and fixes. You think of phrases such as "increased reliability" and "performance gains," not "industry-leading new features." These perceptions would be accurate if I were talking about Windows 7 SP1—it has updates to resolve problems and improve performance and some minor feature updates related to third-party federation integration, HDMI audio performance, and XPS document rendering, but nothing to justify an article. When we look at SP1 for Windows Server 2008 R2, however, it's a different ball game, and to say it adds industry-leading new features is no exaggeration. SP1 is a game changer for virtualization, particularly Virtual Desktop Infrastructure (VDI).

SP1 for Server 2008 R2 makes some minor updates outside of Hyper-V, but in this article I'll be looking at the Hyper-V changes. Even before SP1, the hypervisors from different vendors had nearly reached parity, with little difference between them in terms of performance and functionality. Virtualization choices were made based on price, management, and integration with the rest of an organization's infrastructure. Hyper-V, however, had one weakness: It lacked memory overcommitment. With SP1, Microsoft has addressed this deficiency by avoiding memory overcommitment like the plague. Confused? Read on.

Hyper-V 2008 R2 SP1 introduced dynamic memory, which allows you to define an initial amount of RAM for a VM and the maximum amount of RAM it can be allocated. Hyper-V intelligently allocates memory to VMs beyond their initial amounts based on need and on the amount of physical RAM that's available. This is different from memory overcommitment, where you start each VM with the maximum amount of memory possible regardless of whether or how it's being used and hope you don't run out of resources. With dynamic memory, VMs are allocated additional memory when it's available, and memory can be reclaimed from other VMs that need it less.

Figure 1 shows the dialog for configuring memory for a VM hosted on Server 2008 R2 SP1. Notice that you can still use the old Static configuration, where you assign a VM a set amount of memory that is all allocated when the VM is turned on and can't be increased. The more interesting option is Dynamic, which you can see selected in the figure. Startup RAM is the memory allocated to the VM when it's initially turned on, and Maximum RAM is the size the VM's memory can grow to, based on the VM's needs and physical memory availability. The default value for Maximum RAM is 64GB (the maximum supported by a VM in Hyper-V). I suggest you set more realistic values, both for planning and to protect you from some rogue process in one VM that allocates as much RAM as it can. In my example, I've set the maximum to 2GB, which is a reasonable amount for the VM based on my expected workloads.
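
If you prefer to script this configuration, the settings in Figure 1 map to PowerShell. Here's a minimal sketch, assuming the inbox Hyper-V module that shipped with later Windows Server releases (2008 R2 SP1 itself has no built-in Hyper-V cmdlets, so there you'd use WMI or a downloadable module); the VM name is hypothetical.

Import-Module Hyper-V
# Enable dynamic memory: 512MB at startup, capped at a realistic 2GB
Set-VMMemory -VMName "Web01" -DynamicMemoryEnabled $true -StartupBytes 512MB -MaximumBytes 2GB
# Verify the resulting configuration
Get-VMMemory -VMName "Web01" | Format-List DynamicMemoryEnabled, Startup, Maximum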

Figure 1: Configuring dynamic memory for a VM

You can also see two sliders in the dialog. The first sets the percentage of memory to keep as a buffer. You set a buffer because you don't want an OS to completely run out of memory before Hyper-V starts giving it more. Adding RAM can take a few seconds, and during those seconds the VM's performance could be crippled as the guest OS starts moving memory pages to its page file to cope with the shortage. To avoid this memory starvation, you set a desired percentage of memory to always be available in the VM (20 percent by default); when the VM has less than that percentage available, more memory is added to bring it back to the desired figure (assuming the host has RAM available). If 20 percent is too little or too much for the VM's needs, you can change this setting using the slider, but 20 percent is a good value for most configurations.

The other slider sets the priority of memory allocation for when there isn't enough physical RAM available to provide the desired amounts to all the VMs. Just as with CPU allocation, VMs with higher memory priority receive additional memory before VMs with lower priority.
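
Both sliders map to parameters on the same cmdlet. The following sketch (again assuming the later inbox Hyper-V module and a hypothetical VM name) sets a 20 percent buffer and a high priority, then works through the rough arithmetic the buffer implies as described above.

# Buffer is the percentage of the VM's memory to keep available; Priority (0-100)
# decides which VMs get memory first when the host runs short
Set-VMMemory -VMName "Web01" -Buffer 20 -Priority 80

# If the guest actively uses 1,600MB and 20 percent should stay available,
# Hyper-V targets 1600 / (1 - 0.20) = 2,000MB, leaving roughly 400MB available
$demandMB = 1600
$targetMB = [math]::Round($demandMB / (1 - 0.20))
"Target allocation: $targetMB MB"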

I've said that dynamic memory intelligently allocates additional memory to a VM, and I've deliberately used the phrase available memory rather than free memory. This distinction is key, because the two are very different. Windows Vista and later OSs use all the memory they can for caching, which improves performance by preloading programs into memory. The memory used for this caching can be handed back to applications whenever it's needed, so the cache is largely still available. Looking at free memory is therefore fairly meaningless; you need to consider available memory (which includes most of the memory being used for cache), and that's exactly what dynamic memory does.
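
You can see the distinction for yourself inside a guest with the built-in performance counters; free memory excludes the standby (cache) pages that available memory counts.

# Compare free vs. available memory in the guest; the gap is mostly cache
Get-Counter '\Memory\Free & Zero Page List Bytes', '\Memory\Available MBytes' |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table Path, CookedValue -AutoSize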

An update to the Hyper-V integration services provides a new dynamic memory Virtualization Service Client (VSC) in guest OSs. The VSC communicates with its corresponding Virtualization Service Provider (VSP) in the parent partition to report the guest's use of memory, specifically its amount of available memory. Based on the amount of available memory in the guest, the desired memory buffer configured for the VM, and the amount of physical RAM available in the host, additional memory may be allocated to the guest. This type of intelligent memory allocation is only possible because of the guest OS insight provided by the dynamic memory VSC. It wouldn't be possible if the hypervisor just looked at a VM's memory use from the outside; Hyper-V couldn't tell whether the memory was being used by an application or just for disposable purposes such as pre-caching. See the sidebar for this article for more on the technologies involved. Figure 2 shows the memory each VM has been allocated and its percentage of available memory. You should see that the available memory stays pretty close to the memory buffer percentage you configured.

Figure 2: Memory configurations in Hyper-V Manager
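
The numbers Hyper-V Manager displays are also exposed on the host through the Hyper-V Dynamic Memory VM performance counter set, so you can watch them programmatically; a quick sketch:

# Run on the parent partition: current allocation and memory pressure per VM
# (pressure is roughly demand as a percentage of allocated memory)
Get-Counter '\Hyper-V Dynamic Memory VM(*)\Physical Memory',
            '\Hyper-V Dynamic Memory VM(*)\Current Pressure' |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table InstanceName, Path, CookedValue -AutoSize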

Some versions of Windows have long had the ability to hot-add memory to the OS, but dynamic memory doesn't use this capability. Hot-add was designed for the rare occasion when you'd physically add an entire module of memory to your hardware. That's very different from frequently adding memory in small amounts, and Microsoft found that memory hot-add wasn't the right solution. Instead, the integration services for Hyper-V that run inside guest OSs were enhanced with a new kernel-level memory enlightenment that communicates with the parent partition. When it's told that additional memory has been allocated, the integration services present it to the guest OS, which carries on working with its increased amount of memory. In my tests, this method has worked great.

When a VM no longer needs some of its memory, or another VM needs it more, dynamic memory uses a balloon driver to reclaim it. A balloon driver is a kernel-mode device driver, so when it asks for memory, the guest OS has to fulfill the request. Hyper-V tells the integration services to grow the balloon to a certain size; the balloon driver demands that memory from the guest OS, leaving it free for Hyper-V to reallocate. The guest OS can intelligently decide where that memory comes from, including moving the data least needed in memory to the local page file. If the guest needs more memory later (and it's available), the balloon can deflate, returning memory to the guest.
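
You can watch the balloon inflate and deflate from the host with the same counter set; the counter names below are as I recall them, so browse the set in Performance Monitor if they don't resolve on your system.

# Sample every 5 seconds for a minute while you vary the load in the guest
Get-Counter '\Hyper-V Dynamic Memory VM(*)\Added Memory',
            '\Hyper-V Dynamic Memory VM(*)\Removed Memory' -SampleInterval 5 -MaxSamples 12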

When to be Dynamic

Dynamic memory isn't the right solution for every virtual workload, but it benefits most of them. You might typically assign some services, such as domain controllers and file servers, 4GB of memory, but when you start paying attention, you could be surprised how little memory they actually use. VDI environments are another great fit for dynamic memory, because end-user machines occasionally need large amounts of memory for intensive tasks but can usually get by with far less. So when isn't it a good fit? Consider services that manage their own memory aggressively, allocating it when they start or consuming as much as is available. (SQL Server and Exchange Mailbox servers fit both of these descriptions.) If you try to use dynamic memory with these services, they'll just absorb all the memory you throw at them unless you use service-level configurations to limit the amount of RAM they can use. Generally, a static memory configuration will serve these memory-hungry services better than a dynamic one.
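
As an illustration of such a service-level cap, SQL Server's documented max server memory setting keeps its buffer pool from swallowing everything dynamic memory offers. This sketch applies it through sqlcmd; the instance name and the 2GB value are examples only.

# Cap SQL Server at 2GB so it plays nicely under dynamic memory
& sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"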

Dynamic memory doesn't mean we don't need to plan. You still need to understand the normal memory usage patterns of your virtual environments and allocate resources accordingly. What dynamic memory means is that you don't have to allocate a VM's peak memory for the entire time it's running; the VM can have more memory when it's needed and less when it's not. This lets you fit more VMs into the same physical memory, saving you money and management effort.

RemoteFX

RemoteFX is made up of three capabilities, all built on Microsoft's acquisition of Calista Technologies. Calista focused on improving the experience of presentation virtualization technologies such as Remote Desktop. One of the capabilities I'll discuss here is available for Remote Desktop Session Host (RDSH, formerly known as Terminal Services), but the two main capabilities are only available for Windows 7 VDI environments.

RemoteFX vGPU is aimed at providing consistent graphical fidelity for end users, no matter the capabilities of their endpoint device. Without vGPU, users connecting to Windows 7 VDI sessions from Windows 7 clients can enable desktop composition in the remote connection settings and get the full Aero Glass experience. Using multimedia redirection, certain types of supported media, such as WMV files, are sent raw to the client device and rendered locally, giving very smooth multimedia playback. Windows 7 clients get a good experience thanks to the local Windows 7 OS and fairly powerful local hardware. But when connecting from a basic thin client or legacy OS, users get a very different experience: basic graphics only and almost no rich media capabilities. vGPU is about equalizing the experience, giving all users the same Windows 7 VDI experience.

vGPU works using a new virtual GPU that is presented to the guest OS through an updated VMBus virtualization driver that is part of the SP1 integration services update. This vGPU uses a GPU in the host to perform the actual graphical computations. Note that you don't need one physical GPU per vGPU in a guest—one physical GPU can support multiple vGPUs in the same way that one physical processor core can be used by multiple virtual CPUs.

You have two options for meeting the GPU requirements in the host. The first is to use graphics cards that support DirectX 9.0c and 10 and have at least 256MB of RAM (although you'll want much more than that for any sizable implementation), connected via PCI Express (x16 ideally); in addition, the host's processors must support Second Level Address Translation (SLAT). In production environments you'd use specialized high-end graphics cards rather than consumer-type GPUs, although in a lab environment a more basic GPU is fine. (I use a GTX 275 with 1GB of RAM in my lab.) Note that these GPUs can be external, in the form of an appliance. Your other option is a hardware device known as an Application-Specific Integrated Circuit (ASIC), which offloads the GPU and CPU work from the host.
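
Checking SLAT support is straightforward with the free Sysinternals Coreinfo utility; its -v switch dumps the virtualization-related processor features, including second-level address translation.

# An asterisk next to EPT (Intel) or NP (AMD) means the processor supports SLAT
.\Coreinfo.exe -v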

With this vGPU, the guest OS gains advanced graphical capabilities, including Direct3D 9.0c (which is used by productivity applications such as PowerPoint 2010), Silverlight, Adobe Flash, Aero, and many others. All the rendering commands associated with the graphics on the VM's vGPU are sent via the VMBus to the host OS's render, capture, and compress component and replayed on the host GPU using off-screen memory. All rendering is done host side. The capture component then scans for changes in the off-screen rendering, compresses the changes, and sends them to the client for display. This RemoteFX graphical content is sent over a new RemoteFX Remote Desktop Protocol (RDP) virtual channel, which is why the client must support RDP 7.1 and be RemoteFX enabled. Notice that this setup doesn't use the endpoint's rendering capabilities at all, so you get the same experience no matter the capabilities of the local client (assuming it's a RemoteFX-compatible client). If the client isn't RemoteFX capable, or if it isn't connecting via LAN (a LAN connection is a requirement for RemoteFX), the client falls back to the standard RDP experience and RemoteFX vGPU isn't used.

The good news is that many clients will be able to take advantage of RemoteFX, including standard Windows clients, traditional thin clients, ultra-light thin clients with RemoteFX ASIC components for the decoding and decompression, and even monitors with the RemoteFX ASIC built in. All of these clients get exactly the same graphical experience, because all the rendering is performed host side and only screen updates are sent over the network. Remember that the target must be running Windows 7 SP1 on Hyper-V 2008 R2 SP1 with RemoteFX enabled; you can't use the RemoteFX vGPU and its associated advanced graphical capabilities on an RDSH.

While I'm talking about the graphical side of RemoteFX, I must mention a component that isn't discussed that often. RemoteFX includes a new set of intelligent codecs for encoding and decoding the graphical data sent over RDP, compressing the information. These codecs are key to the entire RemoteFX experience, delivering better visuals while using less bandwidth. If you have an ASIC in the RDSH, the codec operations still benefit: encoding and decoding are offloaded to the ASIC without using GPU capabilities.

RemoteFX USB Redirection

The other major feature of RemoteFX is USB redirection. Traditional RDP has several high-level device redirection capabilities, such as input redirection, smart card redirection, port redirection, bi-directional audio, Picture Transfer Protocol, printers (using Easy Print for driverless printing), and drive redirection. This might sound like a lot, but in reality there's still a huge number of devices that normal RDP redirection doesn't support.

RemoteFX addresses this by redirecting a device at the USB protocol level. This means far more devices can be redirected using the RemoteFX USB redirection capability, and you get support for devices such as scanners, all-in-one printers, PDAs, mobile phones, multimedia headsets, webcams, and biometric devices. You can see an example of this improved USB redirection support in Figure 3. With normal RDP redirection, I see one device that can be redirected. With RemoteFX USB redirection, I see many more.

Figure 3: Configuring USB redirection
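
On the client side, the same redirection can be requested in a saved .rdp connection file with the documented usbdevicestoredirect setting; here's a sketch (the file name is hypothetical).

# '*' redirects all supported USB devices; specific device paths can be listed instead
Add-Content -Path .\vdi-desktop.rdp -Value 'usbdevicestoredirect:s:*'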

The addition of this feature doesn't mean the high-level redirection of normal RDP is no longer needed. With RemoteFX USB redirection, you're redirecting at the USB level, so the driver for the device is redirected too and must be installed in the remote OS. The device is also no longer available on the local client, which wouldn't be good for keyboards and mice. With normal RDP redirection, the driver is needed only on the local client, and the redirected device can be used in multiple sessions, whereas RemoteFX USB redirection makes a device accessible to only one session at a time.

In the Server 2008 R2 SP1 release of RemoteFX, USB redirection and vGPU are tied together. If you can't enable vGPU on a VM, because your processor lacks SLAT support or your server doesn't have a GPU, you won't be able to use RemoteFX USB redirection. This also means RemoteFX USB redirection isn't available for RDSH-based sessions. You can expect enhancements in future versions, including a potential decoupling of these capabilities.

Because you're using an RDS component, you need RDS CALs for clients that are taking advantage of RemoteFX. Because it's tied to VDI configurations, however, most companies that could use RemoteFX already have the RDS CALs as part of a suite to enable use of other RDS components commonly used in VDI deployments.

Enabling and using RemoteFX

For Hyper-V based VMs to use RemoteFX, you have to make several configuration changes to the server, and you should be sure the server meets the RemoteFX requirements (such as a GPU and SLAT support on the processor). You must enable the Remote Desktop Virtualization Host (RDVH) role service that's part of the Remote Desktop Services role and ensure the Core Services and RemoteFX child components of RDVH are installed, as Figure 4 shows.

Figure 4: RDS components needed to enable RemoteFX
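
You can install the same role service from PowerShell with the ServerManager module; feature names vary, so list them first with Get-WindowsFeature (RDS-Virtualization is, as I recall, the RD Virtualization Host role service).

Import-Module ServerManager
# List the RDS role services and their exact feature names
Get-WindowsFeature RDS-* | Format-Table Name, DisplayName, Installed -AutoSize
# Install RD Virtualization Host with its child components (including RemoteFX)
Add-WindowsFeature RDS-Virtualization -IncludeAllSubFeature -Restart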

Once the component is installed, you'll see a new type of hardware available to add to a VM: the RemoteFX 3D Video Adapter. Add this hardware to the VM and configure it, as Figure 5 shows. Ensure the guest is running Windows 7 SP1, and then make your connection from an RDP 7.1, RemoteFX-enabled remote client.

Figure 5: Configuring RemoteFX
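
On later Windows Server releases, the dialog in Figure 5 maps to dedicated Hyper-V cmdlets; on 2008 R2 SP1 itself you'd add the adapter through Hyper-V Manager or WMI. A sketch, with a hypothetical VM name:

# Add the RemoteFX 3D Video Adapter to the VM, then size its display support
Add-VMRemoteFx3dVideoAdapter -VMName "Win7-VDI-01"
Set-VMRemoteFx3dVideoAdapter -VMName "Win7-VDI-01" -MonitorCount 2 -MaximumResolution "1920x1200"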

To allow RemoteFX USB redirection, you need to make a change for your clients, a change that's commonly made using Group Policy. Go to Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Connection Client\RemoteFX USB Device Redirection and set Allow RDP redirection of other supported RemoteFX USB devices from this computer to Enabled. Set who has RemoteFX USB redirection rights, then click OK and close the policy editor.

Remember that RemoteFX is only for LAN connections today, so check the Experience tab of the Remote Desktop Connection Client. If you choose anything other than LAN, RemoteFX will be disabled and you'll get a normal RDP experience.

SP1 is key to getting the most from your Hyper-V based virtualization infrastructure. It not only catches Hyper-V up with the competition in terms of memory density but also provides more intelligent handling of memory for virtual environments. With RemoteFX, Hyper-V based VDI environments offer an unmatched experience on any endpoint form factor that supports RemoteFX. These features definitely make SP1 something you should try out in your environment.
