Migration of virtual machine storage between Windows Server 2012 Hyper-V hosts

Windows Server 2012 Storage Live Migration

Become a migration master

Windows Server 2012 brought new levels of mobility to the virtual environment. This mobility extends beyond the previous live migration capability, which was limited to migration within a cluster with shared storage. Windows Server 2012 introduces migration of virtual machines (VMs) between any Windows Server 2012 Hyper-V hosts, standalone or clustered. The cluster is no longer a mobility boundary, so enterprises have complete flexibility. Often the requirement is to move not the VM but rather its storage, something that was possible prior to Windows Server 2012 only after shutting down the VM.

Windows Server 2012 supports three main types of storage for VMs: DAS; SAN-based (typically connected via Fibre Channel or iSCSI); and—new to Windows Server 2012—support for Server Message Block (SMB) 3.0 file shares, such as those hosted on a Windows Server 2012 file server or any NAS/SAN that has SMB 3.0 support. Windows Server 2012 storage live migration allows the storage used by a VM, including the VM's configuration and virtual hard disks (VHDs), to be moved between any supported storage, with zero downtime to the VM. Migration to a different folder on the same disk, between LUNs on the same SAN, from DAS to SAN, from SAN to an SMB file share—if the storage is supported by Hyper-V, then VMs can be moved with no downtime. Note that storage live migration can't move non-virtualized storage, so if a VM is using pass-through storage, then it can't be moved. The good news is that with the new VHDX format (which allows 64TB VHDs), there's no reason to use pass-through storage, from either a size or performance perspective.

The ability to move the storage of a VM at any time, without affecting the availability of the VM, is vital in two key scenarios:

  • The organization acquires new storage, such as a new SAN, or is migrating to a new SMB 3.0 appliance and needs to move VMs as part of a planned migration effort.
  • The storage in the environment is out of space or can't keep up with the I/O operations per second (IOPS) requirements, and VMs need to be moved as a matter of urgency. In my experience, this scenario is the most common.

How Storage Live Migration Works

The mechanics behind Windows Server 2012 storage live migration are quite simple but provide an optimal migration process. Remember that the VM isn't moving between hosts (although you can use shared-nothing live migration to accomplish that); only the storage moves from a source location to a target location.

Storage live migration uses a one-pass copy of VHDs. The pass works as follows:

  1. Storage live migration is initiated from the GUI or Windows PowerShell.
  2. The copy of the source VHDs, smart paging file, snapshots, and configuration files to the target location is initiated.
  3. When the copy initiates, all writes are performed on the source and target VHD through a mirroring process in the virtual storage stack.
  4. After the copy of the VHDs is complete, the VM is switched to use the VHDs on the target location. (The target is up-to-date because all writes are mirrored to the target while the copy is in progress.)
  5. The VHDs and configuration files are deleted from the source.

The actual storage live migration process is managed by the Virtual Machine Management Service (VMMS) in the parent partition. However, the heavy lifting of storage live migration is performed by the VM's worker process and the storage virtualization service provider in the parent partition. The mechanism for the storage copy is just an unbuffered copy operation, plus the additional I/O on the target for the mirroring of writes during the copy. In reality, the additional I/O for the ongoing writes is negligible compared with the main unbuffered file copy. The path used is whichever path exists to the target: iSCSI or Fibre Channel for a SAN target, or whichever network adapter or adapters have a path to the share for SMB. Any underlying storage technologies that optimize performance are fully utilized. If you're copying to or from SMB and using NIC Teaming, SMB Direct, or SMB Multichannel, then those technologies will be used. If you're using a SAN that supports Offloaded Data Transfer (ODX) and you're moving a VM within a LUN or between LUNs, then ODX will be used, meaning that the move will place almost no load on the host and will complete very quickly.

The SAN ODX scenario is the best case. For all other situations, it is important to realize exactly what an unbuffered copy means to your system. The unbuffered copy is used because during storage live migration, you don't want to use a large amount of system memory for caching of data on a virtualization host.

Performing a copy can cause a significant amount of I/O load on your system, for both reading the source and writing to the target. To get an idea, try manually performing an unbuffered copy on your system by using the Xcopy command with the /J switch. This creates a load similar to what a storage live migration would inflict on your system (again, the ongoing mirrored writes are negligible). For example, consider moving a VM between folders on a local disk, which is likely to be a worst-case scenario. The data would be read from and written to the same disk, causing a huge amount of disk thrashing; it would likely take a long time and would adversely affect any other VMs that use that disk. If the source and target are different storage devices, the additional load won't be as severe as in a local move but must still be considered.
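To try this yourself, run an unbuffered copy of a large file and watch the disk counters while it runs. The paths below are examples only:

```powershell
# Unbuffered copy (/J) of a large file; this approximates the I/O load
# that a storage live migration of that VHD would generate.
xcopy C:\VMs\Test\disk.vhdx D:\Temp\ /J

# In another window, watch the physical-disk throughput while the copy runs.
Get-Counter '\PhysicalDisk(_Total)\Disk Bytes/sec'
```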

There is nothing Hyper-V–specific about the disk I/O that moving a VM causes; it is the same as for any data-migration technology (although other technologies might not have capabilities such as ODX when a SAN is involved). Ultimately, the data must be read and written. This doesn't mean that you shouldn't use storage live migration, but it does mean that you should plan carefully when you use it.

You probably won't want to perform the migration during normal working hours because of the possible adverse effect to other loads. I suspect this is why no automated storage live migration process is part of the Dynamic Optimization in System Center Virtual Machine Manager (VMM) 2012, which rebalances VMs within a cluster. If you detect a large I/O load on a storage subsystem in the middle of a weekday, the last thing you want to do is add a huge extra load by trying to move things around. The best option is to track I/O over time, then move the VM's storage at a quiet time—a task that is easy to script with PowerShell or to automate with technologies such as Microsoft System Center Orchestrator 2012.
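As a sketch of that track-then-move approach (the sampling interval, log path, VM name, and target path are all illustrative assumptions), you could log disk throughput during the day and then run the move from a scheduled task in the quiet window you identify:

```powershell
# Sample physical-disk throughput every 5 minutes for 24 hours (288 samples)
# and save it to a log, so you can identify quiet windows.
Get-Counter -Counter '\PhysicalDisk(_Total)\Disk Bytes/sec' `
            -SampleInterval 300 -MaxSamples 288 |
    Export-Counter -Path C:\Logs\DiskLoad.blg -FileFormat BLG

# Later, from a scheduled task that runs in the identified quiet window;
# 'Web01' and the target path are example values.
Move-VMStorage -VMName 'Web01' -DestinationStoragePath 'E:\VMs\Web01'
```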

Configuring Storage Live Migration

If you have installed the Hyper-V role on your server, you are all done. No specific configuration is needed to use storage live migration; it just works. As previously stated, storage live migration uses whichever path exists to communicate with the source and target storage, and it is enabled by default (in fact, you can't disable it). The only configuration is that you can set how many simultaneous storage live migrations are allowed. To do so, use the Hyper-V Settings action. In the Storage Migrations area, set the desired Simultaneous storage migrations number, as Figure 1 shows.

Figure 1: Setting the Number of Simultaneous Storage Live Migrations

You can also configure this setting by using PowerShell:

Set-VMHost -MaximumStorageMigrations <number>
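For example, you can check the current limit and then raise it (4 here is just an example value):

```powershell
# Show the host's current simultaneous storage migration limit.
Get-VMHost | Select-Object Name, MaximumStorageMigrations

# Allow up to four simultaneous storage live migrations.
Set-VMHost -MaximumStorageMigrations 4
```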

You only need extra configuration if you're using SMB storage for the migration target and are initiating the migration remotely, either through Hyper-V Manager or PowerShell. In other words, you aren't running the tools on the actual Hyper-V host. This type of remote management is the preferred approach for Windows Server 2012; all management should be performed remotely, using PowerShell or from a Windows 8 machine.

When you configure SMB storage for use with Hyper-V, you need to set several specific permissions, including giving administrators Full Control, because the administrator's credential is used to create a VM on SMB or to move a VM to SMB as part of a storage live migration. As I explain in the article "Shared-Nothing VM Live Migration with Windows Server 2012 Hyper-V," remotely initiating a shared-nothing live migration requires the configuration of Kerberos constrained delegation on each Hyper-V server. The Microsoft Virtual System Migration Service requires this configuration because, by default, a Windows server cannot pass a credential that is being used on the server to another server. Doing so would generally be bad from a security perspective, but it is exactly what we need here and is acceptable in this specific, scoped context:

  1. The administrator initiates the storage live migration remotely through Hyper-V Manager or PowerShell remoting. The administrator's current credential is passed to the host that is performing the action, or a specific credential may be passed, if you're using PowerShell.
  2. The server performing the storage live migration must then connect to the SMB share and create files. To do so, it needs to use the administrator's credential. However, doing so would be passing on the credential (aka delegation), which is not allowed by default.

To enable this scenario, you must enable Common Internet File System (CIFS) constrained delegation for each Hyper-V server to the various SMB file servers. This task is a simple one:

  1. Launch Active Directory Users and Computers.
  2. Navigate to your Hyper-V servers, right-click one, and choose Properties.
  3. Choose the Delegation tab.
  4. Make sure that the Trust this computer for delegation to specified services only and Use Kerberos only options are selected.
  5. Click Add.
  6. Click Users or Computers, choose your SMB file servers, and click OK.
  7. In the list of available services, select cifs for each server, and click OK, as Figure 2 shows.
Figure 2: Enabling Kerberos Constrained Delegation to the File Servers for CIFS
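If you prefer to script the delegation rather than use Active Directory Users and Computers, the same entries can be written with the ActiveDirectory module. The host name HV01 and file server name FS01 are hypothetical:

```powershell
# Requires the ActiveDirectory RSAT module; HV01 and FS01 are example names.
Import-Module ActiveDirectory

# Allow Hyper-V host HV01 constrained delegation to the cifs service on FS01.
# Leaving TrustedToAuthForDelegation at its default ($false) keeps the
# "Use Kerberos only" behavior that the GUI steps above configure.
Set-ADComputer -Identity 'HV01' -Add @{
    'msDS-AllowedToDelegateTo' = @('cifs/FS01', 'cifs/FS01.contoso.com')
}
```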

You can now remotely trigger storage live migrations, even to SMB storage.

Performing Storage Live Migration

Now that the environment is ready for storage live migrations, all that is left is to perform them. Storage live migrations can be triggered through Hyper-V Manager or through PowerShell. You have two options when performing a storage live migration. You can move everything to one location, or you can choose different locations for each item that is stored as part of a VM (i.e., one location for the configuration file, one for the snapshots, one for smart paging, one for VHD1, one for VHD2, and so on), as Figure 3 shows.

Figure 3: Selecting Items to Move

This is not a problem when using graphical tools but adds an interesting aspect when using PowerShell.

Start by using Hyper-V Manager to perform the move. Doing so helps you understand the available options:

  1. Launch Hyper-V Manager.
  2. Choose the VM with the storage that needs to be moved and choose the Move action.
  3. Click Next to proceed to the Before You Begin page of the wizard.
  4. Choose the Move the virtual machine's storage option (since you are only moving the storage).
  5. You can now choose to move all the VM's data to a single location, which is the default, or to move the data to different locations, or to move only the VHDs for the VM. Make your selection and click Next.
  6. If you chose the default (moving everything to a single location) you are prompted for the new storage location; specify it, and then click Next. If you chose either of the other options, you are shown a separate page on which you must select the target location for each element of the VM's data; set the location for each item, and then click Next.
  7. Review your options and click Finish to initiate the storage live migration.

To perform the storage live migration from PowerShell, use the Move-VMStorage cmdlet. If you're moving everything to a single location, simply pass the VM name and the new target location with the DestinationStoragePath parameter. (Note that a subfolder with the VM name is not created automatically. If you want the VM in its own subfolder, you need to specify that as part of the target path.) Here's an example:

Move-VMStorage -VMName <VM name> -DestinationStoragePath <target path>

If you want to move separate data to different locations, the process is more complicated. Instead of using DestinationStoragePath, use the SmartPagingFilePath, SnapshotFilePath, and VirtualMachinePath parameters to pass the location for the smart paging file, snapshots, and VM configuration, respectively. For the VHDs, use the Vhds parameter. However, a VM can have more than one VHD (in fact, it can have hundreds of them), and PowerShell doesn't like an arbitrary number of parameters. Therefore, to pass the VHDs' new locations, you need to create a hash table containing the SourceFilePath and DestinationFilePath for each VHD, and then place the hash tables into an array, which is passed to the Vhds parameter. Pleasant!

The following example moves a VM with three VHDs, a smart paging file, configuration, and snapshots. Note that you don't need to move all elements of a VM; specify only the pieces that you want to move, and any unspecified elements stay in their current location. Also note that in the example, the hash tables (key/value pairs) use curly brackets { }, whereas the array uses parentheses ( ).

Move-VMStorage -VMName <VM name> -SmartPagingFilePath <path> `
    -SnapshotFilePath <path> -VirtualMachinePath <path> `
    -Vhds @(
        @{"SourceFilePath" = "C:\vm\vhd1.vhdx"; "DestinationFilePath" = "D:\VHDs\vhd1.vhdx"},
        @{"SourceFilePath" = "C:\vm\vhd2.vhdx"; "DestinationFilePath" = "E:\VHDs\vhd2.vhdx"},
        @{"SourceFilePath" = "C:\vm\vhd3.vhdx"; "DestinationFilePath" = "F:\VHDs\vhd3.vhdx"})
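After a move completes, you can confirm where the VM's files now live; the VM name 'Web01' is an example:

```powershell
# List the new VHD locations for the VM.
Get-VMHardDiskDrive -VMName 'Web01' | Select-Object Path

# Show where the VM's configuration now resides.
(Get-VM -Name 'Web01').ConfigurationLocation
```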

When the storage live migration is initiated, it runs until it's finished, no matter how long that might take. As the administrator, you can cancel the storage live migration manually by using the Cancel move storage action. Rebooting the Hyper-V host also cancels all storage live migrations. You can see the progress of storage live migrations in the Hyper-V Manager tool or by querying them through Windows Management Instrumentation (WMI):

PS C:\> Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_MigrationJob | ft Name, JobStatus, PercentComplete, VirtualSystemName

Name           JobStatus      PercentComplete VirtualSystemName
----           ---------      --------------- -----------------
Moving Storage Job is running 14              6A7C0DEF-9805-4242-92F9-98E6F...

Migrate Responsibly

Storage live migration is a great new capability for Hyper-V, if you use it wisely. The feature gives organizations new flexibility in implementing new storage without affecting the availability of services. You can even use it to rebalance storage subsystems with uneven loading—but be sure to plan your migrations to minimize I/O impact.
