Microsoft Hyper-V’s virtual Fibre Channel feature makes it possible for virtual machines to communicate with Fibre Channel storage devices, thereby allowing virtual machines to use SAN storage. Even so, configuring a VM to use Fibre Channel is different from attaching to other types of storage, such as iSCSI. As such, there are some important considerations that you must take into account before configuring a VM to use virtual Fibre Channel, including how the feature interacts with Hyper-V live migration.
In fact, one of the first things that you have to think about when you decide to use virtual Fibre Channel is how your decision will affect your ability to live migrate your virtual machines. Live migration refers to the process of moving a running virtual machine from one Hyper-V host to another without any interruption in service.
Hyper-V live migration is compatible with virtual Fibre Channel, but if a virtual machine that is configured to use virtual Fibre Channel is to be live migrated, then the destination host must also support the use of virtual Fibre Channel. Specifically, this means that every host server on which the VM could potentially reside must provide physical connectivity to the virtual machine’s storage.
Because multiple Hyper-V hosts must connect to the same underlying Fibre Channel storage, the storage must be configured to support multi-path I/O. Multi-path I/O is what allows multiple Hyper-V hosts to share the underlying physical storage. That way, when a virtual machine is live migrated to another cluster node, Hyper-V is able to maintain connectivity between the virtual machine and its storage.
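On Windows Server, the Multipath I/O feature can be installed on each Hyper-V host with a couple of PowerShell commands. A minimal sketch, assuming the ServerManager module that ships with Windows Server (a vendor-specific DSM may still be required for your particular storage array):

```powershell
# On each Hyper-V host, install the built-in Multipath I/O feature.
Install-WindowsFeature -Name Multipath-IO

# Confirm that the feature is now installed.
Get-WindowsFeature -Name Multipath-IO
```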
Multi-path I/O isn’t the only requirement for live migrating a virtual machine that uses virtual Fibre Channel. Another mechanism used to facilitate the process is the World Wide Name (WWN).
A World Wide Name is a unique identifying number assigned to a Fibre Channel device; the variant assigned to an individual port on a host bus adapter is called a World Wide Port Name (WWPN). An analogy that is often used is that a WWN works much like a MAC address, in that it allows a device to be positively identified.
So, with that in mind, imagine that each virtual host bus adapter used within a cluster of Hyper-V servers had only a single WWN. At some point during the live migration process, the VM would have to release that WWN on the source host before the destination host could use it, thereby breaking storage connectivity. Hyper-V gets around this problem by assigning two WWN address sets (commonly referred to as Set A and Set B) to each virtual host bus adapter.
At any given time, only one of the two WWN sets is actively in use on the source host. During the live migration, the destination host establishes Fibre Channel connectivity for the VM by logging in to the fabric with the alternate WWN set. At this point in the live migration process, both hosts are connected to the virtual machine’s storage. Once the storage can be accessed from the destination host, the source host releases the WWN set it was using, and subsequent live migrations alternate between the two sets. This process allows the handoff to occur without ever breaking storage connectivity.
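The two address sets are visible through the Hyper-V PowerShell module. The following is a sketch, assuming a VM with the hypothetical name SQLVM01 that already has a virtual Fibre Channel adapter attached:

```powershell
# List a VM's virtual Fibre Channel adapters, including both WWN
# address sets (Set A and Set B) that are alternated between during
# live migrations.
Get-VMFibreChannelHba -VMName "SQLVM01" |
    Select-Object VMName, SanName,
        WorldWideNodeNameSetA, WorldWidePortNameSetA,
        WorldWideNodeNameSetB, WorldWidePortNameSetB
```

By default Hyper-V generates these WWNs automatically, so in most environments there is no need to assign them by hand.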
It’s also important to consider that to use virtual Fibre Channel you will have to enable N_Port ID virtualization (NPIV). This is what makes it possible to map multiple virtual N_Port IDs to a single physical Fibre Channel port. You will have to enable NPIV for all of the physical host bus adapters used by your Hyper-V hosts, and you will also have to make sure that the physical Fibre Channel switch ports (and the SAN itself) support the use of NPIV.
Virtual Fibre Channel has been around for long enough now that this requirement shouldn’t be problematic. Depending on the hardware that you are using, however, you may find that you have to update your drivers before you will be able to use NPIV.
Finally, as a best practice, you should map your Hyper-V virtual SAN to the underlying physical fabric. In Hyper-V, a virtual SAN is essentially just a logical representation of a group of physical host bus adapter ports. The virtual SAN is created through the Hyper-V Manager’s Virtual SAN Manager. Once the virtual SAN has been created, virtual machines can be configured to use it on an as-needed basis. (It is worth noting, however, that a VM is limited to using a maximum of four virtual host bus adapters.)
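The same steps can also be scripted rather than performed in Hyper-V Manager. A minimal sketch using the Hyper-V PowerShell module; the virtual SAN name, VM name, and WWN values below are placeholders, so substitute the values reported by your own hardware:

```powershell
# Create a virtual SAN bound to a physical HBA port.
New-VMSan -Name "Fabric-A" `
    -WorldWideNodeName "C003FF0000FFFF00" `
    -WorldWidePortName "C003FF0000FFFF01"

# Attach a virtual Fibre Channel adapter on a VM to that virtual SAN.
# Hyper-V generates the adapter's WWN address sets automatically.
Add-VMFibreChannelHba -VMName "SQLVM01" -SanName "Fabric-A"

# Verify the result (remember that a VM supports a maximum of four
# virtual host bus adapters).
Get-VMFibreChannelHba -VMName "SQLVM01"
```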
Just as multiple physical servers can be linked to a common physical SAN, multiple virtual machines can share a common virtual SAN. As such, you can greatly reduce your environment’s complexity by mapping your virtual SAN to your physical environment. If you have a single fabric, for instance, then create a single virtual SAN. This approach will make it much easier to keep the infrastructure organized, and it will also make it easier to troubleshoot any problems that might occur.