
Q: What is SR-IOV?

A: SR-IOV stands for Single Root I/O Virtualization and is a PCI-SIG standard that provides native I/O virtualization for PCI Express devices. The official specification can be found at the PCI-SIG website.

Essentially, the technology allows a single PCI Express network device to present itself as multiple separate devices (all of the same type).

An SR-IOV device consists of a Physical Function (PF), which has full PCI Express configurability and is essentially the physical NIC itself, plus multiple Virtual Function (VF) objects. The VFs can't be configured individually, but they do support data movement.

They do this through their own individual transmit and receive queues and lightweight PCIe resources (such as BARs and descriptors) that enable the transmission and receipt of data, so each VF acts like a separate network adapter.
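
To make the PF/VF split concrete, here's a minimal Python sketch for a Linux host (the Hyper-V environment this FAQ describes uses its own tooling) that lists the VFs a PF exposes through sysfs. The PCI address 0000:01:00.0 is a made-up example.

from pathlib import Path

# Hypothetical PCI address of the physical function (PF); adjust for your system.
pf_dir = Path("/sys/bus/pci/devices/0000:01:00.0")

# Each virtfn* entry is a symlink from the PF to one of its VFs, which appears
# as its own PCI device with its own BARs and queues.
for link in sorted(pf_dir.glob("virtfn*")):
    print(f"{link.name} -> VF at PCI address {link.resolve().name}")

# Going the other way, each VF's sysfs directory contains a 'physfn' symlink
# pointing back to its parent PF.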

These VFs can be attached to a virtual machine (VM), giving the VM direct access to the network device. The number of VFs supported varies by adapter. For example, the Intel 82576 (1Gbps) supports eight VFs per physical port, while the 82599 (10Gbps) supports 64 VFs per port.

Although the theoretical maximum is 256 VFs per port, each VF consumes device resources (queues for data, address space, command processing, and more), so the number actually implemented in most cards is much lower than 256.
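
On a Linux host you can read how many VFs a card actually advertises, and enable some of them, through sysfs. The sketch below is a rough illustration under that assumption; the PF address is hypothetical, and writing sriov_numvfs requires root.

from pathlib import Path

pf_dir = Path("/sys/bus/pci/devices/0000:01:00.0")  # hypothetical PF address

total = int((pf_dir / "sriov_totalvfs").read_text())  # maximum VFs the card advertises
current = int((pf_dir / "sriov_numvfs").read_text())  # VFs currently enabled
print(f"VFs supported: {total}, currently enabled: {current}")

# Enable four VFs (the value must be reset to 0 before it can be changed to a
# different non-zero count).
if current != 4:
    (pf_dir / "sriov_numvfs").write_text("0")
    (pf_dir / "sriov_numvfs").write_text("4")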

The benefit of using SR-IOV over standard network virtualization is that the VM talks directly to the network adapter using Direct Memory Access (DMA). Traffic doesn't go through a virtualization transport such as the VMBus, nor is any processing performed in the management partition, because the network packets never pass through a virtual switch.

Because of this direct communication, SR-IOV attains the best performance, close to bare metal.

It's important to understand that with SR-IOV, the VM talks directly to the network adapter, as noted above. That means the VM may lose some portability: it's no longer abstracted from the physical hardware (the VM loads a VF driver), unless the hypervisor has some capability to handle moving VMs between SR-IOV-capable and non-SR-IOV-capable hardware.

To use SR-IOV, the network card, the motherboard, and the hypervisor all have to support SR-IOV for the VFs to function and be available to the VMs.
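
As a rough Linux-side illustration of those requirements (Hyper-V has its own management tooling for the equivalent checks), the sketch below tests whether the NIC advertises the SR-IOV capability and whether the platform IOMMU, which the chipset and firmware must enable, is active. The PF address is hypothetical.

from pathlib import Path

pf_dir = Path("/sys/bus/pci/devices/0000:01:00.0")  # hypothetical PF address

# NIC side: the kernel creates sriov_totalvfs only if it found the SR-IOV
# capability in the device's PCI Express configuration space.
nic_ok = (pf_dir / "sriov_totalvfs").exists()

# Platform side: IOMMU groups appear only when the chipset/firmware IOMMU
# (Intel VT-d / AMD-Vi) is present and enabled.
groups = Path("/sys/kernel/iommu_groups")
iommu_ok = groups.exists() and any(groups.iterdir())

print(f"NIC advertises SR-IOV capability: {nic_ok}")
print(f"Platform IOMMU active:            {iommu_ok}")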

 

Need more help? Check out all of the FAQs for Windows by John Savill.
