
Navigate VMware Licensing

Don't let licensing changes blind you to vSphere 5.0's top 5 features

When VMware released vSphere 5.0 in August 2011, there was more discussion of the licensing changes—specifically, virtual RAM (vRAM) entitlement—than of the new features. In our practice, we've migrated about half our clients to vSphere 5.0, and it appears stable. The initial issues with backup support have been resolved; there are workarounds to get a proper virtual machine (VM) image backup running on vSphere 5.0. What was all the licensing fuss about, how can you best avoid it, and which features make it all worthwhile? Read on to find out.

vRAM Entitlement

One of the more controversial issues with vSphere 5.0 is the addition of a memory entitlement model. When vSphere 5.0 was first announced, the vRAM entitlements were significantly more restrictive than they are today. Because of feedback from the VMware community, VMware increased the amount of vRAM entitlement that comes with each CPU socket license.

With vSphere 4.x, licenses were limited by the number of cores per CPU socket. Of course, with the current generation of servers, the number of cores per CPU has increased significantly. You can now purchase a CPU that has 10 cores! With vSphere 5.0, VMware moved to licensing based on the amount of vRAM that comes with each CPU socket license. Table 1 shows the amount of vRAM that you get with each edition of vSphere.

For example, if you have a two-socket ESX host running vSphere Enterprise, then you have 128GB (i.e., two sockets at 64GB each) of vRAM entitlement on that host. This means that you can run VMs with a total configured memory of as much as 128GB on that host. Compliance is measured as a 12-month rolling average of the total vRAM configured on your powered-on VMs; that average must be equal to or lower than your vRAM entitlement. If you exceed the limit, VMware won't prevent a VM from running, but you'll receive VMware vCenter Server alerts informing you that you're out of compliance.

You can also pool your vRAM entitlements. For example, suppose that you have a two-host ESX cluster running vSphere Enterprise Plus, with two sockets per host. In this configuration, you need a minimum of four sockets of Enterprise Plus, giving you a total of 384GB (four sockets at 96GB each) of vRAM entitlement. It doesn't matter whether all the VMs are running on one host and the other host is idle, as long as the total configured memory of all powered-on VMs doesn't exceed 384GB.

The good news (sort of) is that the maximum amount of vRAM is capped per VM at 96GB. Therefore, the most that a single VM can count against the vRAM entitlement pool is 96GB, regardless of how much memory is configured for the VM. So a VM that's configured with 512GB of memory consumes only 96GB of the vRAM entitlement pool. Put another way, the vRAM entitlement requirement for three 1TB VMs is 288GB (three at 96GB each).
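To make the arithmetic concrete, here's a minimal sketch of the pooling math. The helper names are ours; the per-socket entitlements (32GB, 64GB, and 96GB) and the 96GB per-VM cap are the figures used in the examples above.

```python
# A minimal sketch (our own helper names) of the vRAM pooling arithmetic
# described above. Per-socket entitlements are the figures from the examples;
# a single VM never counts more than 96GB against the pool.

VRAM_PER_SOCKET_GB = {
    "Standard": 32,
    "Enterprise": 64,
    "Enterprise Plus": 96,
}

PER_VM_CAP_GB = 96


def pool_entitlement_gb(edition: str, sockets: int) -> int:
    """Total vRAM entitlement across all licensed CPU sockets."""
    return VRAM_PER_SOCKET_GB[edition] * sockets


def vram_counted_gb(configured_vm_memory_gb: list[int]) -> int:
    """vRAM that powered-on VMs count against the pool, capped per VM."""
    return sum(min(mem, PER_VM_CAP_GB) for mem in configured_vm_memory_gb)


# Two-host Enterprise Plus cluster with two sockets per host = four sockets.
pool = pool_entitlement_gb("Enterprise Plus", sockets=4)   # 384
used = vram_counted_gb([1024, 1024, 1024])                 # 288 (three 1TB VMs)
print(f"Pool: {pool}GB, counted: {used}GB, compliant: {used <= pool}")
```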

What If I Need More vRAM?

If you start to receive warnings that you have exceeded your vRAM entitlement, you have several options for increasing the vRAM entitlement pool:

  • Purchase additional licenses of the same vSphere edition.
  • Upgrade the existing vSphere edition to a higher level. Going from vSphere Standard to vSphere Enterprise gives you an additional 32GB of vRAM entitlement per socketed CPU license. Of course, if you have Enterprise Plus, this option won't work: Enterprise Plus has the largest vRAM entitlement of all vSphere editions.
  • Introduce another host (or hosts) running the same vSphere edition as the existing hosts in an ESX cluster. Be aware that if you want to maintain VMware vMotion compatibility, you'll need to get a host that has identical or nearly identical CPUs.

Note that you can't extend the amount of vRAM in the vRAM pool for vSphere 5.0 Essentials or Essentials Plus. Only vSphere 5.0 Standard through Enterprise Plus editions can be extended indefinitely, and you must purchase the same edition of vSphere to expand an existing vRAM entitlement pool.
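As a rough planning aid for the first option, here's a minimal sketch (our own helper, reusing the per-socket figures from the earlier examples) of how many additional same-edition socket licenses a given shortfall requires:

```python
import math

# A minimal sketch: additional same-edition CPU socket licenses needed to
# cover a vRAM shortfall. Per-socket entitlements as in the earlier examples.
VRAM_PER_SOCKET_GB = {"Standard": 32, "Enterprise": 64, "Enterprise Plus": 96}


def extra_licenses_needed(edition: str, pool_gb: int, counted_vram_gb: int) -> int:
    shortfall = counted_vram_gb - pool_gb
    if shortfall <= 0:
        return 0  # already within the entitlement pool
    return math.ceil(shortfall / VRAM_PER_SOCKET_GB[edition])


# A 128GB Enterprise pool (two sockets) carrying 200GB of counted vRAM
# needs ceil(72 / 64) = 2 more Enterprise socket licenses.
print(extra_licenses_needed("Enterprise", pool_gb=128, counted_vram_gb=200))  # 2
```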

If you used vSphere 4.x to deploy a virtual desktop infrastructure (VDI) solution, purchased your vSphere licenses before September 30, 2011, and have a valid support agreement, then you can upgrade to vSphere 5.0 while retaining an unlimited vRAM entitlement pool. However, you must use a separate instance of vCenter Server that manages only the VDI. Any vSphere licenses that you purchase separately to run VMware View (VMware's VDI solution) are still subject to the vRAM entitlement licensing model. (See the VMware Community post "Desktop Virtualization with vSphere 5: Licensing Overview" for more information.) And there's good news for vSphere Advanced edition owners: If you were fortunate enough to purchase vSphere 4.x Advanced and have a valid support agreement, then you're entitled to vSphere 5.0 Enterprise. There's no Advanced version of vSphere 5.0, so you are grandfathered in to vSphere 5.0 Enterprise.

Top 5 vSphere Features

Significant changes in vSphere 5.0 can improve manageability and increase the return on investment (ROI) of your virtualization infrastructure. After you've got the licensing cleared up, take a look at these top new features of vSphere 5.0 and how we use them in different environments.

#1: ESXi only. A few years ago, VMware announced that vSphere 4.1 would be the last version to include both ESX and VMware ESXi. Keeping that promise, vSphere 5.0 comes only with ESXi, which is basically ESX without the Red Hat–based Service Console and web server. ESXi has a significantly smaller footprint and is more secure than ESX. However, if you have any existing programs (e.g., a backup agent) that run in the Service Console, then you'll need to find a replacement for those services when you upgrade to vSphere 5.0.

For many of our clients, we were running Symantec Backup Exec's Remote Agent for Linux and UNIX Servers (RALUS) to obtain image backups of the VMs that were running on the host. To move to vSphere 5.0 and still get image backups of the VMs, we needed to purchase a vSphere 5.0 backup solution, such as Veeam Backup & Replication, Quest Software's vRanger Pro, or Symantec Backup Exec. Note that these backup solutions require you to back up to disk, which lets you perform a granular restore of a single file from the VM image backup file. This requirement might mean that you need to purchase additional hard disks or a NAS appliance so that you have enough room to back up the VMs to disk. Of course, we still recommend moving the image backups to some type of offline media (e.g., tape) in case you are hit with a nasty virus that could wipe out your VM-to-disk backup files.

#2: Support for as much as 1TB of memory on a VM. With most vSphere 5.0 installations, the first bottleneck that you hit is memory. vSphere 4.1 supported a VM with as much as 255GB of memory, but vSphere 5.0 now supports a VM with as much as 1TB of memory! You can actually purchase a server now that can hold 2TB of memory, but it would probably take a month to boot. (The 2TB ceiling is a vSphere 5.0 limit rather than a hardware one; even if the host has more physical memory installed, vSphere will see only 2TB.) Some of our clients have servers that require more than 255GB of memory, but none that require more than 1TB. You might not need 1TB of memory in a VM now, but it's good to know you have room for growth.

#3: As many as 32 vCPUs with Enterprise Plus. With vSphere 5.0 Enterprise Plus, you can have a VM with as many as 32 vCPUs. With the Essentials, Essentials Plus, Standard, and Enterprise editions of vSphere 5.0, you can configure a VM with as many as 8 vCPUs. In addition to the number of vCPUs on a VM, you can also specify the number of cores within each vCPU. For VM applications that aren't SMP-aware, adding vCPUs doesn't usually improve the VM's performance. However, for VM applications that are SMP-aware (e.g., Exchange Server 2010), you can make a single vCPU more powerful by configuring it with multiple cores.

Be aware that vSphere 5.0 won't let you exceed the number of physical CPU cores on the host. If you have a two-socket host with six-core CPUs, then you can allocate at most 12 cores to a VM. When you configure the vCPUs on a VM, you'll see that as you increase the number of vCPUs, the number of available cores per vCPU decreases, and vice versa. Using the previous example of a 12-core host running vSphere 5.0 Enterprise Plus, a VM can have as many as 12 vCPUs with 1 core each, 1 vCPU with 12 cores, or any valid combination in between. The ability to specify both the number of vCPUs and the cores within each vCPU gives vSphere 5.0 administrators a way to fine-tune VM performance that wasn't readily available in vSphere 4.1.
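If it helps to see the combinations, here's a minimal sketch (our own function, not a VMware tool) that enumerates the vCPU-and-core layouts allowed under the constraints described above, assuming the per-edition vCPU limits listed earlier:

```python
# A minimal sketch (not a VMware tool) that enumerates allowed vCPU layouts
# for a single VM: the total virtual cores can't exceed the edition's per-VM
# vCPU limit or the number of physical cores on the host.

EDITION_VCPU_LIMIT = {
    "Essentials": 8,
    "Essentials Plus": 8,
    "Standard": 8,
    "Enterprise": 8,
    "Enterprise Plus": 32,
}


def valid_layouts(edition: str, host_physical_cores: int):
    """Yield (vCPUs, cores per vCPU) pairs allowed for one VM."""
    ceiling = min(EDITION_VCPU_LIMIT[edition], host_physical_cores)
    for vcpus in range(1, ceiling + 1):
        for cores in range(1, ceiling + 1):
            if vcpus * cores <= ceiling:
                yield vcpus, cores


# Two-socket, six-core host (12 physical cores) with Enterprise Plus:
# includes (12, 1), (1, 12), (2, 6), (6, 2), and everything in between.
for layout in valid_layouts("Enterprise Plus", host_physical_cores=12):
    print(layout)
```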

#4: Support for VMFS5. With VMware VMFS3, you can have a maximum extent size of just less than 2TB. If you create an extent that is exactly 2TB, you might have difficulty seeing the partition on the ESX host. With VMFS3, you can have as many as 32 extents per storage group, so the largest storage group that you can configure is 64TB. However, we maintain a one-to-one extent-to-storage group relationship to simplify storage management. With VMFS3 you had to plan for the largest *.vmdk file or VM disk that you wanted to store on the storage group, and then format the storage group with the appropriate block size. Some ESX administrators always format their storage groups with an 8MB block size, to ensure that they can always have the largest *.vmdk file. The default block size with VMFS3 is 1MB, so the largest *.vmdk file you can have is 256GB. The relationship between VMFS3 block size and maximum *.vmdk file is shown in Table 2.
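As a quick sketch of the relationship that Table 2 illustrates, the maximum *.vmdk size scales linearly with the VMFS3 block size (256GB per 1MB of block size, which is the commonly documented VMFS3 behavior); the helper below is ours:

```python
# The VMFS3 block-size relationship described above: each 1MB of block size
# buys roughly 256GB of maximum *.vmdk size, so only an 8MB block size
# allows a ~2TB virtual disk.

def max_vmdk_gb(block_size_mb: int) -> int:
    """Approximate maximum *.vmdk size (GB) for a given VMFS3 block size (MB)."""
    return 256 * block_size_mb


for block_mb in (1, 2, 4, 8):
    print(f"{block_mb}MB block size -> max *.vmdk ~{max_vmdk_gb(block_mb)}GB")
```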

With VMFS3, you never want to create a *.vmdk at the maximum size that the storage group allows, or you won't be able to take a snapshot of the VM. Creating a snapshot adds a small amount of overhead on top of the base *.vmdk, so if you're already at the maximum size, the snapshot creation will fail.

vSphere 5.0 supports VMFS5, which has significant improvements over VMFS3:

  • Maximum storage group size—With VMFS5 partitions, the maximum extent size is 64TB.
  • Block size—By default, VMFS5 partitions are formatted with a 1MB block size, but you can still have a maximum *.vmdk size of 2TB. In other words, you don't need to worry about the relationship between block size and maximum *.vmdk size anymore.
  • VMFS5 sub-blocks—Sub-block size in VMFS5 has been reduced from 64KB to 8KB, so storage of smaller files takes up less space on the VMFS5 partition.
  • File limit—The file limit has increased from 30,720 files with VMFS3 to more than 130,000 files with VMFS5.

You can perform a nondestructive upgrade of an existing VMFS3 partition to VMFS5, but you will retain the existing block size of the VMFS3 partition, sacrifice the benefits of the smaller sub-blocks, and still have a file limit of roughly 30,000 files. The best practice is to create a new storage group that's formatted with VMFS5 and then use VMware Storage vMotion to migrate the VMs to the new storage group. Even though an in-place migration to VMFS5 is supported, it's a risky operation and can have potentially catastrophic results if something goes wrong.

#5: SSD storage groups. vSphere 5.0 can recognize a storage group that comprises solid state disks (SSDs) and can use it for memory swapping. As a general rule, we avoid using the memory overcommit feature in vSphere because it can hurt the performance of the VMs that are running on the host. But what if you need to run more VMs on an ESXi host that already has the maximum amount of memory installed? SSD storage groups might be an option. Although they aren't as fast as native memory installed on the host, SSDs are still significantly faster than non-SSD storage. Enterprise SSDs are still expensive: around $3,400 for a 200GB mainstream drive and $7,000 for a 200GB enterprise performance drive. However, this might be a cost-effective solution when you're faced with replacing multiple ESXi hosts in a cluster. (Of course, there are drawbacks to this approach as well; the best arrangement depends on your specific situation.)

The other place we're using SSD drives on ESXi is for applications that require fast disk performance. One of our clients has a legacy ERP application that runs only on Microsoft SQL Server 2000. SQL Server 2000 supports a maximum of 4GB of memory on the server, 2GB of which is available to SQL Server. If the client could run a later version of SQL Server on the x64 platform, we would simply load the server with memory and cache everything. Because this isn't an option and the client still needs the scalability, we're going to place the SQL Server 2000 database on SSD storage to give the application significantly better performance.

Migrate Now for Improved ROI

There are numerous improvements in vSphere 5.0, and most deployments should be able to fit within the confines of the vSphere 5.0 vRAM entitlement model -- although some vSphere users might need to purchase additional licenses. We suggest migrating to vSphere 5.0 today to get better ROI on your virtualization infrastructure investment.
