
5 Tips for Managing vSphere’s Distributed Resource Scheduler - 22 Jun 2011

In any VMware vSphere environment, the job of load balancing falls to vSphere’s Distributed Resource Scheduler (DRS) functionality. DRS clusters ESX and ESXi hosts together, with the goal of finding the best balance of virtual machines (VMs) across hosts. DRS is a fantastically useful solution, particularly in infrastructures with large numbers of virtual hosts. Without its automation, the constant job of monitoring and correctly placing VMs onto hosts falls to human eyes—and no human alive can accomplish the job as skillfully as a mathematical equation combined with a set of good monitors.

Although DRS at first blush feels exceptionally simple, you’ll be surprised at how many calculations really go on under its demure veneer. Its interface might look simple, but being successful with DRS requires significant effort along with a healthy dose of restraint. The effort lies in setting it up properly—get a few settings wrong, and you’ll hinder it from doing its job. The restraint lies in not inadvertently constraining its activities—constrain too far, and you might not be load balancing at all.

It’s important that you take a close look at your DRS settings, as well as a few other settings that can inadvertently cause problems. In some cases, good-faith attempts to control a cluster can in fact do more harm than good. To avoid making such a mistake, check out the following tips for successfully managing DRS.

Tip #1: Don’t Think You’re Smarter than DRS

I once bet against a fellow consultant, siding with DRS’s capabilities over his. This person believed that his manual load-balancing skills were far superior to DRS’s calculations. According to this administrator, his reason for keeping DRS’s automation level at manual was that he had checked all the counters and placed his VMs where they should be. He didn’t think DRS could improve on what he had already done—plus, he didn’t trust DRS’s automation.

With a free lunch on the line, I convinced my co-worker to switch his cluster’s automation level from manual to fully automated and set its migration threshold to apply priority 1, priority 2, and priority 3 recommendations, as Figure 1 shows. This setting, which can be found in the VMware DRS node of the cluster’s properties screen, is midway between conservative and aggressive.

Figure 1: Fully automated DRS migration threshold


We stepped away for a few hours and came back to discover nearly every VM now located on a different host. My co-worker bought lunch that day.

DRS’s three automation levels determine how much control DRS has over placing and relocating VMs. The manual mode does nothing but advise: it recommends both initial placements and migrations, then waits for you to act. The partially automated mode automatically places VMs onto hosts when they’re powered on but only recommends subsequent migrations, again waiting for you to approve them. Only the fully automated mode lets the cluster relocate running VMs on its own, based on its monitoring calculations. VMware and most experts suggest that the fully automated mode is the best selection for almost every cluster.

The biggest benefit of the fully automated mode is that the cluster is rebalanced quickly. Allowing DRS to perform actions on your behalf can help resolve performance problems before they affect users.

Tip #2: Know DRS’s Rebalancing Equations

The first tip’s suggestion doesn’t mean you should trust DRS without verifying its activities. Enabling the fully automated mode hands cluster balancing over to DRS’s cluster-wide mathematical model. This model isn’t difficult to understand, and knowing it will help you determine the best migration threshold setting. Depending on your needs, that setting might end up a touch closer to conservative or perhaps more toward aggressive.

A short primer is in order. First, be aware that DRS invokes a pass every 5 minutes. During that pass, DRS analyzes resource utilization counters across every host in the cluster. It then plugs this data into a calculation to determine whether resource use across the cluster is balanced.

The concept of cluster balance can be difficult to grasp, so a mental picture helps. Imagine a multisided table with only one leg mounted in its center. Each side of this table represents one of the hosts in your cluster. The center leg can hold the table up only when the weight on all sides is balanced.

Now imagine what happens when processor and memory utilization on one host becomes comparatively greater than on others. Unbalanced, the table starts to tip. To fix that problem and rebalance the table, DRS must migrate one or more VMs to a new host.

Before determining which VM to move, DRS must first determine whether the cluster is indeed balanced. That calculation starts by determining the load on each host: sum the entitlements assigned to every VM on that host, then divide the result by the host’s total capacity. The equation looks like the following:

Host Load = (Sum of VM Entitlements) / (Host Capacity)


In this calculation, VM Entitlements encompasses any CPU or memory reservations or limits you’ve set on VMs. Also factored in are CPU resource demand and memory working set size, which are both dynamic measurements. You can determine Host Capacity by adding up total CPU and memory resources on the host and subtracting the VMkernel overhead, Service Console overhead, and a 6 percent extra reservation. A cluster with HA and Admission Control enabled may also subtract an additional reservation that’s required to meet its high-availability goals.
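
To make the arithmetic concrete, here’s a minimal Python sketch of that per-host load calculation. All of the numbers and names are made up for illustration; real DRS derives each VM’s entitlement internally from demand, working set size, reservations, limits, and shares.

# Hypothetical sketch of DRS's per-host load calculation; all values are made up.
def host_load(vm_entitlements_mhz, host_capacity_mhz):
    # Sum of the VM entitlements on a host, divided by that host's usable capacity.
    return sum(vm_entitlements_mhz) / host_capacity_mhz

# Example: three VMs entitled to 2000, 1500, and 500 MHz on a host with
# 10,000 MHz of capacity left after VMkernel, Service Console, and other overhead.
print(host_load([2000, 1500, 500], 10000))  # 0.4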

After these steps are completed, it’s useful to know a bit of statistics for the next step. With the load of every host now calculated, it becomes possible to determine the mathematical standard deviation across all the loads. If you never took a statistics class, think of the standard deviation as a measurement of how far away the cluster’s individual loads are from the average load (and thus how far away the cluster is from being balanced). A greater standard deviation signifies a greater imbalance.
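
Continuing the sketch, the current host load standard deviation is simply the standard deviation of those per-host load values. The loads below are illustrative numbers, not figures from a real cluster:

from statistics import pstdev  # population standard deviation

# Hypothetical per-host loads produced by the previous calculation.
host_loads = [0.40, 0.35, 0.52, 0.44]

current_hlsd = pstdev(host_loads)
target_hlsd = 0.2  # the target shown on the cluster's Summary tab

print(round(current_hlsd, 3))        # roughly 0.062
print(current_hlsd <= target_hlsd)   # True, so this cluster counts as balanced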

DRS calculates these numbers for you. Figure 2 shows a screenshot from the Summary tab of an example cluster. In this cluster, the target host load standard deviation is set at less than or equal to 0.2. This value represents the greatest amount of imbalance the cluster will accept before doing something. You can also see that this cluster is experiencing a current host load standard deviation of only 0.074—which is less than 0.2, so this cluster is balanced. No VMs need to be relocated.

Figure 2: Target and current host load standard deviation

This example represents how things look when a DRS pass finds everything to be well-balanced. But what happens when the resource utilization of one or more VMs spikes? When this happens, the cluster might exceed its target host load standard deviation, and our proverbial table begins to lean. Fixing the problem requires moving VMs around to rebalance the load.

DRS’s next task is to determine which VM moves will have the largest effect on fixing the problem. To do so, DRS simulates a series of VM migrations between hosts and prioritizes each option. The best options are those with the greatest effect on rebalancing the cluster and the least risk of causing future imbalance.

The priority level of any potential move is calculated using the following equation:

Priority = 6 − ⌈(Current Host Load Standard Deviation / 0.1) × √(Number of Hosts in Cluster)⌉

The brackets in this equation represent the mathematical ceiling operator, which rounds its contents up to the next integer. Thus, a potential move in a four-host cluster that reduces the current host load standard deviation to 0.14 would have a resulting priority of 3. As you can surmise from the equation, each possible move can be assigned a priority from 1 to 5, with lower numbers signifying higher priorities and greater effects on fixing the imbalance.
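
Here’s a small Python sketch of that priority calculation as just described, reproducing the same worked example; treat it as an illustration of the formula rather than DRS’s actual code:

from math import ceil, sqrt

def move_priority(hlsd, num_hosts):
    # Lower numbers signify higher-priority moves.
    return 6 - ceil(hlsd / 0.1 * sqrt(num_hosts))

# Four-host cluster; the simulated move would leave a host load standard deviation of 0.14.
print(move_priority(0.14, 4))  # 3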

At this point, how you set your migration threshold setting becomes important (see Figure 1). In our example, the migration threshold was set to the middle option, which tells DRS to automatically apply any priority 1, 2, or 3 recommendations but ignore everything else. Suggested moves with a smaller effect on rebalancing the cluster—in this case, those with priorities 4 or 5—are ignored.

The potential move in the previous equation has a priority of 3. This value is within the configured migration threshold, so DRS will migrate the VM to its new host. This process of determining possible moves, calculating their effect, and choosing whether to invoke the vMotion migration continues until the current host load standard deviation drops below the target host load standard deviation.
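
The threshold check itself is a simple filter. In this sketch, the candidate moves and their priorities are hypothetical values standing in for the results of DRS’s simulations:

# Hypothetical candidate moves produced by DRS's simulations: (VM name, priority).
candidate_moves = [("vm-db01", 2), ("vm-web03", 3), ("vm-app07", 4), ("vm-test02", 5)]

migration_threshold = 3  # the middle setting: apply priority 1, 2, and 3 recommendations

approved = [vm for vm, priority in candidate_moves if priority <= migration_threshold]
print(approved)  # ['vm-db01', 'vm-web03']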

Be aware that priority 1 recommendations always represent special cases. These are mandatory migrations that must occur to resolve a major problem. Example problems include a host entering maintenance mode or standby mode, an affinity or anti-affinity rule being violated, or the summation of VM reservations on a host exceeding the host’s capacity.

Although you obviously want a well-balanced cluster, trying too hard for the perfect balance can actually be detrimental. Selecting too aggressive a threshold has disadvantages, because every rebalancing requires one or more vMotion migrations, with every vMotion migration consuming resources. Thus, your goal should be to find the middle ground of not necessarily completely balanced, but balanced enough.

Tip #3: Be Conservative with Constraints

vSphere is loaded—perhaps overloaded—with locations to set resource constraints. You can set rules to always locate a set of VMs on the same host or rules to always ensure they’re on different hosts. With vSphere 4.1 you can set Virtual Machines to Hosts rules, which define groups of VMs that must, must not, should, or should not run on groups of hosts.

These rules let you apply business logic to DRS’s rebalancing equations. Some obvious situations come to mind immediately—for example, a data center that relies on two virtualized Active Directory (AD) domain controllers (DCs) probably doesn’t want those servers running on the same ESX host. Losing the ESX host means losing domain services. Setting a Separate Virtual Machines rule, which can be done from the Rule node in the cluster’s properties screen, as Figure 3 shows, enforces the separation no matter how unbalanced the cluster might get. In the same location, but for different reasons, a Keep Virtual Machines Together rule might be appropriate when a set of VMs rely on each other for data communication or because of security or compliance requirements.

Figure 3: Setting VM rules


Resource allocation settings, such as shares, reservations, and limits, can also be set. These settings can be applied to entire resource pools, or they can be defined discretely for individual VMs, as Figure 4 shows. Reservations set aside a specific quantity of resources that a VM will always receive if resource contention occurs. Limits prevent VMs from consuming too many resources, regardless of whether those resources are in contention. Shares specify a relative importance among VMs, ensuring that contended resources go to the VMs whose workloads matter most to the business.

Figure 4: CPU resource allocation for a VM


Although they’re useful in certain circumstances, all these nifty features mean more boxes to check and sliders to adjust. They also give the overeager administrator plenty of opportunities to create trouble in the name of resource optimization.

The issue with these settings isn’t their efficacy; they really can stop a poorly coded application from consuming too many host processor cycles, or guarantee that a high-priority VM gets enough resources when resources are tight. Problems occur, however, when overeager administrators configure constraints that aren’t actually necessary. Just because the options exist doesn’t necessarily mean you should use them.

The reasoning behind this statement lies in how resource constraints affect DRS’s operations. Recall that DRS’s central mission is to load-balance a cluster. That load-balancing calculation starts by analyzing the resource entitlements of every VM, which include any statically set reservations or limits. Setting reservations and limits in places where they aren’t needed unnecessarily complicates DRS’s rebalancing equations (from Tip #2). Reservations and limits can reduce the total number of possible moves by eliminating those that would violate resource constraints. They can also reduce the efficacy of the remaining moves by forcing the cluster to balance itself around constraints that have no real operational justification.

Tip #4: Don’t Use Too Many or Too Few Cluster Hosts

DRS’s delicate task of balancing VMs across hosts is a lot like assigning seats for a wedding reception. Each table can seat a certain number of people, with larger tables seating larger numbers of people. You can squeeze in more people per table if you’re running out of tables—but with plenty of tables to spare, there’s more breathing room for everybody (and your wedding guests are much happier).

However, determining the exactly correct number of tables isn’t the most obvious of tasks. At one extreme, you could just rent two big tables and seat everyone. At another, you could rent a bunch of small tables, each of which only seats a few guests.

A problem occurs when you have too few or too many tables (or, following the metaphor, ESX servers). Two large-enough tables might indeed seat everyone at the party. But what happens when Uncle Bob’s extended family suddenly can’t sit near anyone associated with your wife’s cousin Jane? Rearranging the chairs with only two very full tables requires a lot of extra thinking.

Rearranging VMs among too few ESX servers is no different when resource contention occurs. With not enough places to go, DRS needs more migrations, and causes more disruption, to find the right balance. A much better situation is to ensure that plenty of ESX servers are available. More ESX servers mean more options for DRS’s load-balancing calculation when placing VMs.

You can go too far with simply adding hosts, because having too many hosts introduces a completely different problem. A DRS cluster in vSphere 4.1 can handle up to 32 hosts and 3,000 VMs. That said, a greater number of hosts and VMs means a greater number of simulations DRS must undertake to find those with the greatest impact. Because DRS passes happen on 5-minute intervals, those calculations need to happen quickly before the next pass begins. As a result, spreading hosts and VMs across multiple clusters might be a better idea.

In their book VMware vSphere 4.1 HA and DRS Technical Deepdive (CreateSpace, 2010), Duncan Epping and Frank Denneman believe the current sweet spot of hosts per cluster lies somewhere in the range of 16 to 24. This range, in their words, “[offers] sufficient options to load-balance the virtual machines across the hosts inside the cluster without introducing too many DRS threads in vCenter.”

Tip #5: Large VMs Limit Positioning

Today’s guidance for assigning resources to VMs suggests right-sizing processors and memory to exactly what the VM requires. Even though ESX can overcommit memory by using various technologies, those technologies incur overhead that can be avoided by simply giving VMs what they need in the first place. The same holds true for processors—for most use cases, the rule of thumb is to assign only a single virtual processor to each VM.

However, sometimes the need still exists to create very large VMs with large quantities of assigned memory and multiple processors. These VMs might be database servers or high-powered application servers. Whatever their workload, these large VMs add pressure to DRS’s calculations. As you can imagine, a big VM has a limited number of places to go when resources get tight: some hosts in your cluster might not have enough free capacity to take it on, and others might not have enough physical resources to host it at all. Although your workloads necessarily drive the amount of resources assigned to each VM, you need to ensure that your cluster includes enough capacity to evacuate a large VM elsewhere. Not doing so inhibits DRS from doing its load-balancing job.
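
As a rough illustration of that constraint, the following Python sketch filters a cluster’s hosts by free capacity; the host names and numbers are hypothetical:

# Hypothetical free capacity per host: (CPU in MHz, memory in MB).
free_capacity = {
    "esx01": (6000, 24000),
    "esx02": (2500, 8000),
    "esx03": (9000, 16000),
}

def candidate_hosts(vm_cpu_mhz, vm_mem_mb):
    # Hosts with enough spare CPU and memory to accept the VM.
    return [host for host, (cpu, mem) in free_capacity.items()
            if cpu >= vm_cpu_mhz and mem >= vm_mem_mb]

print(candidate_hosts(1000, 4000))   # small VM: ['esx01', 'esx02', 'esx03']
print(candidate_hosts(8000, 32000))  # large VM: [] -- DRS has nowhere to move it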

Not as Simple as It Looks

VMware has done an excellent job of masking DRS’s underlying calculations beneath a simple interface. Unfortunately, that simple interface belies the complex calculations that are really required to successfully balance a cluster. Smart administrators will pay careful attention to the monitoring data exposed inside each cluster’s properties and ensure that their clusters are carefully built to grant DRS the greatest freedom in making the right decisions. Not doing so can result in poor performance and a lack of optimization for expensive physical resources.

Don’t make the mistake of assuming that DRS can handle every situation in your environment without your help. Keeping this article’s five tips in mind will help ensure that your cluster remains healthy even as resource utilization constantly changes.
