Q: How is the priority of a Dynamic Resource Scheduler (DRS) migration calculated?

A: To understand DRS migration priorities, you must first understand two of its metrics: the target host load standard deviation and the current host load standard deviation. I discussed how the current host load standard deviation is calculated in my last Q&A [LINK]. The target represents the amount of imbalance the cluster will tolerate before it begins migrating VMs between hosts.

The current host load standard deviation measures how imbalanced the cluster is as a whole. When that value exceeds the target, one or more migrations must happen to restore balance. DRS's next job, then, is to simulate a range of possible migrations to see which will have the desired balancing effect. For each simulation, DRS pretends to make the move and then calculates what the new current host load standard deviation would be.
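To see that simulation step in miniature, here is a small Python sketch. The host loads and the single-VM move are hypothetical numbers of my own, and real DRS weighs far more in its cost-benefit analysis; this only shows the "pretend the move happened, then recompute the standard deviation" idea.

```python
import statistics

def host_load_stddev(host_loads):
    # The current host load standard deviation: how unevenly
    # load is spread across the hosts in the cluster.
    return statistics.pstdev(host_loads)

def simulate_migration(host_loads, src, dst, vm_load):
    # Pretend to move one VM's load from host src to host dst,
    # then report the standard deviation that would result.
    loads = list(host_loads)
    loads[src] -= vm_load
    loads[dst] += vm_load
    return host_load_stddev(loads)
```

Moving 30 units of load from a host carrying 80 to a host carrying 20 leaves both at 50, driving the simulated standard deviation to zero, which is exactly the kind of result DRS is looking for.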

Obviously, some migrations will do far more than others to reduce the standard deviation and restore balance. As you might expect, these become your higher-priority migrations. Determining the actual priority value requires one more step, which runs the result of each simulation through the calculation below:
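As commonly published for DRS, the calculation takes the following form (the exact symbol names here are my assumption):

\[
\text{priority} = 6 - \left\lceil \frac{\textit{CurrentHostLoadStandardDeviation}}{0.1 \times \sqrt{\textit{NumberOfHostsInCluster}}} \right\rceil
\]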

The brackets in this equation are the mathematical ceiling operator, which means round up to the next integer. The result is a value between 1 and 5, corresponding to the five priority values you see in DRS's GUI. Note that priority 1 migrations are mandatory migrations that must occur due to special situations, such as an availability constraint being violated.
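To make the arithmetic concrete, here is a hedged Python sketch of the priority calculation, assuming the commonly published form of the formula and clamping the result to the 1-to-5 range the GUI displays:

```python
import math

def drs_priority(load_stddev, num_hosts):
    # Sketch of the commonly published DRS priority formula:
    #   priority = 6 - ceil(stddev / (0.1 * sqrt(num_hosts)))
    # clamped to the 1..5 range shown in the DRS GUI.
    raw = 6 - math.ceil(load_stddev / (0.1 * math.sqrt(num_hosts)))
    return max(1, min(5, raw))
```

For a four-host cluster, a small imbalance of 0.2 yields priority 5 (the least urgent), while an imbalance of 0.9 yields priority 1.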
