
Configuring HA in VMware vSphere 4.1

vMotion is a VMware vSphere capability that gets a lot of attention. Using vMotion, a running virtual machine (VM) can live migrate between hosts. Live migration lets you rebalance VM loads across a set of hosts, or empty a host before shutting it down, rebooting it, or performing maintenance.

All of these things happen before the failure. If you know that a host is about to have a problem, or if you know you're about to perform some maintenance, vMotion can ride to the rescue. But what do you do when a host just dies? That's when another VMware feature, high availability (HA), comes in handy. Commonly (but technically incorrectly) associated with vMotion, HA is a somewhat different protection you can set up to quickly resurrect VMs after a host failure.

With HA, VMs that fail on one host automatically start up on another. It's important to recognize that an HA event generally starts with the loss of a host, and with that host failure comes the unexpected loss of its VMs. In short, vSphere won't invoke HA until after your VMs are already down.

To Get HA, Start with DRS

Getting to HA starts with the creation of a vSphere cluster. Even if you aren't licensed for the Distributed Resource Scheduler (DRS) feature, you'll need to go through this cluster-creation process to get going.

Both HA and DRS are separately licensed features of VMware vSphere. In order to use either, you'll need to ensure that you have the correct licensing for the hosts that participate in a cluster. Figure 1 shows an example of what you might see if you select an ESX host in the left pane, select its Configuration tab, and then click the link to display Licensed Features. Notice that VMware identifies licensed features for capabilities like VMware HA and VMware DRS on a host-by-host basis. Thus, depending on how you've purchased licenses in the past, you may have some hosts with and others without this capability.

Figure 1: Licensed features for an ESX host.

Many people don't realize that HA and DRS are licensed at different editions of vSphere. HA is available in vSphere Standard and Essentials Plus, which are low-cost editions of the software (although not its lowest-cost editions). DRS isn't included until you move up the scale to the vSphere Enterprise edition.

A cluster corresponds to a set of hosts where resources, such as processing power and memory, have been gathered together to create a pool. Clusters provide a boundary of resource administration, aggregating resources across multiple hosts. They also provide a boundary for DRS load balancing, creating a hard line around a collection of hosts within which vSphere can load balance VMs. That hard line also defines the hosts that HA can use to relocate VMs after their original host fails.

To create a cluster, right-click a datacenter in the vSphere Client. Choose New Cluster and, as Figure 2 partially shows, the New Cluster Wizard will appear. You'll see that each cluster needs a unique name, and you're given the option to enable either or both of the two cluster features. In this example, I'll enable HA but leave DRS for a future article.

Figure 2: Creating a Cluster
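If you'd rather script this step than click through the wizard, the same cluster can be created programmatically. The following is a minimal sketch using the open-source pyVmomi Python SDK; the vCenter address, credentials, and cluster name are placeholders, and depending on your environment you may also need to handle SSL certificate validation.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (placeholder host and credentials)
si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]   # first datacenter in the inventory

# Enable VMware HA (the "das" config) and host monitoring on the new cluster
ha_spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        enabled=True,             # turn on VMware HA
        hostMonitoring='enabled'  # hosts exchange network heartbeats
    )
)
cluster = datacenter.hostFolder.CreateClusterEx(name='Production', spec=ha_spec)
Disconnect(si)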

 

Selecting the checkbox next to HA adds three wizard screens that require configuration. Many administrators are confused by these screens, because the correct configuration isn't always the most obvious one. Some settings have implications for how your cluster operates after a failure, and others, if configured incorrectly, will create more problems than they solve after a host goes down.

Enforcing Admission Control

Figure 3 shows the first configuration screen, which provides options for host admission control. The first setting, Host Monitoring Status, determines whether the ESX hosts in the cluster exchange network heartbeats. These heartbeats are how the cluster identifies whether or not a host is still running, so you'll generally want to leave this checkbox selected; host monitoring is necessary for HA to recognize when hosts fail.

Figure 3: Admission control

The second setting, Admission Control, determines whether the cluster will let you power on VMs when there wouldn't be enough resources left to restart them elsewhere after a host failure. A well-designed cluster always contains enough spare resources so that any host can crash and its VMs can still power on atop the surviving cluster hosts, but this isn't always the case: you might have too little hardware or too many VMs. Your Admission Control setting should depend on the availability and performance needs of your VMs. It's easiest to explain this with an extreme example. Think about a cluster that has four hosts, such as the one in Figure 4. In this cluster, all four hosts are running VMs, so many that each host is fully loaded, at 100 percent utilization.

Should Host Number 2 fail, as it has in Figure 4, HA's job is to relocate its VMs to the remaining three hosts wherever spare capacity exists. In this case, however, the cluster's remaining three hosts are already at 100 percent utilization. By adding Host Number 2's VM load to the remaining three hosts, HA would create a situation in which the performance of every VM suffers, which is not a great result. The loss of a single host in a poorly designed cluster can cause much bigger problems than just losing that host. If you want HA, your cluster must always be built with some spare capacity that remains unused in case a host fails.

Figure 4: An example cluster with one host failure
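Some quick, back-of-the-envelope Python arithmetic shows why this scenario hurts. The numbers below simply restate the Figure 4 example: four equally sized hosts, all fully loaded, and one failure.

hosts = 4
utilization = 1.0                        # every host at 100 percent

total_demand = hosts * utilization       # 4.0 "hosts' worth" of VM load
surviving_capacity = (hosts - 1) * 1.0   # only 3.0 hosts' worth remains

overcommit = total_demand / surviving_capacity
print(f"Surviving hosts must carry {overcommit:.0%} of their capacity")   # 133%

Every surviving host would need to carry a third more load than it has capacity for, which is exactly the over-commitment admission control is designed to prevent.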

Look at the options for Admission Control in Figure 3. If you disable Admission Control, VMs will be powered on even if there aren't enough resources. This sounds like it would always be a bad idea, but a situation could exist where you'd want this. For example, maybe you can't afford the additional hardware the cluster needs to support failover, but you must ensure VMs are restarted, even if performance suffers. In this case, disabling Admission Control trades performance for availability. Most IT pros don't want to be in this situation, however, which is why Admission Control is enabled by default.

You probably want both performance and availability, and you have the budget to ensure you've got enough hardware lying around. If this describes your situation, you'll be enabling Admission Control. Admission Control lets the cluster manage how many of your resources must be kept in reserve, freeing you from constantly measuring available resources against what your VMs need. It will also tell you when its resources have all been assigned, either to VMs or to reserve capacity. Figure 5 shows the three policies at your disposal.

Figure 5: Admission Control Policy

The first policy is Host failures cluster tolerates. Selecting this policy instructs the cluster to reserve an amount of resources equal to the specified number of hosts. By setting this value to one, as I did in the example, your cluster sets aside a quantity of resources equal to its most powerful host. By doing this, your cluster will always be assured that it can fail over VMs when a host fails.

(As a side note, this process of identifying resources uses a calculation involving "slots," which are logical representations of memory and CPU resources. A deeper discussion on slot calculations is out of scope for this article, but you can learn more about how they are calculated by taking a look at Duncan Epping's excellent explanation.)
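As a simplified illustration of the slot idea only (this is not vCenter's exact algorithm; see Duncan Epping's write-up for the real rules, and the reservation and host numbers below are made up), a slot is sized by the largest CPU and memory reservations among powered-on VMs, and each host's slot count is how many of those slots fit into its capacity:

# Largest CPU and memory reservations among powered-on VMs define the slot size
vm_reservations = [(256, 1024), (500, 2048), (256, 512)]   # (MHz, MB) per VM
slot_cpu = max(cpu for cpu, mem in vm_reservations)        # 500 MHz
slot_mem = max(mem for cpu, mem in vm_reservations)        # 2048 MB

# Each host's slot count is limited by whichever resource runs out first
host_capacity = [(12000, 32768), (12000, 32768), (8000, 16384)]   # (MHz, MB)
slots_per_host = [min(cpu // slot_cpu, mem // slot_mem) for cpu, mem in host_capacity]
print(slots_per_host)   # [16, 16, 8]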

An important point about cluster size is that setting a failover reserve makes smaller clusters suffer more "waste." Setting this value to one sets aside one entire server's contribution of resources as unusable except in case of host failure. Figure 6 shows why this is both good and bad. It's good because the loss of Host Number 4 means its VMs always have a place to relocate. It's bad because your four-host cluster now functionally operates as a three-host cluster.

Figure 6: Reserved resources equal to one host

Increasing the number of hosts in a cluster reduces the overall percentage of waste. That four-host cluster must reserve 25 percent of itself for failover, but setting aside one host in a 10-host cluster requires only 10 percent of the cluster, and with 20 hosts, the reserve is only 5 percent.
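You can verify that arithmetic in a couple of lines of Python; the reserve for a one-host failover level is simply one host's share of the cluster:

for hosts in (4, 10, 20):
    print(f"{hosts:2d} hosts -> {1 / hosts:.0%} of the cluster held in reserve")
# 4 hosts -> 25%, 10 hosts -> 10%, 20 hosts -> 5%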

You don't have to set aside that full percentage—that's one reason you can choose the Percentage of cluster resources reserved as failover spare capacity setting. Rather than setting aside a certain number of hosts' resources, the second policy identifies a percentage of overall cluster resources to reserve.

Let's return to the extreme example from above. That four-host cluster's percentage should be set to 25 percent to protect every VM. But you might not care about protecting every VM, because some VMs just aren't that critical. Should you lose a host, these less-important VMs can stay powered off until the problem is fixed. This reduces how many cluster resources you'll need to reserve. Consider using Percentage of cluster resources reserved as failover spare capacity if this situation describes your environment. Also use this setting if you want more exact control over the percentage of resources to reserve.

You'll generally choose one of these first two Admission Control policies. Remember that with the first option, your cluster will always maintain the correct quantity of resources to be held in reserve, even as you add hardware over time. With the percentage option, on the other hand, you'll probably need to make adjustments as you add more hardware. Remember that percentages decrease as the number of hosts goes up and adjust your configured percentage as you add hardware.

The third setting, Specify a failover host, is rarely used. This setting uses no dynamic management and instead allows you to select a specific host that will always remain in reserve for failover—effectively telling the cluster never to use that host during normal operations. This option isn't generally the best configuration because it forbids the cluster from balancing its reserve internally and across multiple hosts.
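For reference, all three policies are also exposed through the vSphere API. The sketch below, again using pyVmomi and assuming the cluster object from the earlier example, shows roughly how each wizard choice maps to an admission control policy object; only one policy would actually be applied.

from pyVmomi import vim

# Policy 1: Host failures cluster tolerates (reserve one host's worth of slots)
tolerate_one_host = vim.cluster.FailoverLevelAdmissionControlPolicy(failoverLevel=1)

# Policy 2: Percentage of cluster resources reserved as failover spare capacity
reserve_percentage = vim.cluster.FailoverResourcesAdmissionControlPolicy(
    cpuFailoverResourcesPercent=25,
    memoryFailoverResourcesPercent=25,
)

# Policy 3: Specify a failover host (pass a list of HostSystem objects)
# dedicated_host = vim.cluster.FailoverHostAdmissionControlPolicy(failoverHosts=[host])

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        admissionControlEnabled=True,
        admissionControlPolicy=tolerate_one_host,   # pick exactly one policy
    )
)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)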

 

Setting VM Options

Figure 7 shows the second HA configuration screen, Virtual Machine Options. It has two settings that define the behavior of VMs during specific failure situations. These both represent overall policy settings. You can adjust individual per-VM settings for each after the cluster is created.

Figure 7: Virtual Machine Options

The VM restart priority setting can be set to Low, Medium, or High. Recall that in an HA situation, VMs will be restarted on the surviving cluster hosts after a failure. When this happens, you may want certain VMs to restart before others, in case you run out of resources. Configuring this setting defines the default for all VMs; after you've created the cluster, you can adjust each VM's restart priority inside the properties of the cluster. VMs set to High will be restarted before those set to Medium or Low.

The other setting on this wizard screen, Host Isolation response, requires careful consideration. Remember that a cluster is considered healthy when each of its hosts can communicate with the others. Should a cluster node fail, the others will recognize that the failed host is no longer sending a heartbeat, and the cluster will attempt to use HA to restart its VMs on the surviving hosts.

But what if a host hasn't actually failed, but has instead lost its network connectivity with the rest of the cluster? This situation is a very real possibility, given the many different network connections an ESX cluster uses. The isolated host's VMs are still running, but the rest of the cluster sees the host as failed, even though it actually hasn't. At the same time, the isolated host sees that it's no longer receiving heartbeat responses from the other cluster nodes. What should it do with its VMs?

The answer to that question is configured with the Host Isolation Response setting. Your options are Leave powered on, Power off, and Shut down: will an isolated host leave its VMs powered on, ungracefully power them off (not unlike pressing the VM's power button), or gracefully shut them down?

You might initially think the most appropriate course of action is to leave VMs powered on. That can be good if you must ensure VMs stay alive. The problem is that those VMs are now running on a host that isn't participating in the cluster. HA on the surviving cluster hosts is likely attempting to fail over those VMs, but because they're still running, file locks inside VMFS prevent them from being failed over. This can cause problems.

As a result, it's generally a good practice to configure this setting to Power off. Even though this setting means you'll lose VMs during an isolation event, powering them down releases their locks so that the surviving cluster hosts can fail them over. Once they're failed over, you can fix the isolated host and rejoin it to the cluster.

Just like with VM restart policy, this option sets the default behavior for all VMs in the cluster. You'll be able to configure individual VM behaviors inside the properties of the cluster after it is created.
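Both cluster-wide defaults sit in the same HA configuration object as the admission control settings. Continuing the hedged pyVmomi sketch from earlier (and again assuming the existing cluster object), the defaults discussed above would look roughly like this:

from pyVmomi import vim

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        defaultVmSettings=vim.cluster.DasVmSettings(
            restartPriority='medium',       # default restart priority: low | medium | high
            isolationResponse='powerOff',   # isolation response: none | powerOff | shutdown
        )
    )
)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Per-VM overrides are carried in the same cluster spec (as dasVmConfigSpec entries), which roughly corresponds to the per-VM settings you adjust in the cluster's properties after creation.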

 

VM Monitoring

The final HA-specific wizard screen, shown in Figure 8, is VM Monitoring. These settings determine whether to enable monitoring on individual VMs, as opposed to individual hosts, which was configured earlier. They also define what the sensitivity of that monitoring will be.

Figure 8: VM Monitoring

By default, VM monitoring is disabled, because VM monitoring will restart a VM if its heartbeat isn't heard. That heartbeat can be interrupted in several situations, some of which have nothing to do with problems in the VM. For example, a VMware Tools failure or a misconfiguration of the VM's network card can prevent heartbeats from being received. In either case, even though the VM is functioning perfectly well, its lack of communication can cause HA to restart it. It's generally a good idea to leave this functionality disabled until you have a very good understanding of its implications. As with the other settings, you can always make changes once the cluster is created.
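If you do later decide to experiment with it, the switch lives in the same DasConfigInfo object used throughout the earlier sketches; in the vSphere API the field takes enumeration strings such as 'vmMonitoringDisabled' and 'vmMonitoringOnly' (an assumption worth verifying against your API version):

from pyVmomi import vim

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(vmMonitoring='vmMonitoringDisabled')
)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)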

 

HA: More Complicated than You'd Think

For a service that simply reboots VMs on surviving cluster hosts, HA is a deceptively complex beast. Plan carefully before you decide to enable it. Ensure you've got the spare host capacity to reserve for failover, which may mean buying more servers. Not having enough hardware can combine with HA's admission control policies to create a big headache down the road. Even when you enable HA, make sure you configure its settings carefully. Explore your available options and think through your configurations before you click Turn On VMware HA.
