
From One Web Server to Two: Making the Leap to a Web Farm

Grow your own web farm by adding hardware, setting up load balancing, and putting the servers on a cluster

By Richard Campbell

As web developers building applications in ASP.NET, we've been told that scaling our website will be easy: you just add more web servers. But the reality is not that simple. There are quite a few steps involved in actually getting multiple web servers working on the same website. Also, there are key design decisions in your application that can significantly impact how your web farm works. However, in the 24 x 7 web world, there is no substitute for building a web farm; you just have to know when and how.

When to Add a Web Server

So when should you add a second web server? There are really only three reasons to do so. By far the most common reason is reliability: When you have only one web server, having it die on you means your website is down. Adding a second web server gives you redundancy.

If reliability is your motivating reason, you should be looking at your entire infrastructure as well. You may need to significantly re-architect your infrastructure for redundancy. What about your database server? Can your website stay up if it fails? Can you seamlessly redirect web traffic from one server to the next? Once you head down the path of guaranteed uptime, you're going to need more than one of everything. Part of that goal is the web server, but it's important to remember that it is only part of a larger system that needs to be reliable.

The second reason for adding a web server is performance. It's easy to believe that if you double the number of web servers (say, from one to two), you'll double the performance of the website. The reality is not that nice. Although it's possible to improve the average response time of your website by adding more servers, there is certainly no guarantee of that. It is entirely possible to add a web server to a site and have no appreciable improvement in performance.

To know for sure that you'll get a performance benefit from adding web servers, you need to establish two things. The first is that as the number of users goes up, your website's performance goes down. Performance Monitor can tell you this: watch the requests-per-second and average response time counters (found under the ASP.NET and ASP.NET Applications counter categories). When average response time climbs as requests per second climbs, that's a sign that you're performance bound. But there's more: The second key factor to knowing you'll benefit from adding a web server is being able to show that the web server is buried, that it cannot serve any more users than it's currently serving.

Again, Performance Monitor can help you determine this. Look for pinned processors (% Processor Time), maxed-out memory (the .NET CLR Memory counters), and request queues growing out of control (the ASP.NET Requests Queued counter). It is easy to fool yourself looking at stats like these. For example, you could have a web application that's completely dependent on retrieving data from the database. Adding more web servers won't help you; you'll just have more servers waiting around for the database to deliver data. In fact, the additional database connections might make things even slower! In that scenario, however, it's not likely that your CPU will be pinned, but it is possible that you'll have significant request queues.
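
If you'd rather capture these numbers from a script than watch the Performance Monitor UI, PowerShell's Get-Counter cmdlet can sample the same counters. The following is a minimal sketch; counter category names can vary slightly between .NET versions, so verify them on your own servers (Get-Counter -ListSet ASP.NET* lists what's available) before relying on the output.

# Sample the counters discussed above every 5 seconds for one minute and
# save them to a log file for later comparison. The paths and the log file
# location are typical values; verify them on your own servers.
$counters = @(
    '\ASP.NET Applications(__Total__)\Requests/Sec',
    '\ASP.NET\Requests Queued',
    '\ASP.NET\Request Execution Time',
    '\Processor(_Total)\% Processor Time',
    '\.NET CLR Memory(_Global_)\# Bytes in all Heaps'
)

Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    Export-Counter -Path 'C:\PerfLogs\webfarm-baseline.blg' -FileFormat BLG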

There are more impediments to getting performance benefits from additional web servers than just database bottlenecks. I'll go into more of these issues later in the article, but it is well worth your time to think hard about how your application functions before presuming that adding servers will improve performance. It can be a career-limiting mistake to spend thousands of dollars on new computer equipment that provides no benefit.

Finally, the third reason to go from one web server to two is seamless website updating. If you have a 24 x 7 website, taking the site down to do updates is not an option. By having two or more web servers, you can pull servers out of the pool, update them with the latest version of the web application, and place them back into the pool without the site ever being down. Actually making this process work is tricky, but it's a highly valued skill in the modern web world.

Buying Hardware

When you're getting ready to build your web farm, especially for the first time, leave your existing gear alone. It's time to buy all new gear: you'll need to do quite a bit of testing before you'll be ready to take over for the existing equipment. Trying to buy just enough gear to turn the existing setup into a web farm is a recipe for disaster.

If this is your first web farm, you likely have only three sets of working equipment now: your development gear, QA gear, and the production environment. The development and QA gear likely are very similar to each other, whereas the production system will be different, possibly with more redundancy and performance.

Once you enter the world of web farms, you need a pre-production environment. This equipment needs to reflect the production environment. It doesn't have to be identical, but it should be close. The pre-production environment is where you can run load tests and make sure that your application is going to behave properly in the web farm. When you're shopping for your first web farm, buy the pre-production environment first. It will help teach you what you'll really need for production.

Ideally, your web servers should be symmetrical: all identical machines. Although it's possible to load balance between asymmetrical machines, doing so will very likely cause more problems than it's worth. Any perceived savings in cost will be quickly wiped out by the additional cost of diagnosing problems with the asymmetrical configuration. They don't need any internal redundancies, like multiple hard drives or power supplies. After all, if you've set up your farm correctly, you should be able to have a web server fail with no significant impact on the application at all. Web servers in a web farm should be inexpensive and plentiful.

Load Balancing

One of the most challenging decisions to make when setting up your first web farm is how you're going to load balance between your servers. Microsoft offers a few ways to do load balancing. The simplest is Network Load Balancing (NLB), which comes with every copy of Windows Server. If you're running Microsoft IIS 7, you also have Application Request Routing (ARR), which does load balancing in addition to request routing (e.g., separating image requests from ASPX requests). Finally, Microsoft ISA Server also has a load balancing feature.

There are numerous third-party load balancing hardware solutions as well. The biggest (and most expensive) on the market come from companies such as F5 Networks, Citrix Systems, and Cisco Systems. There are also lower-cost solutions from companies such as Zeus Technology, Coyote Point Systems, and Barracuda Networks. Hardware load balancers offer a large number of options for load balancing as well as other features such as SSL offloading. If you're considering a third-party load balancer, remember that you may need to buy two for redundancy's sake, and factor in the costs of training and/or consulting for configuration. You'll also need a load balancer in your pre-production environment. It's smart to get to know your load balancer well; it's an expensive piece of equipment with many features that can help your website.

Setting Up NLB

You can't argue with the price of NLB: It's included with Windows. It also requires no additional hardware, since it runs on the web servers themselves. There's no central point of control; every web server knows what every other web server is doing, so there is no single point of failure. NLB is an algorithmic load balancer, splitting the workload between the web servers in a round-robin style by IP address. You configure rules for what NLB load balances and how the servers are balanced.

Every web server in an NLB cluster listens on a common virtual IP address. Because all the servers have the same algorithm in them, when a request comes in, they know which server should respond to that request, and only that server will respond to a given request. The web servers keep in touch with each other via a status packet that's sent every few seconds on the virtual IP address. When a web server stops sending that packet, the server is dropped from the cluster automatically. The remaining servers recompute the algorithm to load balance without the failed server.
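
NLB's actual distribution algorithm is more sophisticated than this (and is shaped by the port rules and affinity settings covered below), but here is a simplified C# sketch, purely for illustration, of the core idea: because every node evaluates the same deterministic function over a request's source address, all of the nodes independently agree on a single owner without consulting a central dispatcher.

using System;

// Simplified illustration only; this is not NLB's real algorithm. Every node
// runs the same deterministic function over the client's source address, so
// each node independently reaches the same answer about which host should
// accept the connection, and only that host responds.
class NodeSelectorDemo
{
    static int OwnerOf(string clientIp, int nodeCount)
    {
        // A stable hash of the source address, mapped onto the node list.
        int hash = 17;
        foreach (char c in clientIp)
            hash = unchecked(hash * 31 + c);
        return (hash & 0x7FFFFFFF) % nodeCount;
    }

    static void Main()
    {
        const int clusterSize = 2;
        foreach (string ip in new[] { "203.0.113.7", "198.51.100.42" })
        {
            // Each host compares the result to its own host ID; only the
            // matching host answers the request.
            Console.WriteLine("{0} -> handled by host {1}", ip, OwnerOf(ip, clusterSize));
        }
    }
}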

You have to install NLB on your web servers; it isn't installed by default. NLB is part of the Windows Networking features. It's important to note that NLB is not a web server specific feature; it can load balance anything and is used with Microsoft Exchange Server, SQL Server, and other Windows services.
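
If you'd rather script the installation than click through Server Manager, the feature can be added from PowerShell on Windows Server 2008 R2 and later. This is a minimal sketch; confirm the feature names with Get-WindowsFeature on your own build.

# Add the Network Load Balancing feature plus the NLB Manager tooling
# (RSAT-NLB) so the cluster can be administered from this machine.
Import-Module ServerManager
Add-WindowsFeature NLB, RSAT-NLB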

Once you've installed NLB, you can start creating a cluster. The first step is selecting the first host to be in the cluster, which is the first web server of the cluster. In Figure 1, I've entered the IP address of the first web server for the cluster. All my web servers have NLB installed on them, but you don't have to run NLB Manager from a web server; you can run it on any computer that can connect to the web servers.

Figure 1: Creating a new cluster

The next step in setting up the cluster is specifying host parameters, as Figure 2 shows. The priority of the host matters only when a request comes into the virtual IP that is not covered by the port rules you've set. When such a request comes in, the server with the lowest priority number handles it. Another host parameter is the initial host state. By default this is set to Started, meaning that every time the web server starts up, it joins the load balancing cluster immediately. You can choose to have freshly started servers stay out of the load balancing cluster until you tell them to join.

Figure 2: Setting the host parameters for the first host in the cluster

Once the host parameters for the first host in the cluster are set, it's time to specify the cluster IP address. This is the virtual IP address that all servers in the cluster will listen to. It's important to make sure this IP address isn't used by anything outside the cluster. As you see in Figure 3, you also set the cluster operation mode, which controls how MAC addresses work in the cluster. The default is Unicast mode, which effectively gives every web server in the cluster the same MAC address. Multicast mode lets the servers keep their own MAC addresses. IGMP multicast is used only on networks taking advantage of the IGMP protocol.

Figure 3: Configuring the cluster IP address

As your cluster grows beyond two servers, you'll find that the amount of network traffic NLB generates is hard on your network, both in test and in production. It's important to isolate that traffic from the rest of your network, either with isolated switches or with virtual LANs (VLANs).

Your next step is to set port rules; Figure 4 shows the dialog box where you do so. Since I'm building a web server cluster, I need to create a rule only for port 80 using the TCP protocol. The filtering mode is the key part of this dialog box: It decides the affinity of the web farm. Setting affinity to None means there's no affinity; requests from a given IP address can go to a different server every time. Setting affinity to Single means a given IP address always goes to the same server. Network affinity is used when you have tiers of clusters, something you'll need when you have many (more than 10) web servers. Affinity is a key issue in making web farms work well; I'll discuss it in more detail later.

Figure 4: Creating port rules for the cluster

After the port rules are set up, the cluster will start to set itself up. This takes some time, since NLB is now reprogramming the NIC in the first web server, setting it to listen on the virtual IP address. When this process is finished, you have a cluster of one server set up. The next stage is to add a second host to the cluster, as Figure 5 shows. The process is identical to adding the first host to the cluster, except that the cluster IP address is already set, as are the rules.

Figure 5: Adding the second host to the cluster

Since in this example there are only two servers, once the second server is added, the cluster is finished. Figure 6 shows NLB Manager after the second server has been added but is not yet fully configured in the cluster. The status of the servers will cycle through three phases: Pending, Converging, and Converged. It can take several seconds for this to happen.

Figure 6: Configuration changes underway in NLB Manager

Once the cluster is set up, you should be able to hit your web application on the virtual IP address. The old IP addresses of the servers are still there, but they are specific to each server. You want your users to only ever reference the web farm by the virtual IP, which means changing some DNS entries. The original IPs still work, and they're very useful for administrators who need to check individual servers, but users themselves should never use them.
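
The same cluster can also be built entirely from script using the NetworkLoadBalancingClusters PowerShell module that ships with the NLB tools on Windows Server 2008 R2 and later. The sketch below mirrors the GUI steps above; the host names, interface names, and addresses are placeholders, and the parameter names are worth checking against Get-Help on your own systems.

# Build the two-node cluster from PowerShell instead of NLB Manager.
# Host names, interface names, and IP addresses below are placeholders.
Import-Module NetworkLoadBalancingClusters

# Create the cluster on the first web server, listening on the virtual IP.
New-NlbCluster -HostName WEB1 -InterfaceName "Local Area Connection" `
    -ClusterName WebFarm -ClusterPrimaryIP 192.168.10.50 `
    -SubnetMask 255.255.255.0 -OperationMode Unicast

# Replace the default catch-all port rule with a TCP port 80 rule, no affinity.
Get-NlbClusterPortRule -HostName WEB1 | Remove-NlbClusterPortRule -Force
Add-NlbClusterPortRule -HostName WEB1 -StartPort 80 -EndPort 80 `
    -Protocol Tcp -Affinity None

# Join the second web server to the cluster.
Get-NlbCluster -HostName WEB1 | Add-NlbClusterNode -NewNodeName WEB2 `
    -NewNodeInterface "Local Area Connection"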

So What's the Big Deal About Affinity?

Affinity is the term used to indicate that a given user on your website is storing information specific to that user on a given web server. The user is bound to the web server because they need to keep coming back to that server to use the information stored there. In ASP.NET, typically the bound data is in-process session data. If your user is loading items into a shopping cart on your website and the shopping cart is stored in an in-process session, sending that user to a different server will cause the items in their cart to seem to disappear, or worse, the user will get an Object Not Found error.

If you store your session data in process but don't set your load balancing to bind the user to the server, you'll confuse yourself and your users. It isn't always obvious that the problem is your load balancing configuration. Sometimes your site will work fine, sometimes things will disappear, and sometimes you'll get errors.

Whether software- or hardware-based, all load balancers offer some sort of affinity. The simplest form sticks a given IP address to a given server; this is what NLB supports. The downside to that sort of affinity is that a large number of users can come from the same IP address (behind a corporate proxy, for example), so the server that gets stuck with that IP address can be overwhelmed.

A better form of load balancing affinity is cookie-based: using the ASP.NET session cookie as the affinity identifier. With cookie-based load balancing, a given cookie is always sent back to the same server. Most third-party hardware load balancers support this technique, referring to it as sticky sessions, as does ARR. Regardless of the technique you use, getting the configuration right is important, but an even better solution is to get rid of affinity altogether.

Recall that there are only three reasons to go to multiple web servers: reliability, performance, and seamless updates. Affinity impairs all three of these goals. If you need reliability, storing your session data in a web server means that session data vanishes when the server fails, which leads to unhappy users. If you want performance, affinity adds overhead; it takes extra work for NLB or any other load balancing system to keep a given user stuck to a given server. And if that server gets overloaded with work, there's no way to get away from that busy server; you're stuck to it. Finally, seamless updating means getting all the users off the server when it's time to update. If you've got users stuck to that server, it can take hours or more to get all the users off of it without negatively impacting them.

Clearly then, the goal of any web farm is to get rid of affinity. So what does it take?

Moving Session Out-of-Process

Microsoft offers two options for moving session out-of-process: State Server and SQL Server. State Server is a free bit of software included with ASP.NET for storing state data. It's a fast, simple server with no redundancy options. Typically you'd have a dedicated server running behind your web servers for storing state data, and all your web servers would fetch state data from it.

Unfortunately, in this configuration if the state server fails, you've lost all the session data from all users. And since there's no redundancy option, there isn't much you can do when it fails except set up a new state server and get your users to start over.

SQL Server is a popular option for out-of-process session data for a couple of reasons. The first is that most of the time, you're already running SQL Server, so there's no additional licensing or hardware needed. Also, there are redundancy options for SQL Server. Unfortunately, SQL Server is the slowest of the out-of-process options, although the actual amount of time it takes to store and retrieve session data from SQL Server is not significant if the amount of data stored in session isn't too large.
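
Here's a minimal sketch of what the web.config change looks like for each option. The server names and connection strings are placeholders; for the SQL Server option, the session state database has to be created first (the aspnet_regsql.exe tool's -ssadd switch does that).

<configuration>
  <system.web>

    <!-- Option 1: the ASP.NET State Service running on a machine named
         STATE1 (42424 is the service's default port). -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=STATE1:42424"
                  timeout="20" />

    <!-- Option 2: SQL Server. Create the session database with
         aspnet_regsql.exe -ssadd first, then use:
         <sessionState mode="SQLServer"
                       sqlConnectionString="Data Source=SQL1;Integrated Security=SSPI"
                       timeout="20" />
    -->

  </system.web>
</configuration>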

As the example above shows, switching your session to out-of-process is just a web.config change that specifies how and where to store the session data. But before you make that change, have you marked all the objects going into the session object as serializable?

When you switch to out-of-process session, you're switching from storing session data in the memory of your web server to a more complex process. At the beginning of each web request that references a page using session data, ASP.NET makes a call to the out-of-process session store to retrieve the session. This data is in a serialized form, that is, a form that can be transmitted over the network. Once it arrives back at the web server, page processing continues. When something is referenced from the session object, it is deserialized back into memory; note that an object in the session is placed into memory only when it is actually referenced. When the page is finished processing, the session object, with any changes, is serialized again and sent back to the session store for the next request.

To mark your objects as serializable, you apply the Serializable attribute to the class. These days, virtually every type in the .NET Framework is already serializable, but your own classes are not serializable by default, so if you move your session out-of-process without adding the attribute, you'll get a serialization error when ASP.NET attempts to serialize the object stored in session. It's a vague error and takes time to debug.
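
As a hypothetical illustration (the class and property names here aren't from any real application), marking a shopping cart class and everything it contains with the Serializable attribute is all it takes:

using System;
using System.Collections.Generic;

// Both the cart and every type stored inside it must be marked [Serializable],
// or ASP.NET will throw a serialization error the first time it tries to send
// the object to the out-of-process session store.
[Serializable]
public class CartItem
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}

[Serializable]
public class ShoppingCart
{
    private readonly List<CartItem> _items = new List<CartItem>();
    public List<CartItem> Items { get { return _items; } }
}

The code that reads and writes Session["Cart"] doesn't change at all; the serialization and deserialization happen behind the scenes, exactly as described above.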

In an existing application, you'll have to search for every reference to the session object and check exactly what you're putting into it. Make sure the Serializable attribute is applied to each of those types. This is also an excellent time to think hard about exactly what you're stuffing into the session object. More is definitely not better; now that you're sending your session data out of your web server with every web request, you want to keep your session object small.

There's another significant advantage to going out-of-process with session: You stop using .NET memory for session on your web server. Most really busy web servers are running low on memory most of the time. Getting session out of that memory can really help with performance and reliability.

Getting the Real Benefits of Web Farms

The real benefits of web farms are reliability, performance, and seamless updates. Once your application doesn't require affinity, you can easily add more servers on demand and recover gracefully from failed servers. When you want to update your servers, you drain each one of connections (even NLB has this feature), remove it from the pool, update it with the latest version of the application, then add it back into the pool. This process can be scripted for rapid, seamless website updating.
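
As a hedged sketch of what that script might look like with NLB's PowerShell cmdlets (the host name and the deployment step are placeholders; hardware load balancers expose their own APIs for the same sequence):

# Drain one node, update it, and return it to the pool.
Import-Module NetworkLoadBalancingClusters

# Stop accepting new connections on WEB1 and let existing ones finish;
# -Drain performs a drainstop rather than an immediate stop, waiting up
# to the timeout for connections to clear.
Stop-NlbClusterNode -HostName WEB1 -Drain -Timeout 10

# Deploy the new version of the application here: copy files, run your
# release script, or whatever your process uses (placeholder step).
# & .\Deploy-WebApp.ps1 -Target WEB1

# Put the updated server back into the load balancing pool.
Start-NlbClusterNode -HostName WEB1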

To get these benefits, you need to do lots of testing and practice. Use new, separate hardware for this; ultimately you'll need to build a pre-production environment. Get good at load testing; you want to test your web farm and see what benefits you're getting each time you add a web server. Practice failures, too: Pull the plug on your servers during a load test and see what failure looks like.

A web farm is more than just going from one server to two; it's significantly changing the architecture of your application and its infrastructure. With some effort and practice, those changes will take your web application to new levels of success.

Richard Campbell ([email protected]) is a co-founder of Strangeloop Networks. He has more than 30 years of high-tech experience and is both a Microsoft Regional Director and a Microsoft MVP. In addition to speaking at conferences around the world, Richard is co-host of .NET Rocks! (www.dotnetrocks.com) and host of RunAs Radio (www.runasradio.com).
