Shortly after I left corporate employment and started working independently, I received a call from a former colleague, who had been doing consulting for several years, asking if I would be available for a two-week performance troubleshooting job in Orlando, Florida. At that time, because I had several young children, including a one-year-old, I was keeping my travel to a minimum and wouldn’t even consider leaving them for two consecutive weeks. My colleague (or perhaps it was his client) was desperate. He offered to pay me 25 percent more than my going rate at the time. I told him I couldn’t do it; it wasn’t the money, it was the time away from my family. He offered me 50 percent more. Really, it’s not about the money, I said. He offered me 75 percent more. Maybe I should have held out for even more, but a quick mental calculation determined that the increased amount of money I would make for the gig in Orlando, compared to what I would normally make for the same two weeks, would be enough to bring my whole family with me to Orlando. So I agreed. And so we had our first trip to Disney World.
The urgency came from the fact that the client in Orlando had deployed what they thought was a production-ready application without ever having done multiuser testing on any of the components. The application was made up of dozens of screens, each enabling the user to request certain operations (which ran T-SQL queries behind the scenes), and each screen had been tested to verify that it worked according to specifications. They had even brought in a color consultant to make sure the screens were aesthetically pleasing. However, they had never tested any part of the application with more than one user connected at a time. They deployed at 9 AM on a Monday morning, and by 10 AM they were on the phone looking for help: in less than an hour, excessive blocking had brought the application to a standstill. They were directed to my colleague, who was unavailable but offered to find someone. I was called before 11 AM that day.
Testing any SQL code in a multiuser environment is crucial if you’re ever going to be running with more than a single user. You might have the best indexes in the world, with perfectly tuned queries, but if those queries can’t get the data they need because another process has it locked, it will seem that your queries are the worst in the world! (Be aware that even with the snapshot-based isolation levels introduced in SQL Server 2005, you can still have blocking. Readers and writers are much less likely to block each other, but writers will still block other writers.)
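To see why single-user testing misses this, consider a minimal two-session sketch (the table and column names here are made up for illustration):

```sql
-- Session 1: update a row and deliberately leave the transaction open
BEGIN TRAN;
UPDATE dbo.TestLock SET Col1 = Col1 + 1 WHERE KeyCol = 1;
-- (no COMMIT yet)

-- Session 2: under the default READ COMMITTED isolation level, this SELECT
-- blocks until Session 1 commits or rolls back
SELECT Col1 FROM dbo.TestLock WHERE KeyCol = 1;

-- With snapshot-based isolation (e.g. READ_COMMITTED_SNAPSHOT enabled for
-- the database), Session 2's SELECT would return the last committed value
-- instead of blocking -- but an UPDATE of the same row in Session 2 would
-- still be blocked by Session 1's open transaction.
```

Run each session alone and everything looks instantaneous; it’s only when both run at once that the blocking appears.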
So how does one go about testing performance with multiple users? Ideally, you’ll want to test with as many concurrent connections as you’re ultimately expecting for your application during peak usage. There’s software available to do this testing, and an online search will turn up many possibilities. Some of the tools are described as “stress testing” or “load testing” tools, and they range from very expensive commercial products to home-grown tools offered for free download. Some of them test much more than just how well your application behaves when concurrent processes compete for the same data; they also exercise your hardware components, making sure your processors, I/O system, memory, and network can handle the load you’re going to be placing on them. I’m not going to recommend specific tools in this short commentary, because the choice of tool depends completely on what your environment is like and what your needs are. However, a quick online search for “SQL Server concurrency testing tools” will bring up many pages of suggestions, including commercial products and links to blog posts and forum questions that discuss this need.

One of the blog posts near the top of my own search results was actually a post I had written almost four years ago (“Geek City: Poor (Wo)Man's Load Testing”). In that post I suggested you could perform basic concurrency testing just by setting up several windows in your normal query tool, such as SQL Server Management Studio, running a commonly accessed procedure or batch in each, and terminating the batch with GO followed by a positive integer, indicating the number of times to run the batch. From a command-line environment, you could extend this just a bit by putting each T-SQL query or procedure call in a file and then using the Windows START command multiple times in a .bat file to execute the same T-SQL batch concurrently from multiple command windows.
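As a rough sketch of the poor man’s approach described above (the procedure name, server name, and file names here are all placeholders):

```sql
-- Paste this into several SSMS query windows and run them all at once:
EXEC dbo.MyProc;   -- some commonly accessed procedure (name is hypothetical)
GO 50              -- in SSMS, GO followed by an integer repeats the batch that many times
```

And the command-line variant, with the batch above saved to a file:

```
REM run_concurrent.bat -- each START opens a new command window,
REM so the copies of sqlcmd run concurrently; repeat the line as many
REM times as you want simultaneous connections
START sqlcmd -S MyServer -d MyDatabase -i test_batch.sql
START sqlcmd -S MyServer -d MyDatabase -i test_batch.sql
START sqlcmd -S MyServer -d MyDatabase -i test_batch.sql
```

Crude, but it will surface blocking that no single-window test ever could.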
This kind of very simple testing isn’t proof that your application will work in a multiuser environment, but it’s a good starting place, and it might reveal that the application won’t work under high concurrency.
More expensive tools let you specify which query to run and how many simultaneous executions to invoke, and they also let you specify a wait time (to simulate user “think time”) on each connection before the next iteration begins.
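You can mimic that think time even in the simple GO-based approach by adding a delay inside the batch itself (again, the procedure name and timings are just placeholders):

```sql
-- Each iteration runs the procedure, then pauses before the next one,
-- roughly simulating a user reading the screen before the next request
EXEC dbo.MyProc;          -- hypothetical commonly accessed procedure
WAITFOR DELAY '00:00:02'; -- pause 2 seconds between iterations
GO 50                     -- repeat the batch (including the delay) 50 times
```

Varying the delay across the different windows makes the simulated load a little less artificially synchronized.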
If the expensive load testing tools aren’t in your budget and you decide to evaluate the free tools, take a look at Microsoft’s own offerings. There’s a suite called the RML Utilities (RML stands for Replay Markup Language). Or you can use the SQLIOSim utility (designed to stress test your I/O system with multiple concurrent operations) to focus on the potential blocking issues with multiple users rather than on the impact on your I/O system. (You can read about SQLIOSim here.)
Obviously this commentary isn’t a full description of how to carry out concurrency testing, but rather a reminder that you should and can always perform at least minimal concurrency testing. However, if you’re based in Orlando, maybe you don’t need to bother. My kids would love another trip to Disney World!