Picture this: You've just finished debugging your final stored procedure for the department's latest SQL Server application. You're ready to check in your work and call it a day—or are you? Are you sure your T-SQL code will stand up to users' demands? What about the functionality you've designed into your code? Have you covered every business requirement? Does each function deliver as advertised under all normal operating scenarios?
Even if you can answer "yes" to all those questions, it's not time to relax. What about the outcomes you hope won't occur? Have you tested common error conditions that caused problems in the past? And what about the error handlers? Are you sure they're free of bugs? If you find yourself falling short in any of these areas, you might want to reassess the way you unit-test your stored procedures.
Unlike system tests that professional testers conduct after an application is "code complete," unit testing looks for errors in individual modules, such as stored procedures, while those modules are being developed. Unit testing isn't difficult, but to work effectively, it requires planning, documentation, and above all, a shared understanding of some basic principles. So before jumping into the testing process, let's first dispel a few common myths about unit testing that often get in the way of its proper use.
Myth 1: Testing is just another name for debugging. Testing looks for errors and reports their occurrence; debugging tries to find what caused the error, then corrects the defective code.
Myth 2: Testing is used to show a program is free of errors. Your testing goal should be to find errors, not to prove they don't exist.
Myth 3: Testing is the job of professional testers. Testing should be done in stages and involve programmers and professional testers. The first stage, unit testing, is the programmer's responsibility. After your code has passed its unit tests, it's ready for system testing by a quality-control professional.
Myth 4: Testing is an end-of-coding task. For best results, you need to plan how to test every piece of code you write; designing the tests should ideally come before—not after—you start coding.
Myth 5: Formal testing is unnecessary for stored procedures. To some DBAs, testing means throwing a few parameter values at a procedure and declaring victory when the query executes cleanly. Unfortunately, such throwaway tests are hard to debug, aren't reusable, and waste time. Formal testing takes discipline, but that's the price of improved reliability.
So how should you test your stored procedures? You first unit-test each procedure by feeding it appropriate test data, observing the outcome, and retesting as needed to make sure the bug fixes contain no new errors. Unit-testing a stored procedure involves a series of test-debug-retest cycles. The number of cycles will depend on your company's defect tolerance, your application's release date, and, of course, the "bugginess" of your code.
Each of your unit tests should follow these five steps:
- Define the expected outcomes.
- Prepare the test data.
- Run the test and verify the outcome.
- Document the results.
- Retest as needed.
Over time, you'll compile a library of tests that you can reuse for other procedures, share with other developers, and adapt to future development projects. As your test library grows, so will your testing prowess. The result will be a quicker, more effective testing regimen, uniquely suited to your (and your organization's) development needs and style.
Keep in mind that unit testing is just one phase of testing. After you've unit-tested a procedure, you (or your "build manager," if you have one) might combine it with other parts of your application such as a business object or a UI component such as an Active Server Pages (ASP) page. Integration testing at that level is beyond the scope of this article, but in an upcoming article, I'll cover an early step that often occurs in that process—integration testing of two procedures.
To examine the unit-testing process, let's walk through the steps of a practical example of testing a procedure that looks up a product price, then adds a record to an Orders table. The sample procedure requires a Products table and an Orders table, which you can create by running the script that Listing 1 shows. You can implement the business logic as a single procedure, but to better observe the testing process, let's separate it into two procedures. This article covers the inner procedure, usp_lookupPrice, which looks up prices in the Products table. My next article will cover testing of the outer procedure, usp_insertOrder, which uses the inner procedure to prepare an order record.
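Listing 1 isn't reproduced here, but a minimal sketch of the two tables might look like the following. The column names, types, and constraints are my assumptions, inferred from how the procedures behave later in the article:

```sql
-- Hypothetical reconstruction of Listing 1's tables; names and types are assumed.
CREATE TABLE Products (
    product_id    int   NOT NULL PRIMARY KEY,
    product_price money NULL        -- NULL allowed so the missing-price bug can be tested
)

CREATE TABLE Orders (
    order_id   int      NOT NULL IDENTITY PRIMARY KEY,
    product_id int      NOT NULL REFERENCES Products (product_id),
    unit_price money    NOT NULL,
    order_date datetime NOT NULL DEFAULT GETDATE()
)
```

Note that product_price is nullable by design; a NOT NULL constraint would prevent the very defect this walkthrough is built to catch.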
Step 1: Define the Expected Outcomes
Every stored procedure has some kind of outcome, if only a return value or a system error message. Most procedures also perform database operations that produce other kinds of outcomes; obvious examples include INSERT procedures that add table records and SELECT procedures that generate result sets. Stored procedures can also generate custom error messages or status codes as well as return values, print statements, or output parameters. The lookup procedure, usp_lookupPrice, which Listing 2 shows, can produce all these different outcomes. The procedure searches for a product, sets an output parameter equal to the product's unit price, then returns a default status code of 0, indicating successful execution.
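Listing 2 isn't shown here, but based on the outcomes just described (an output parameter, print statements, two custom status codes, and a default return of 0), the procedure might look roughly like this sketch. The exact statements and messages are my assumptions:

```sql
-- Hypothetical sketch of usp_lookupPrice; the real Listing 2 may differ in detail.
CREATE PROCEDURE usp_lookupPrice
    @product_id    int,
    @product_price money OUTPUT
AS
SELECT @product_price = product_price
FROM   Products
WHERE  product_id = @product_id

IF @@ROWCOUNT = 0
BEGIN
    PRINT 'Row not found'
    RETURN 10      -- custom status: no matching product
END

IF @product_price IS NULL
BEGIN
    PRINT 'Unit_price is null'
    RETURN 11      -- custom status: price column is NULL
END

RETURN 0           -- default status: success
```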
Let's start defining outcomes by looking at the output parameter, @product_price. The first expected outcome is the product price being returned to the calling program. To better see what's happening in this test scenario, use the code Listing 3 shows to populate the Products table with data. You can then use Listing 4's code to execute the usp_lookupPrice procedure.
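Listing 3's data isn't reproduced here, but a version consistent with the prices discussed below might look like this (the mapping of IDs to prices is my assumption; record 5's missing price is deliberate):

```sql
-- Hypothetical version of Listing 3's test data.
INSERT INTO Products (product_id, product_price) VALUES (1, 19.95)
INSERT INTO Products (product_id, product_price) VALUES (2, 29.95)
INSERT INTO Products (product_id, product_price) VALUES (3, 39.95)
INSERT INTO Products (product_id, product_price) VALUES (4, 11.00)
INSERT INTO Products (product_id, product_price) VALUES (5, NULL)
```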
Depending on the product_id you use to test usp_lookupPrice, the expected outcome for @product_price will be $19.95, $29.95, $39.95, $11.00, or NULL. But what if you select an invalid product_id? The result will depend on the test value you feed to usp_lookupPrice. For example, inputting an integer that matches no record, say 11, as the product_id will leave @product_price NULL. Entering an alphabetic character such as a or f will generate a Server: Msg 8114 error message: Error converting data type nvarchar to int. And failing to enter a product_id or entering a special character such as & will cause a Server: Msg 170 error message: Incorrect syntax near ','. So, depending on your test values for product_id, you can expect the execution of usp_lookupPrice to result in a valid product_price (when product_id is 1, 2, 3, or 4), a null product_price, or a T-SQL message signaling a conversion error, a syntax error, or other system error.
Now, what about the return statements in usp_lookupPrice? You can use a return statement to pass error-handling control to a calling program or procedure by assigning custom integer status codes to different error conditions. You can program the calling process to handle each of these status codes however you choose, including ignoring it.
The expected outcome for a return statement is 0—the default status code for a procedure that executes successfully. Besides 0, a return statement can return Microsoft-reserved codes of -1 to -99, as well as any custom codes you define. The usp_lookupPrice procedure contains two custom status codes, 10 and 11, so the expected return values are
- 0, when no errors or warnings occur
- 10, when no unit price is found
- 11, when the unit price is null
- -1 to -99, when usp_lookupPrice raises a reserved status code
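A calling batch might branch on those status codes along these lines. This is my own sketch, not the article's Listing 4:

```sql
DECLARE @price money, @rc int

EXEC @rc = usp_lookupPrice 2, @price OUTPUT

IF @rc = 0
    SELECT @price AS 'Unit Price'        -- success: use the price
ELSE IF @rc = 10
    PRINT 'Error 10: row not found'
ELSE IF @rc = 11
    PRINT 'Error 11: unit price is null'
ELSE
    PRINT 'Reserved or unexpected status code returned'
```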
Step 2: Prepare the Test Data
The key to effective unit testing is choosing the right starting conditions. In most cases, these starting conditions are a combination of test values you assign to a procedure's input parameters. Together with the expected outcomes, the values you assign to these parameters constitute a test case. But beware—a good test case requires that you choose the right combination of test values.
Testing experts have devised many ways (sometimes overlapping) of choosing test values. The classic distinction is between white-box and black-box tests. White-box, or code-based, tests use test values that probe for errors along each logical pathway in a block of code. Black-box, or specification-based, tests use test values that verify that a procedure correctly performs its assigned responsibilities, such as inserting or deleting a record. Table 1 summarizes these and other common test-case design techniques.
You can create test cases from any or all of these techniques. The more techniques you tap, the more comprehensive your tests will be. Just remember: Every new test case takes extra time to design, program, execute, and analyze. Consequently, many development shops set rules limiting how many tests to run and in what order to run them. If your team hasn't yet established such rules, you might want to start.
When you're unit-testing stored procedures, starting with white-box testing techniques is usually best because procedure developers are the ones with the best overall grasp of the code. Also, unlike professional testers in many shops, developers usually have direct access to their own procedures. So if you (the programmer) don't do it, white-box testing will probably be ignored. Time permitting, you can augment your white-box tests with other techniques, but your T-SQL code is where you need to concentrate your unit-testing efforts.
As I mentioned, with white-box testing, you check for errors along each logical path through a stored procedure. Practically speaking, this means testing all possible combinations of true/false conditions created by IF, GOTO, and other branching statements. (Looping structures create an additional layer of logical complexity that you can explore later.)
As you can see in Listing 2, usp_lookupPrice contains two IF statements (in lines 6 and 8). Each condition first checks for potential errors and, if it finds one, returns a unique user-defined status code to the calling procedure. When you combine them, the two branching statements define three separate paths through the T-SQL code in usp_lookupPrice:
- Path 1: statements 5, 6, 8, and 10
- Path 2: statements 5, 6, and 7
- Path 3: statements 5, 6, 8, and 9
Table 2 shows the testing details of the three paths. Path 1 tests usp_lookupPrice's core business logic—namely, setting @product_price equal to a valid product_price and returning a status code of 0. Paths 2 and 3 test the two error handlers. Path 2 returns a status code of 10 and an error message of Row not found; Path 3 returns a status code of 11 (Unit_price is null) along with a null unit_price.
To test all three paths requires a separate test case for each. The test case for Path 1 covers the normal state, in which requesting a valid record (1, 2, 3, or 4) returns a status code of 0 and sets @product_price to the product_price value for the corresponding record. For Path 2, @@rowcount will be 0 whenever @product_id fails to match a product_id in the Products table. Any product_id greater than 5 meets this condition. So, running a test using 6 as the @product_id should produce the expected outcomes for Path 2 in Table 2—namely, a return value of 10 and a NULL output parameter value.
Finally, for Path 3, recall that record 5 in the Products table contains no product_price—a common bug I've recreated for this example. Requesting a product_price for record 5 will have two expected effects—a return value of 11 and a NULL @product_price output parameter.
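All three path tests can be driven from Query Analyzer with three calls, sketched below. The expected values assume the sample data described earlier (product 2 priced at $29.95, product 5 with a NULL price, and no product 6):

```sql
DECLARE @price money, @rc int

-- Path 1: valid product with a price; expect a return of 0 and the product's price
EXEC @rc = usp_lookupPrice 2, @price OUTPUT
SELECT @rc AS 'Return Value', @price AS 'Unit Price'

-- Path 2: nonexistent product; expect a return of 10 and a NULL price
SET @price = NULL   -- reset so a stale value can't mask the result
EXEC @rc = usp_lookupPrice 6, @price OUTPUT
SELECT @rc AS 'Return Value', @price AS 'Unit Price'

-- Path 3: product 5 has a NULL price; expect a return of 11 and a NULL price
EXEC @rc = usp_lookupPrice 5, @price OUTPUT
SELECT @rc AS 'Return Value', @price AS 'Unit Price'
```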
Step 3: Run the Test, Verify the Outcome
Before running your tests, gather all the information and other resources you need, including the expected outcomes, the test cases, any special instructions for conducting the tests (e.g., what order to run the tests in), and any special setup requirements (e.g., log into the database as tester) and clean-up operations. You also need some way to feed input values to the procedure (I'm using Query Analyzer in my examples) and a spreadsheet, data table, or automated test manager for logging test results.
Sometimes, you'll run a test once and never repeat it. But you'll discover that you rerun most of your tests many times. Because an effective test is one that uncovers bugs, your first test run should usually be followed by a second to check your bug fixes. Thereafter, every code change you (or others) make should be followed by another test-debug-retest cycle.
To maintain consistency across cycles, you need a written script that tells the tester how to set up and run the test and record the results. For tests with many setup requirements or execution steps, the test script should include detailed instructions that stand alone. When preparing your test scripts, supply enough detail for someone else to carry out the test without having to ask for additional information or undergo special training. To save time and improve consistency from test to test, reuse the same scripts whenever possible.
Table 3 contains a sample script for testing the usp_lookupPrice procedure. Steps 3.1 through 3.3 are essential setup operations for all unit-level procedure tests. Step 3.4 verifies that a successful database connection has been made and that the correct data has been loaded into the test tables.
Step 3.5 shows a test driver you can use to feed test values to the stored procedure you're testing. We're running our tests directly from Query Analyzer here, but if you're doing a lot of testing, you might consider writing a stored procedure to automate this process. Another option is to run your test from a Web page or other client application. At this early testing stage, it's best to minimize outside interference—like a dead Web server—by staying as close to the code module as possible.
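If you do decide to automate the driver, a wrapper procedure along these lines can run one test case and report pass or fail. This is my own sketch, not part of the article's listings, and every name in it is an assumption:

```sql
-- Hypothetical test-driver wrapper for usp_lookupPrice.
CREATE PROCEDURE usp_testLookupPrice
    @test_product_id int,
    @expected_rc     int,
    @expected_price  money
AS
DECLARE @price money, @rc int

EXEC @rc = usp_lookupPrice @test_product_id, @price OUTPUT

-- Compare actual against expected, treating NULL = NULL as a match
IF @rc = @expected_rc
   AND (@price = @expected_price
        OR (@price IS NULL AND @expected_price IS NULL))
    PRINT 'PASS'
ELSE
    PRINT 'FAIL'
```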
Finally, Step 3.6 executes the test, documents the results, and records the test outcomes. In this case, you can simply look at the query results to be sure the expected outcomes occurred. If the input data passes the test, you can note that fact and move on to the next test. For procedures that contain INSERT, UPDATE, or DELETE operations, you also need to verify that all test records have been correctly added, modified, or removed from the proper tables.
When the correct outcome doesn't occur, you need to record that result, as I discuss later. Include all relevant details about the actual outcome such as any error messages returned or any potential problems, like a broken connection. This information tells whoever debugs the procedure where to begin looking for likely errors. It also lets you know after you run a post-debug test whether you've fixed the bugs that caused the initial test to fail.
Finally, to wrap up the test cases for usp_lookupPrice, let's check the results for the two remaining scenarios in Table 2. Test Case 2 requires a product_id greater than 5, so let's request product_id 6 to test the error-handling code that captures out-of-bounds product IDs. Using the test driver from Step 3.5, run the following script from Query Analyzer:
DECLARE @price money, @return_value int
EXEC @return_value = usp_lookupPrice 6, @price OUTPUT
SELECT @price "Unit Price"
SELECT @return_value "Return Value"
As expected, this request causes usp_lookupPrice to fail and returns a status code of 10—along with a NULL output parameter—thus verifying the correctness of the error-handling code.
Requesting product_id 5 also causes an expected fault and returns a status code of 11, indicating an invalid price (to run this test, just replace the 6 with a 5 in the preceding script). Again, the test results show that the error-handling code is behaving correctly.
Step 4: Document Your Test Results
Documenting your results is an essential part of the testing process. So, depending on your test outcome, you need to record your findings in at least one of two places. First, if your test succeeds, record that fact in a test log. A test log need not be a complex document; keep it as simple as possible, or you'll probably never use it. You can create a test table in your database or, better yet, create a dedicated database for your testing work. Begin by assigning a unique, unchanging number to each test you run. In the test log, also include all relevant information—the test conditions, the test values, the expected outcomes, the person who designed the test, the name of the procedure being tested, and the test script's file location.
Because you'll be running most tests multiple times, consider using a separate TestDetail table as a running record of each testing cycle. Besides a test_id foreign key, the detail table should include the state of the test (e.g., scheduled but not yet run or run but not yet completed), the date and time the test was run, the person running the test, the test outcome, the test result, and any follow-up action to be taken. Follow-up actions might include rerunning the test, debugging the procedure, or analyzing the test conditions when you suspect something went wrong with the test.
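One possible shape for these two logging tables is sketched below; all table and column names here are my assumptions, derived from the fields just described:

```sql
-- Hypothetical test-log schema; adapt the names and sizes to your shop.
CREATE TABLE TestLog (
    test_id          int          NOT NULL PRIMARY KEY,  -- unique, unchanging test number
    procedure_name   sysname      NOT NULL,
    test_conditions  varchar(500) NULL,
    test_values      varchar(500) NULL,
    expected_outcome varchar(500) NULL,
    designed_by      varchar(50)  NULL,
    script_location  varchar(255) NULL
)

CREATE TABLE TestDetail (
    detail_id  int          NOT NULL IDENTITY PRIMARY KEY,
    test_id    int          NOT NULL REFERENCES TestLog (test_id),
    test_state varchar(30)  NOT NULL,  -- e.g., scheduled, run, completed
    run_date   datetime     NOT NULL,
    run_by     varchar(50)  NOT NULL,
    outcome    varchar(500) NULL,
    result     varchar(10)  NULL,      -- pass or fail
    follow_up  varchar(500) NULL       -- rerun, debug, or analyze
)
```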
If the test fails, you still need to record that fact in your test log. In addition, you need a second document for tracking how you respond to test failures. This document is commonly known as a software problem report (SPR) or a bug tracker. When you get to system-level testing, SPRs are usually tracked by professional testers (if you have them) and become official entries in your project's change-management system. At the unit-testing level, the main purpose of an SPR is to help you, the procedure developer, better organize your testing and debugging work.
Again, you'll probably want to create a separate tracking table to store the data that goes into your SPR. That data should include the type of error (e.g., coding, design, hardware), the error severity (e.g., fatal, serious, minor), and the nature of the error as well as how to reproduce it. You should also date and summarize the problem in a separate column; the summary is the information your manager will most want to see. In addition, your SPR should include fields for tracking whatever follow-up activity takes place, the problem's status (open or closed), who resolved the problem, who retested the stored procedure, and the date each of those events occurred.
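A tracking table for SPR data might look like the following sketch; the columns mirror the fields described above, and the names are my assumptions:

```sql
-- Hypothetical SPR tracking table.
CREATE TABLE SPR (
    spr_id      int           NOT NULL IDENTITY PRIMARY KEY,
    test_id     int           NULL,                      -- originating test, if any
    error_type  varchar(20)   NOT NULL,                  -- coding, design, hardware
    severity    varchar(20)   NOT NULL,                  -- fatal, serious, minor
    description varchar(1000) NULL,                      -- nature of the error, how to reproduce it
    summary     varchar(200)  NULL,                      -- what your manager will read
    reported_on datetime      NOT NULL DEFAULT GETDATE(),
    status      varchar(10)   NOT NULL DEFAULT 'open',   -- open or closed
    resolved_by varchar(50)   NULL,
    resolved_on datetime      NULL,
    retested_by varchar(50)   NULL,
    retested_on datetime      NULL
)
```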
Step 5: Retest as Needed
Obviously, finding bugs is only half the story. Once you've uncovered an error, you must locate its source and rework the responsible code to remove the error. When retesting a bug fix, repeat the original test exactly as it was designed and previously executed. Variations in retest procedures can easily mask incomplete or improper fixes. They can also mislead you into thinking a valid fix has failed or created undesirable side effects. The first all-nighter you pull looking for phantom bugs will be the last time you question the need for consistency across the test-debug-retest cycle.
Procedure testing is an essential development practice used worldwide by quality-conscious DBAs and database developers. The testing framework I've outlined can dramatically improve the reliability of your database applications. Mastering unit-testing practices and skills will make you a better DBA and give an extra boost to your career as a SQL Server database professional.