Used to be, procedure testing was a last-minute sanity check before releasing a new procedure: "Does the procedure execute cleanly and complete its assigned work?" If you answered yes, you could release the procedure and move on. Few busy DBAs had time to worry about atypical usage scenarios such as illegal parameter values, missing data, or unavailable objects such as work tables. But this straightforward approach no longer works. We now know that stored procedures are just like other software: Before release, we must test and retest them until they meet established standards. And therein lies the rub—because unit testing isn't much fun. It requires laboriously entering test values, tracking down test results, trying to figure out whether the procedure did everything it was supposed to do—and nothing else.
When a task is no fun, the programmer in us naturally starts looking for ways to automate the boring stuff. Wouldn't it be great to cut the time it takes to unit test your T-SQL stored procedures? Well, now you can! You can automate your procedure testing by using nothing more than Query Analyzer and a little T-SQL code. The solution has four easy-to-implement parts that you can extend as needed: a table that stores your test cases and identifies the procedures you want to run; dynamically generated T-SQL for executing test cases against your procedures; a table that stores your test results; and SQL Server's sp_makewebtask system stored procedure for reporting your test results.
Unit Testing and Test Automation
To get started, let's quickly review what's involved in unit testing a stored procedure. At its simplest, testing exercises code for the purpose of finding errors. For nonparameterized stored procedures, "exercising the code" simply means submitting an execution request with the correct procedure name. For procedures that contain input parameters, it means including parameter values along with the procedure name; and for those that contain output parameters, it means declaring a local variable to receive the output values. Nothing complicated here, as far as the basic mechanics go.
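In Query Analyzer, those three cases look roughly like this (the procedure and parameter names here are placeholders for illustration, not objects from the article's listings):

```sql
-- Nonparameterized: just submit the procedure name
EXEC up_NoParams

-- Input parameters: pass values along with the name
EXEC up_WithInputs @customer_id = 42, @region = 'West'

-- Output parameter: declare a local variable to receive the value
DECLARE @result int
EXEC up_WithOutput @customer_id = 42, @result OUTPUT
SELECT @result "Output Value"
```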
What sometimes makes matters tricky is choosing the right test values and identifying how those values interact with the state of your database at the time you run your tests. I assume you already know which test values you want to use, so I won't delve into value-selection strategies such as equivalence partitioning (e.g., choosing input values that exercise valid and invalid resource conditions, such as inserting into an existing table, which is valid, versus trying to insert into a nonexistent table, which isn't) and boundary-value analysis (e.g., choosing input values that probe extreme conditions, such as months 1 and 12 when testing dates). By the "state" of your database, I mean its preconditions (i.e., its requirements or dependencies), such as the objects a procedure requires and whether the user has permission to access those objects.
When you know the state of your database, you can figure out the setup and execution mechanics you must simulate to automate your procedure tests. Running a test against a stored procedure requires that you know several values: the values of any input parameters passed to the procedure, the procedure's return value (to help you decide whether any faults have occurred), any parameter values you expect the procedure to output, and any values the procedure prints or sends to a table. When you run a test, you compute a pass or fail result by comparing expected outputs with actual outputs and checking a procedure's return value for fault conditions (in the form of custom error codes) such as a missing table or improper permissions.
So, the basics of test mechanics are simple. Procedure tests can also check the pre-test and post-test states of database objects (e.g., the insertion of records into a table), but I won't go into those kinds of tests here.
Recording Your Test Cases
So how do you automate the mechanics of executing a test case? In practice, a test case includes a procedure name and any associated input values, output parameters, or return values. Let's start by creating a table to store the names and parameters of some sample procedures for later testing, as the code in Listing 1 shows. To keep it simple, I've included both procedure names and parameter data in the same table, tabTestValues. That way, all test data are centrally located, so you can easily add new test cases and query the table for quality assurance reports. (If you require normalization, you might want to create separate tables for your procedures and parameter data.) Table 1 shows a description of each column in tabTestValues. If you want, you can easily extend tabTestValues to include additional information such as the test-case author; date and time stamps; the procedure's version number (for source control and build-management purposes); the procedure owner (for notification purposes, particularly when a test fails); and the application requirement or feature the procedure implements (to ensure traceability).
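Listing 1 isn't reproduced here, but a minimal version of tabTestValues might look like the following sketch. The column names and types are assumptions based on the description above, not the article's actual DDL:

```sql
CREATE TABLE tabTestValues (
   test_id        int          NOT NULL,  -- groups a procedure's rows into one test case
   proc_name      sysname      NOT NULL,  -- procedure under test
   param_name     sysname      NULL,      -- NULL for nonparameterized procedures
   param_value    varchar(255) NULL,      -- input value, stored as text
   param_type     varchar(30)  NULL,      -- T-SQL data type (int, money, varchar(30), ...)
   is_output      bit          NOT NULL DEFAULT 0,  -- 1 = output parameter
   expected_value varchar(255) NULL       -- expected output, for pass/fail comparison
)
```

Storing every value as text keeps the table generic; the test driver converts each value to its declared data type when it builds the test script.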
Now that we've got a place to store our tests, let's populate tabTestValues with some sample test cases. The first three INSERT statements in Listing 2 create a test case for a stored procedure named Sproc_1, which occupies three rows in tabTestValues—one row for each of its input parameters. (Listing 3 shows the creation code for the stored procedures that the tabTestValues table references.) Sproc_1 has no output parameter and only prints its input values.
The next four INSERT statements in Listing 2 create a test case for Sproc_2, a procedure containing three input parameters and an output parameter—total_cost—that's a money data type. Sproc_2's logic is a bit more complex. After receiving a product name, quantity, and price, Sproc_2 calculates the total order cost (quantity * price) and then uses the output parameter to pass it to the calling process (Query Analyzer in this case). The next two INSERT statements in Listing 2 create a test case for Sproc_3. The procedure requests a last name, then uses an output parameter to return a varchar(30) error message. This logic shows how an automated test driver might handle different data types. Finally, the up_lookupPrice procedure queries a table called Products (which we omit on purpose for this example to force a "missing resource" error) for a price, then returns the price to the calling program. Up_lookupPrice introduces additional realism by adding the run-time error checks and return codes commonly used in production-level stored procedures. The procedure also forces the test driver to deal with a common dependency violation in the missing Products table.
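For example, the rows for up_lookupPrice might be inserted as follows. This assumes a simple column layout of test_id, proc_name, param_name, param_value, param_type, is_output, and expected_value; Listing 2's actual column list may differ:

```sql
-- One row per parameter: @product_id goes in, @price comes back out
INSERT INTO tabTestValues
   (test_id, proc_name, param_name, param_value, param_type, is_output, expected_value)
VALUES (4, 'up_lookupPrice', '@product_id', '4', 'int', 0, NULL)

INSERT INTO tabTestValues
   (test_id, proc_name, param_name, param_value, param_type, is_output, expected_value)
VALUES (4, 'up_lookupPrice', '@price', NULL, 'money', 1, NULL)
```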
Automated Testing with Up_AutoTest
Now that we've loaded tabTestValues with test cases, let's take a closer look at the test driver, the up_AutoTest stored procedure that Listing 4 shows. Up_AutoTest is built for the realities of real-world test environments: it's self-contained (in this case, running directly from Query Analyzer), tests any number of procedures in a single run, and runs any number of tests on the same procedure. In addition, the stored procedure handles all commonly used kinds of stored procedures (nonparameterized procedures, input-only procedures, and procedures with output parameters), supports any number of input parameters (up to the legal T-SQL limit), and supports all standard T-SQL data types (e.g., int, money, char). Up_AutoTest provides a standard format for all tests, captures return values as well as output values when available, lets procedure testers and developers work independently of one another as needed, and is easy to use, maintain, and extend.
To understand the way up_AutoTest works, consider how you'd use Query Analyzer to manually test the up_lookupPrice stored procedure that Listing 3 shows. First, you'd declare a local variable—@price—to receive up_lookupPrice's output value. Second, to check for potential error conditions, you'd declare another local variable—@return_value—to receive up_lookupPrice's return value. Third, you'd select a test value—such as 4—for up_lookupPrice's input parameter, @product_id. Finally, you'd include an execution statement, followed by SELECT queries to display @price and @return_value. The following code shows this test process:
DECLARE @price money
DECLARE @return_value int

EXEC @return_value = up_lookupPrice 4, @price OUTPUT
SELECT @price "Unit Price"
SELECT @return_value "Return Value"
Because the required Products table doesn't exist, executing up_lookupPrice results in a null Unit Price and a return value of 10 (the custom code that signifies a missing table in Listing 3). Because of this fault, you'd conclude that up_lookupPrice failed this particular test.
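Listing 3's version of up_lookupPrice will differ in detail, but the dependency check that produces return code 10 might look like this sketch (the Products column names are assumptions):

```sql
CREATE PROCEDURE up_lookupPrice
   @product_id int,
   @price      money OUTPUT
AS
-- Check the precondition before querying: does the required table exist?
IF OBJECT_ID('Products') IS NULL
   RETURN 10      -- custom error code: dependency (Products table) not found

SELECT @price = unit_price    -- column names assumed for illustration
FROM Products
WHERE product_id = @product_id

RETURN 0          -- error-free execution
```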
The question we need to answer is how to automate these manual steps to test up_lookupPrice. TabTestValues already contains all the necessary information: the procedure name, the names and test values of all input parameters, and the names and expected values for all output parameters. So, to automate our testing steps, we simply write T-SQL code that combines this information into test scripts like the one we just created manually, then executes those scripts. The T-SQL code will then compare actual with expected outcomes, capture any custom error codes, determine the test results and record them for later use, and produce a Web report that will display the test results.
Creating the T-SQL code is the meat of the process, but the ability to execute stored procedures (or other T-SQL code) on the fly is what actually makes automated testing possible. By deferring query construction until run time, test designers can store procedure names and test details in a metadata table (such as tabTestValues) without worrying when, by whom, or how often those tests will eventually be run.
To illustrate the automatic test process, let's walk through the sample stored procedure up_AutoTest, which Listing 4 shows, as it dynamically tests up_lookupPrice. First, up_AutoTest identifies up_lookupPrice as the procedure under test by selecting its proc_name and any associated parameters inserted into tabTestValues. Next, up_AutoTest uses a cursor to dynamically build a test script, concatenating the proper execution syntax (the procedure name along with any parameters and placeholder variables needed to receive return values or parameter outputs) into a local variable, @query. Once the test script is complete, up_AutoTest executes @query.
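Stripped of the cursor bookkeeping, the heart of that construction step reduces to string concatenation followed by a deferred EXEC. The variable names and the tabTestResults column list below are illustrative, not Listing 4's actual code:

```sql
DECLARE @query varchar(2000), @proc sysname, @params varchar(1000)

SELECT @proc   = 'up_lookupPrice',
       @params = '4, @out_1 OUTPUT'   -- assembled from the tabTestValues rows

-- Build a script that declares placeholders, runs the test, and saves the outcome
SELECT @query =
   'DECLARE @rc int, @out_1 money ' +
   'EXEC @rc = ' + @proc + ' ' + @params + ' ' +
   'INSERT INTO tabTestResults (proc_name, return_value, actual_output) ' +
   'SELECT ''' + @proc + ''', @rc, CONVERT(varchar(255), @out_1)'

EXEC (@query)   -- deferred execution: the test actually runs only now
```

Because the script is assembled and executed as a string, nothing about the procedure under test needs to be known until run time, which is what lets one driver test arbitrary procedures.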
As I designed it, up_AutoTest runs any number of sequential unit tests. Should you need to execute individual procedures, you can revise up_AutoTest to include an input parameter referencing the procedure name. Figure 1 lists the steps in up_AutoTest's logic.
Reporting Test Results
And that's it—well, almost. Our requirements also say we must report our test results. To produce reports, we need another table, tabTestResults, which the code in Listing 5 creates. Now, after executing all the tests, the up_AutoTest stored procedure queries tabTestResults for any return values, parameter outputs, and expected test outcomes. Then, the procedure evaluates the three values and defines a test result. (You might choose to define test results differently than I did.) Finally, the procedure sets the test_result column to Pass or Fail and updates tabTestResults to include the test results.
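That evaluation step can be as simple as a single set-based UPDATE. The column names here are assumptions for illustration; Listing 5 defines the real ones:

```sql
-- A test passes only if the return code is clean AND the output matches
UPDATE tabTestResults
SET test_result = CASE
      WHEN return_value = 0
           AND (actual_output = expected_output
                OR (actual_output IS NULL AND expected_output IS NULL))
      THEN 'Pass'
      ELSE 'Fail'   -- nonzero return code or an output mismatch
   END
```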
To keep everyone on top of your team's testing work, up_AutoTest also includes automated Web reporting. Just specify a valid file path in the final block of Listing 4's code, and up_AutoTest queries tabTestResults for the necessary data, then executes the system procedure sp_makewebtask (the brains behind SQL Server's Web Assistant), producing a Web report of your test results. In case you need them, sp_makewebtask offers parameters for customizing Web reports, including job-scheduling options through SQL Server's Agent service.
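A bare-bones sp_makewebtask call might look like the following; the output path and the tabTestResults column names are placeholders, and the real path must be writable by the SQL Server service account:

```sql
-- Generate a static HTML report of the latest test run
EXEC sp_makewebtask
   @outputfile   = 'C:\TestReports\TestResults.html',
   @query        = 'SELECT proc_name, return_value, actual_output, test_result
                    FROM tabTestResults',
   @webpagetitle = 'Stored Procedure Test Results',
   @resultstitle = 'Latest Run'
```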
When you run up_AutoTest, you'll see from the Test Results Web report that the up_lookupPrice procedure failed the test we designed, returning the same custom error code (10) as before—and thereby alerting us to the missing Products table. The other three procedures all passed their tests, returning a 0 (error-free) execution code to the calling process. Sproc_1 contained no output parameters (hence the None flag in the Actual Output Value column of Test Results), whereas Sproc_2 and Sproc_3 both passed back expected output values.
When using automated testing, keep in mind that test effectiveness depends as much on the test cases you design as the test procedures you follow. Do your test cases force the expected behavior (e.g., triggering an error, updating a column) from your procedure? If not, you'll need to reconsider the input values you're using or the pre-test state of your database (e.g., column values, access permissions). Are you getting back the data (return values, output parameters, table changes) you expect? As yet, up_AutoTest doesn't automatically check the correctness of the SELECT statements' result sets or the table changes that INSERT, UPDATE, or DELETE operations produce. If these checks are important to your testing goals, you can perform them manually or extend up_AutoTest to do the work for you.
Take It for a Spin!
Now, you can automate your stored procedure unit tests. The framework—and associated test driver, up_AutoTest—that we use in this article's examples supports many recurring test scenarios. By using up_AutoTest, you can save time when you're unit testing a new procedure, retesting a procedure you (or someone else) previously wrote, debugging procedures and running regression tests to check for unwanted side effects, or unit testing procedures across an entire database or application. Are you ready to see for yourself how useful automated testing can be? Just copy and paste this article's listings into Query Analyzer and give up_AutoTest a test drive. You—and the test specialists on your team—will be glad you did.