Database security used to be a simple matter of access control. Break your users into task groups, then assign each group the permissions its members require. How times have changed! No SQL Server professional today would ever think of releasing a database without first checking it for security vulnerabilities such as weak passwords or porous firewalls. But how do you know whether you've verified every essential security setting? Forget to lock down your installation folders or disable your server's Guest account, and just like that, you're toast!
The answer is to develop an organized security-testing plan that checks various configuration settings that your company requires and generates reports that log the test results. I'll explain the elements of a security-testing scheme, then provide T-SQL code examples that you can use to automate various parts of your configuration-testing and test-reporting process. Let's get started.
Configuration checks are the nuts and bolts of database-security testing and are an essential part of protecting your databases against outside intruders or tampering from within your organization. A configuration check essentially involves verifying that a particular security option—such as enabling a firewall or a specific port configuration or deleting nonessential network libraries—has been correctly set. Configuration checking also includes backup and file-protection policies, authentication modes, Guest accounts, and Public role permissions. Most DBAs do security checks manually, but by scripting the steps involved, you can easily automate many routine security-configuration checks.
For example, to find out whether your database server is running the most up-to-date service pack, you could run the T-SQL code that Listing 1 shows (substitute your own version number). (Some code in the listings wraps to multiple lines because of space constraints.) Or, to determine whether an unnecessary service has been disabled, you could run the code in Listing 2, which checks whether the distributed transaction coordinator (DTC) service (i.e., MSDTC) is running.
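Listings 1 and 2 aren't reproduced here, but checks along those lines can be sketched as follows. The version string 8.00.2039 (SQL Server 2000 SP4) is just an example baseline; use the one your organization requires.

```sql
-- Sketch of a service-pack check in the spirit of Listing 1.
-- 8.00.2039 is an illustrative baseline; substitute your own.
IF CAST(SERVERPROPERTY('ProductVersion') AS varchar(20)) >= '8.00.2039'
    PRINT 'Pass: service pack meets the baseline'
ELSE
    PRINT 'Fail: service pack is out of date'

-- Sketch of a service check in the spirit of Listing 2:
-- xp_servicecontrol reports whether a named service is running.
EXEC master.dbo.xp_servicecontrol 'QUERYSTATE', 'MSDTC'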
Although scripts that automate security tests are seldom more complicated than the previous code examples, the examples aren't fully automated tests. A fully automated security test includes these elements:
- a clearly defined test plan for each security check, which includes an expected outcome (e.g., any SQL Server version equal to or greater than 8.00.2039), so that you can determine whether a configuration option is correctly set, and preconditions that the test requires (e.g., the server is running, the database supports all the settings being tested, the test has the necessary permissions on the test script and test targets)
- test reports documenting the results of each check
- a separate bug report that's produced when a configuration setting violates a security policy
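The preconditions element above can itself be scripted. A minimal sketch, assuming a sysadmin test login and a Northwind test target (both names are illustrative), might verify the environment before any checks run:

```sql
-- Precondition sketch: bail out unless the test login has the
-- needed rights and the target database exists.
IF IS_SRVROLEMEMBER('sysadmin') <> 1
    RAISERROR('Precondition failed: test login lacks sysadmin rights.', 16, 1)
ELSE IF DB_ID('Northwind') IS NULL
    RAISERROR('Precondition failed: target database not found.', 16, 1)
ELSE
    PRINT 'Preconditions met; proceeding with security checks.'
```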
Don't waste time testing a configuration setting if your organization has no security requirement or policy regarding that option. Conversely, if you believe a particular security setting must be tested even though it's undocumented, log a bug against your company's security plan or business rules (or both) right away. If the setting is an essential security checkpoint, inform your business analyst or security planner so that he or she can revise your company's security policy accordingly.
Logging Test Reports
As part of your configuration-testing procedure, you should log a test report whenever you check a security setting. At a minimum, every test report should include a unique ID, the date the security check occurred, the tester's name, the test's expected and actual outcomes, and the test result (pass or fail). (I'll discuss how to determine an expected outcome in the next section.)
To generate a test report, you can create a table that stores the results of a configuration test, by using T-SQL code such as that in Listing 3. The two test reports that Table 1 shows contain sample test results similar to the results you'd see after running the previous configuration-testing scripts. (Later, I'll provide specific coding techniques for moving the test results into a table as well as automating other parts of the testing process that I discuss here.)
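Listing 3 isn't shown here, but a plausible shape for the report table, inferred from the columns named above (unique ID, date, tester, expected and actual outcomes, result), might look like this; the original listing is authoritative:

```sql
-- Hypothetical shape for the test-report table described above.
CREATE TABLE tabConfigSettings (
    TestID          int IDENTITY(1,1) PRIMARY KEY,
    ConfigItem      varchar(100) NOT NULL,  -- setting under test
    ExpectedSetting varchar(100) NOT NULL,  -- benchmark value
    ActualSetting   varchar(100) NULL,      -- filled in when the test runs
    TestResult      varchar(10)  NULL,      -- 'Pass' or 'Fail'
    TestDate        datetime DEFAULT GETDATE(),
    TesterName      sysname  DEFAULT SUSER_SNAME()
)
```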
Logging test results saves time, promotes consistency, and reduces test duplication. Developers can use these results to reproduce apparent bugs. Testers can use them to verify any bug fixes that are made. Logging your results also discourages unplanned and undocumented tests. As I'll show you shortly, you can also automate test reporting just as you can automate the security checks.
You should also log a bug report whenever you discover a security violation or vulnerability. The bug report should contain the same information as the test report, along with a bug identifier, the date the report was logged, a description of the security violation or vulnerability, and a status flag (e.g., Testing, Debugging, Awaiting Clarification, Fixed) that indicates where the bug stands in your development workflow. Some companies also require the identifier of the parent requirement or security policy.
To log bug reports, you should first create another table for the bug-report data, by using a statement similar to that in Listing 4. To see what a bug report might look like, let's consider another example: A security check detects a server running under the LocalSystem account (a definite "no-no" in almost any production environment), as the code in Listing 5 does. The bug report for this security check might look like the one that Table 2 shows. Notice how the Current Status column has been set to Debugging to signify that testing has stopped until the problem is fixed. Once the bug has been fixed, the administrator or developer resets the status either to Testing (i.e., a defective setting—LocalSystem in this case—was found and fixed) or to Closed (i.e., the issue is otherwise resolved).
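A sketch of what Listings 4 and 5 might contain follows; table and column names are assumptions based on the descriptions above, and the registry-read technique is one common SQL Server 2000 way to discover the service account:

```sql
-- Hypothetical shape for the bug-report table (Listing 4 is authoritative).
CREATE TABLE tabBugReports (
    BugID         int IDENTITY(1,1) PRIMARY KEY,
    TestID        int NOT NULL,          -- links back to the test report
    DateLogged    datetime DEFAULT GETDATE(),
    Description   varchar(400) NULL,     -- the violation or vulnerability
    CurrentStatus varchar(30) DEFAULT 'Debugging'
)

-- Sketch of a LocalSystem check in the spirit of Listing 5: read the
-- service account from the registry and flag LocalSystem.
DECLARE @acct varchar(100)
EXEC master.dbo.xp_regread
    @rootkey    = 'HKEY_LOCAL_MACHINE',
    @key        = 'SYSTEM\CurrentControlSet\Services\MSSQLSERVER',
    @value_name = 'ObjectName',
    @value      = @acct OUTPUT
IF @acct = 'LocalSystem'
    PRINT 'Fail: server is running under LocalSystem'
```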
Choose a Test Benchmark
To determine a test's expected outcome, you need a benchmark (also called an oracle). A security configuration test can include various benchmarks—for example, business rules or requirements such as "SQL Server must be installed on a standalone server," design rules such as "Every input procedure must contain a data-validation routine," vendor specifications such as Microsoft's Security Best Practices (SBP) Checklist, and industry best practices such as the US Department of Defense's C2 security standards.
Here I use Microsoft's SBP Checklist as a benchmark for our security checks. If your organization doesn't yet have its own database-security plan or security best practices, I recommend you use the SBP Checklist as a guide for developing them. You can always add new security policies or revise existing ones as your needs change. Table 3 contains an abbreviated list of SBPs in the checklist.
More Automation Techniques
I've shown you sample code that automates discrete areas of security testing: performing a configuration test and creating tables to contain test and bug reports. You can write code to automate other test activities, such as the reporting and bug-logging steps of the process.
How might you automate test-report creation? Earlier, you saw sample code that creates a report table, tabConfigSettings. Now you need code that moves the test results into tabConfigSettings. To do so, you can use a simple INSERT statement like this, which places the results of the service-pack test done earlier into tabConfigSettings:
INSERT tabConfigSettings (ConfigItem, ExpectedSetting) VALUES ('Version#', '8.00.2039')
(The Version# value you use depends on your particular configuration.)
Even better, instead of creating separate INSERT statements to move each set of test results into the table, you can use a stored procedure, such as the one that Listing 6 shows, to automate this task for all the test results.
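Listing 6 isn't reproduced here, but a minimal version of such a procedure, assuming the table and column names used earlier, could look like this:

```sql
-- Hypothetical version of the Listing 6 procedure: one parameterized
-- INSERT replaces a separate statement per test case.
CREATE PROCEDURE upInsertTestCase
    @ConfigItem      varchar(100),
    @ExpectedSetting varchar(100)
AS
    INSERT tabConfigSettings (ConfigItem, ExpectedSetting)
    VALUES (@ConfigItem, @ExpectedSetting)
GO

-- Usage: register two of the test cases discussed earlier.
EXEC upInsertTestCase 'Version#', '8.00.2039'
EXEC upInsertTestCase 'MSDTC',    'Stopped'
```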
Next, you'll need a way to automatically log test results. To do so, you can use an UPDATE statement, similar to the one that Web Listing 1 shows. Since every security setting repeats the same basic logic as this UPDATE statement, you can simply wrap the statement inside another stored procedure, as the upUpdateTestCase procedure in Web Listing 2 shows. upUpdateTestCase takes three input values: a configuration item, a test result, and the item's actual setting.
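Web Listing 2 is the authoritative version, but given the three input values described above, the procedure might be sketched like this:

```sql
-- Hypothetical sketch of upUpdateTestCase: record a test's actual
-- setting and its pass/fail result against the matching test case.
CREATE PROCEDURE upUpdateTestCase
    @ConfigItem    varchar(100),
    @TestResult    varchar(10),    -- 'Pass' or 'Fail'
    @ActualSetting varchar(100)
AS
    UPDATE tabConfigSettings
    SET    ActualSetting = @ActualSetting,
           TestResult    = @TestResult,
           TestDate      = GETDATE()
    WHERE  ConfigItem = @ConfigItem
```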
You can automate bug reports by using coding techniques similar to those I just explained. The basic method for doing so is to use an INSERT statement like that in Web Listing 3, which assigns each bug report a unique TestID and specifies the configuration setting being tested and other related information. Alternatively, you could write a separate procedure that inserts a report after a test is run, depending, of course, on the test result—since bugs are logged only when a configuration setting fails the test. But what if someone wants to log a bug report manually or by using a different testing tool? To accommodate such cases, you can simply create a trigger on the tabConfigSettings table and let it automatically log bug reports. As the code in Web Listing 4 shows, trigger trgLogBug is fired whenever the tabConfigSettings table is updated. trgLogBug first checks whether the testresult column (found in the inserted pseudo-table that SQL Server populates for the trigger) is set to Fail. If it isn't (i.e., the test "passed"), the trigger aborts and no bug report is logged. If testresult is set to Fail, trgLogBug inserts a new bug report in the tabBugReports table, taking its values from the updated tabConfigSettings table.
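The behavior just described can be sketched as follows (Web Listing 4 is authoritative; the bug-table columns are assumptions):

```sql
-- Sketch of the trgLogBug trigger: whenever test results are updated,
-- file a bug report for each row whose result is 'Fail'.
CREATE TRIGGER trgLogBug ON tabConfigSettings
AFTER UPDATE
AS
    INSERT tabBugReports (TestID, DateLogged, Description, CurrentStatus)
    SELECT i.TestID,
           GETDATE(),
           'Setting ' + i.ConfigItem + ' expected ' + i.ExpectedSetting
               + ' but found ' + ISNULL(i.ActualSetting, '(null)'),
           'Debugging'
    FROM   inserted i
    WHERE  i.TestResult = 'Fail'   -- passing rows log nothing
```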
Putting It All Together
You now have the basic pieces for an automated security test-and-report system. There's only one problem, though. Although the individual steps are automated, the process itself still requires manual intervention. You can fix that by writing code that fires each step automatically. To do so, first fold each step into another stored procedure, as Listing 7 shows. Procedure upVerifySecurityConfiguration inserts test cases into tabConfigSettings, then verifies that each security setting is correct. Notice that upVerifySecurityConfiguration contains our previous two subprocedures, upInsertTestCase and upUpdateTestCase.
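A skeleton of how such a driver procedure might chain the steps together, using only the service-pack test as an example (Listing 7 is the authoritative version):

```sql
-- Hypothetical skeleton of upVerifySecurityConfiguration: seed a test
-- case, run the check, then record the outcome. The trgLogBug trigger
-- files a bug automatically whenever a result is logged as 'Fail'.
CREATE PROCEDURE upVerifySecurityConfiguration
AS
    -- Step 1: register the test case and its expected setting.
    EXEC upInsertTestCase 'Version#', '8.00.2039'

    -- Step 2: run the check and capture the actual setting.
    DECLARE @actual varchar(100)
    SET @actual = CAST(SERVERPROPERTY('ProductVersion') AS varchar(100))

    -- Step 3: log the result.
    IF @actual >= '8.00.2039'
        EXEC upUpdateTestCase 'Version#', 'Pass', @actual
    ELSE
        EXEC upUpdateTestCase 'Version#', 'Fail', @actual
```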
To extend upVerifySecurityConfiguration, you simply include additional test cases with their corresponding automation scripts. Web Table 1 contains additional automated security verification scripts you can use. As with our initial three test cases, each of these scripts targets a security setting from Microsoft's SBP Checklist. Web Table 1 also shows each test's goal and the expected setting for SBP-compliant servers and databases. Simply include the scripts you need in your automated security-checking process.
Your final step is to write code that automates the test-reporting and bug-logging processes. The tabConfigSettings table and the trgLogBug trigger provide the necessary reporting infrastructure. To keep everyone up to date on your latest test results and security bugs, you can simply publish those tables by using SQL Server's sp_makewebtask Web reporting tool, as the code in Web Listing 5 shows.
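Web Listing 5 isn't shown here, but a call along these lines would publish the results table as a static HTML page (the output path and query are illustrative):

```sql
-- Sketch of publishing test results with sp_makewebtask; the share
-- path is an example, and Web Listing 5 remains authoritative.
EXEC sp_makewebtask
    @outputfile = '\\reportserver\reports\SecurityTests.htm',
    @query      = 'SELECT ConfigItem, ExpectedSetting, ActualSetting,
                          TestResult, TestDate
                   FROM   tabConfigSettings'
```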
Web Table 2 shows what the tabConfigSettings table might look like after you've run all 15 configuration tests against SQL Server's Northwind database (using the defaults on a standard SQL Server 2000 install on a Toshiba laptop running Windows XP). As you can see, nine of the 15 tests uncovered security settings that violate Microsoft's own SBPs.
You'll want to customize the code examples and automation framework I've provided to include the specific security checks you need to run. Develop a set of scriptable checks that meet your security requirements and evaluate your checkpoints against your organization's security plan. Do you have at least one test for each security policy included in the plan? If not, you'll need to design one. And what about the policies themselves? Does your organization's security plan cover every relevant item on Microsoft's SBP Checklist? If not, find out why.
Finally, if the automation framework makes sense to you as an end-of-cycle development-testing tool, consider turning it over to your support team so that they can adapt it to your production servers. An administrator could, for example, schedule a SQL Server Agent job that automatically runs your configuration checks at scheduled intervals during the week. Another option is to run your test tool whenever an unsuccessful login attempt (or other auditable security event) occurs. Or you might simply set a SQL Server alert to fire whenever a login attempt fails, launching the upVerifySecurityConfiguration procedure in the process to verify that your production servers are still running "safe."
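As one illustration of the scheduling option, a SQL Server Agent job could be created roughly as follows; the job name, schedule, and start time are all assumptions:

```sql
-- Illustrative Agent setup: a nightly job that runs the checks.
EXEC msdb.dbo.sp_add_job @job_name = 'Security Configuration Check'
EXEC msdb.dbo.sp_add_jobstep
    @job_name  = 'Security Configuration Check',
    @step_name = 'Run checks',
    @subsystem = 'TSQL',
    @command   = 'EXEC upVerifySecurityConfiguration'
EXEC msdb.dbo.sp_add_jobschedule
    @job_name  = 'Security Configuration Check',
    @name      = 'Nightly',
    @freq_type = 4,                 -- daily
    @freq_interval = 1,
    @active_start_time = 020000     -- 2:00 A.M.
EXEC msdb.dbo.sp_add_jobserver @job_name = 'Security Configuration Check'
```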
Remember to document the practices and procedures you use to design security checks and process test results. As you've seen, configuration checking requires no special tools or expertise beyond a good SQL editor and a basic understanding of database administration and common security checkpoints. But configuration checking is a poor substitute for professional security testing unless you have a test plan (including suitable test cases and reliable test benchmarks), documented test results, and shared bug reports. Automating the entire process is the proverbial icing on the cake, since it not only saves time but also assures you that your database (and server) meet or exceed a baseline security level.