The Investigative Tools

Whether you're a developer or a DBA, you need to become good friends with SQL Trace in SQL Server 6.5 or SQL Profiler in SQL Server 7.0. These tools show you the information the application sends to SQL Server, such as the SQL statements and data-access layer calls it issues, the duration of each statement, CPU time consumed on the SQL Server machine, and user and host-machine identification. When I'm investigating a performance problem, I first do a quick sanity check of the Windows NT and SQL Server configuration settings. I look at the basic hardware configuration (CPU, memory, and disk subsystem); other applications running on the machine; SQL Server configuration options such as memory, locks, and user connections; and other settings that can consume resources. Then I use SQL Trace or SQL Profiler to run a trace. The trace output helps me pinpoint problems: look for resource-intensive statements that require excessive execution time on the server, statements that execute an inordinately high number of times, and other red flags such as excessive creation of temporary stored procedures in SQL Server 6.5 or excessive calls to sp_prepare and sp_execute in SQL Server 7.0.
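
If you're running SQL Server 7.0, one convenient approach is to save the Profiler trace to a table and then query it for the heavy hitters. The following is only a minimal sketch, assuming a trace saved to a hypothetical table named TraceResults with the usual TextData, Duration, CPU, and Reads columns; adjust the table and column names to match your own trace definition.

   -- Minimal sketch, assuming the trace was saved to a table named TraceResults.
   -- TextData is an ntext column, so convert it before grouping.
   SELECT TOP 25
          CONVERT(nvarchar(400), TextData) AS StatementText,
          COUNT(*)      AS Executions,     -- how many times the statement ran
          SUM(Duration) AS TotalDuration,  -- total elapsed time, in milliseconds
          AVG(Duration) AS AvgDuration,
          SUM(CPU)      AS TotalCPU,
          SUM(Reads)    AS TotalReads
   FROM   TraceResults
   GROUP BY CONVERT(nvarchar(400), TextData)
   ORDER BY SUM(Duration) DESC             -- most expensive statements first

Sorting the same query by Executions or TotalReads instead surfaces the statements that run an inordinate number of times or do the most I/O.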

Analyzing trace output is important because data-access components such as ADO add a layer of complication that can degrade performance. What looks to the developer like a simple SELECT statement sent from a Visual Basic (VB) application can become a resource drain when it comes out of the OLE DB provider or ODBC driver. I recommend that developers use SQL Trace or SQL Profiler when they unit-test their code. As an example, when a development team recently asked me to investigate excessive CPU use, I had the team run a SQL Trace. When I received the trace output, I noticed that a highly selective stored procedure that should have returned only one row had an unusually high average duration for the several thousand times it executed. Following that lead, I found that a rebuild of the clustered primary key on a table had failed the night before, so SQL Server had no index to use. The resulting table scan on every execution of the stored procedure caused excessive I/O, which in turn drove up CPU usage. The moral of this story is to use all the tools available and try to correlate your observations. And check the application before you spend big bucks on hardware.
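
In a case like that, a quick check of the system tables confirms whether the index is really missing before you start shopping for hardware. This is only a sketch against a hypothetical Orders table: in SQL Server 6.5 and 7.0, sysindexes contains a row with indid = 1 for a table's clustered index and a row with indid = 0 when the table is stored as a heap.

   -- Sketch: check whether a hypothetical Orders table still has its clustered index.
   -- indid = 0 means the table is a heap (no clustered index); indid = 1 is the clustered index.
   SELECT name, indid
   FROM   sysindexes
   WHERE  id = OBJECT_ID('Orders')
     AND  indid IN (0, 1)

   -- sp_helpindex lists all the indexes on the table, which is often all you need:
   EXEC sp_helpindex 'Orders'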
