
Questions, Answers, and Tips About SQL Server - 01 Jul 1998

My server has 1.5GB of physical memory, and I've assigned 1.3GB (650,000 2KB pages in sp_configure memory) to SQL Server. I've allocated 100MB of RAM to tempdb. Does that 100MB come from the memory I've already allocated to SQL Server?

Contrary to what many people think, space assigned to tempdb in RAM does not come from the memory you've assigned to SQL Server. If you forget this fact, you can easily allocate more physical memory to the combination of SQL Server and tempdb than your server actually has. The result is paging and disk thrashing, which is exactly what putting tempdb in RAM is supposed to avoid in the first place.
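To put numbers on it, here's a minimal sketch of the relevant configuration calls, assuming the SQL Server 6.5 option names 'memory' and 'tempdb in ram (MB)'; both settings take effect only after you restart SQL Server:

EXEC sp_configure 'memory', 650000
-- 650,000 2KB pages, roughly 1.3GB, for SQL Server's caches
EXEC sp_configure 'tempdb in ram (MB)', 100
-- 100MB of additional physical memory for tempdb, on top of the 1.3GB above
RECONFIGURE WITH OVERRIDE
-- 1.3GB + 100MB = 1.4GB, which leaves only about 100MB of the 1.5GB for NT itself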

I read in SQL Server Books Online (BOL) that I can dump a database from SQL Server 6.5 and load it into SQL Server 6.0, but I can't get the procedure to work. Do you know why?

A note included with SQL Server 6.0 Service Pack 3 (SP3) explains that Microsoft intended for you to be able to load a SQL Server 6.5 database dump into SQL Server 6.0 if you had installed SP3. However, Microsoft later determined that this procedure wasn't technically feasible because of changes to internal structures in SQL Server 6.5 that aren't compatible with version 6.0. As a result, if you attempt to load a SQL Server 6.5 database dump into SQL Server 6.0 with SP3 applied, SQL Server 6.0 rejects the load and raises an error saying that the dump is from an incorrect version.

I heard some people talking about using wildcards with the LIKE clause, and they mentioned using escape. However, when I tried using the Esc key, the technique just cancelled whatever I was doing. What did I miss?

The LIKE clause lets you search char, text, and datetime data for specific patterns. Table 1, page 202, displays the standard documented wildcard characters that you can use in your searches.

Escape doesn't refer to the Esc key, but to the ESCAPE option that you can use with the LIKE clause. This option lets you include any of the four wildcard characters as literal characters in the known portion of the string you're searching for. Look at the two examples in Figure 1, page 202. In the first example, the SELECT statement outputs any string that contains the character sequence under. In the second example, the ESCAPE option designates a tilde (~) as the escape character. When the search comes across ~ in the pattern string, it treats the next character (an underscore in the example) literally, that is, as a regular character to match rather than as a wildcard character in the pattern.
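Figure 1 isn't reproduced here, so here's a minimal sketch of the same idea against a hypothetical docs table with a title column:

SELECT title FROM docs WHERE title LIKE '%under_%'
-- the underscore is a wildcard here, so this matches "under" followed by any single character

SELECT title FROM docs WHERE title LIKE '%under~_%' ESCAPE '~'
-- the tilde escapes the underscore, so only rows containing a literal underscore after "under" match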

Can I easily run a certain stored procedure when my SQL Server starts?

Table 2, page 202, shows three special system procedures you can use to hone your startup routine. Sometimes, however, you make a procedure (e.g., SHUTDOWN) into an autostart procedure and live to regret it. If you want to correct such a mistake, you can disable execution of all autostart procedures by starting the server with trace flag 4022. You can set trace flags from the Registry or from the command line. Trace flags are unsupported features of SQL Server, so use them with care. For detailed information about trace flags, see "Using Trace Flags" in SQL Server Books Online (BOL).
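For the record, the three procedures in SQL Server 6.5 (presumably the ones Table 2 lists) are sp_makestartup, sp_unmakestartup, and sp_helpstartup. A quick sketch of typical usage, with a hypothetical procedure name:

EXEC sp_makestartup 'sp_my_startup_proc'
-- mark an existing stored procedure to execute automatically at startup
EXEC sp_helpstartup
-- list every procedure currently marked as an autostart procedure
EXEC sp_unmakestartup 'sp_my_startup_proc'
-- remove the autostart flag again
-- To skip all autostart procedures, start the server from the command line with:
-- sqlservr -T4022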

What are clustered indexes, how are they different from nonclustered indexes, and how do they relate to primary keys?

Most people focus on the retrieval aspect of clustered indexes, but clustered indexes also play an important role in space management within large tables. Both of SQL Server's basic indexes--nonclustered and clustered--use a standard B-tree (balanced tree) data structure. However, in a clustered index, the leaf level contains the data pages themselves. The result is that a clustered index maintains the data in physical order. Because you can physically order data in only one way, a table can have only one clustered index. As you might expect, maintaining the rows in physical order has some overhead, but clustered indexes make life a lot easier on the SQL Server optimizer.

You can use a clustered index in almost every large table, although which columns to include in your clustered index is not always obvious. When a table doesn't have a clustered index, SQL Server adds every inserted row to the last page of the table. And SQL Server performs many UPDATEs as a DELETE followed by an INSERT, which means that an UPDATE of 100 rows on a table without a clustered index might delete those rows from their current locations and then reinsert all of them at the end of the table.

Although indexes and primary keys don't have a direct relationship, SQL Server will build an index to enforce any primary key that you create with the ANSI constraint syntax. By default, that index is clustered, but you can have SQL Server create a nonclustered index instead. On columns that you define with a UNIQUE constraint, SQL Server builds a nonclustered index by default, and again you can override the default. Remember that a primary key is a constraint that enforces uniqueness; the index is simply the mechanism SQL Server uses to enforce it, so your one clustered index doesn't have to sit on the primary key column. In short, always place a clustered index on tables that you frequently subject to INSERT, UPDATE, and DELETE commands, or your tables will end up looking like Swiss cheese.
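Here's a minimal sketch of both options on a hypothetical orders table: the primary key is enforced with a nonclustered index, which frees the table's single clustered index for a column that better fits its insert and range-query patterns:

CREATE TABLE orders
(
   order_id int NOT NULL CONSTRAINT pk_orders PRIMARY KEY NONCLUSTERED,
   cust_id int NOT NULL,
   order_date datetime NOT NULL
)

CREATE CLUSTERED INDEX cl_orders_date ON orders (order_date)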

Is a 300MHz Pentium server with 64MB of RAM powerful enough to run Windows NT, Microsoft Exchange Server, and SQL Server? Or do I need to use a dual Pentium server?

Every system has a hardware bottleneck, because a bottleneck is the slowest part of any system. Increasing CPU resources won't have much effect if you have a serious memory bottleneck, and increasing memory won't help much if the CPU is your bottleneck. You can run SQL Server and Exchange on the same machine with less than 128MB of memory, but if you expect to do a reasonable amount of work, you're wasting your time running the applications on a box with 64MB. Memory is inexpensive these days. So do yourself a favor and bump up your server's memory to at least 128MB if you plan to run production SQL Server and Exchange instances on the same computer. However, 128MB might not be enough memory if you stress your SQL Server and Exchange system. Evaluate your need for a second processor after you've upgraded your memory.

If I run Database Consistency Checker (DBCC) dbreindex repeatedly, index space utilization increases dramatically. How can I avoid this problem?

We found a tip about this problem in a Microsoft newsgroup. DBCC dbreindex rebuilds an index in place, and it can be a handy tool for recompacting space utilization within a table. However, a bug causes index sizes to grow dramatically in some cases when you run DBCC dbreindex repeatedly. Service Pack 4 (SP4) fixes this problem.
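For reference, a typical call looks like this sketch, assuming the form DBCC DBREINDEX ('table', index_name, fillfactor), where an empty index name rebuilds every index on the table:

DBCC DBREINDEX ('pubs..authors', '', 90)
-- rebuild all indexes on pubs..authors with a 90 percent fill factor

Just don't schedule it to run over and over against a pre-SP4 server.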

We've invested a lot of time and money developing stored procedures for a new application, and we don't want our customers to see our proprietary business logic. Can you recommend a way for us to hide our stored procedure code so that people can't look at it in syscomments or create scripts using SQL Server Enterprise Manager (EM)?

SQL Server will encrypt your stored procedure's text within the server if you use the WITH ENCRYPTION clause when you create the procedure, as in this skeleton (the one-line body is just a placeholder):

CREATE PROCEDURE spencrypted WITH ENCRYPTION AS
SELECT 'proprietary logic goes here'   -- placeholder body

But be careful: You won't be able to see the code, either. So make sure you have good source code control outside the server.
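If you want to confirm that the text really is hidden, a quick look at syscomments (and sp_helptext) makes the point; this sketch assumes the spencrypted procedure created above:

SELECT text FROM syscomments WHERE id = OBJECT_ID('spencrypted')
EXEC sp_helptext spencrypted

The text column contains only unreadable encrypted data, and sp_helptext should report that the object's comments have been encrypted.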

SQL Server supports double-byte character sets (DBCSs). Isn't DBCS the same as Unicode?

You're correct in saying that SQL Server supports DBCS, but DBCS isn't the same as Unicode, which SQL Server won't support until SQL Server 7.0. DBCS is a hybrid approach to storing characters--some characters use one byte, some use two bytes. True Unicode uses two bytes for every character. Unicode data needs twice as much storage space as simple ANSI character sets, but eliminating the need to convert extended characters between code pages offsets this requirement. The new Unicode data types in SQL Server 7.0 are ntext, nchar, and nvarchar.
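As a quick illustration of the SQL Server 7.0 syntax, using a hypothetical table, the Unicode types simply carry an n prefix:

CREATE TABLE customer_names
(
   cust_id int NOT NULL,
   cust_name nvarchar(40) NOT NULL
)
-- cust_name is stored as Unicode, 2 bytes per character, up to 40 characters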

Which stored procedure lets me capture file attributes (specifically the size of a file)?

Try xp_getfiledetails, which accepts a string that specifies the path of the file you want information about. For example, if you use

EXEC master..xp_getfiledetails 'c:\pagefile.sys'

the extended stored procedure returns the result set in Table 3.

How does tuning ODBC server-side cursors affect application performance?

SQL Server 6.0 added cursors to the server, providing an efficient way to perform single-row operations within a given result set. Using cursors, you can perform multiple operations row by row against a result set without returning to the original table; conceptually, a cursor returns a result set based on tables within the database. For example, you can declare a cursor that returns a list of all user-defined table names within a database. After you open the cursor, you can move (fetch) through the result set and perform multiple operations against each table, passing each table name as a variable. Cursors are especially powerful when you combine them with stored procedures and the EXECUTE statement to build command strings dynamically, as in the sketch below. (See William R. Vaughn's Hitchhiker's Guide to Visual Basic and SQL Server--Microsoft Press--for a comprehensive discussion of cursors and the major programming models.)
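Here's a minimal Transact-SQL sketch of that idea, looping through the user tables in the current database and building a command string for each one (UPDATE STATISTICS is just a stand-in for whatever per-table work you need to do):

DECLARE @tablename varchar(30)
DECLARE @cmd varchar(255)
DECLARE usertables CURSOR FOR
   SELECT name FROM sysobjects WHERE type = 'U'
OPEN usertables
FETCH NEXT FROM usertables INTO @tablename
WHILE @@fetch_status = 0
BEGIN
   -- build and run a command dynamically, passing the table name as a variable
   SELECT @cmd = 'UPDATE STATISTICS ' + @tablename
   EXEC (@cmd)
   FETCH NEXT FROM usertables INTO @tablename
END
CLOSE usertables
DEALLOCATE usertables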

The original ODBC drivers implemented only client-side cursors, which put the burden of cursor processing on the client system instead of on the more powerful server. Client-side cursors are often the most efficient way to handle small (1,000 rows or fewer) result sets, but they aren't as appropriate for large result sets. Beginning with Remote Data Objects (RDO) 1.0 (which requires at least ODBC 2.0 compliance), Microsoft offered true server-side cursors, which let the server manage the cursor for the client application.

ODBC API server cursors also let ODBC applications make server-side cursor requests. The procedures involved are sp_cursor, sp_cursorclose, sp_cursorfetch, sp_cursoropen, and sp_cursoroption. These procedures are internal server extended stored procedures, even though they don't have an xp_ prefix. (To see a list of extended stored procedures, run EXEC sp_helpextendedproc in the master database.)

Internally, ODBC treats every record set as a cursor that you can manage on the client or on the server. Client-side cursors return all the data from the server by issuing one SQL statement that streams all the rows to a local cache on the client. The local ODBC driver manages this cache as a client-side cursor. Server-side cursors leave the data on the server, and ODBC asks for a few rows as necessary. Both client-side and server-side cursors have advantages that make them useful in certain circumstances. This example illustrates their differences. Let's say that your ODBC application runs the following query:

SELECT * FROM authors

Using a client-side cursor, ODBC returns one result set with 23 rows. If you used SQL Trace (see Brian's article "Tuning Your Database with SQL Trace," May 1998), you'd expect to see a trace that looks like

SELECT * FROM authors

However, if ODBC uses a server-side cursor to return this information, SQL Trace output for the operation might look like Figure 2. The trace of the server-side cursor is different from the trace of the client-side cursor, but the front-end application won't see any difference in the final data. Rather than issuing one command that returns the 23 rows from the authors table, you've opened a server-side cursor with sp_cursoropen and returned one row at a time with each call to sp_cursorfetch. We don't condemn server-side ODBC cursors, but we condemn the default values that ODBC chooses in most cases.

You can easily solve this problem. ODBC lets you set the row fetch size for cursor operations; that is, you can change the number of rows each call to sp_cursorfetch returns. If you use ActiveX Data Objects (ADO), you can use the CacheSize property to change this setting. Setting CacheSize to 25 for the same SELECT * FROM authors operation produces SQL Trace output like that in Figure 3. This trace shows a more efficient way to use server-side cursors, because you've limited the number of times you call the server. Even with a 23-row result set, this change significantly affects performance, and the implications get much worse quickly as the size of the result set increases.

This quick tip just scratches the surface of properly tuning cursor utilization in ODBC-based applications, but it illustrates one important fact: Keeping the default CacheSize of 1 when dealing with server-side cursors is almost always a bad idea. However, changing this setting is usually easy.

For more information about this important tuning topic, read the ADO and ODBC white papers in the Microsoft SQL Server Developers Resource Kit. Both papers discuss cursor utilization in more detail, and the ADO white paper provides annotated code samples and a Visual Basic (VB) application that will help you better understand the effects of ADO settings on SQL Server.
