Questions, Answers, and Tips About SQL Server - 01 Dec 1998

Install SQL Server on a domain controller at your own risk, shrink your tempdb database, and create your own benchmark.


Q: Do some NetLibs behave differently when you use Open Database Connectivity (ODBC) connection pooling?

When you use ODBC connection pooling, Named Pipe NetLib connections are slower and might cause erratic behavior. We duplicated this anomalous behavior by using ActiveX Data Objects (ADO) to connect to SQL Server 6.5 over Named Pipes while running Microsoft Transaction Server (MTS); the system responded with the error message "Unable to read login packets." As a general strategy when you're running MTS, use the same connection parameters (i.e., username and password) across components to maximize reuse of pooled connections.

Q: Does installing SQL Server on a domain controller affect security and performance?

To avoid performance and security problems, we encourage you to install SQL Server on a member server in a domain rather than on a Primary Domain Controller (PDC) or Backup Domain Controller (BDC). Performance suffers because a domain controller consumes memory, CPU, and other resources that a member server would leave to SQL Server. The degradation usually isn't dramatic unless you don't have enough domain controllers to handle the network load.

Many people are familiar with these performance problems, but not everyone realizes the potential security risks of installing SQL Server on a domain controller. SQL Server is a Windows NT service called MSSQLServer. Like any NT service, MSSQLServer must run within the security context of an NT account. This configuration lets users assign permissions and rights to a service by assigning them to the underlying NT account. You have two choices: You can run a service under LocalSystem, an administrative account with powerful rights and privileges, or you can run it under a regular NT account that has only the rights of an ordinary user. By default, the setup program installs MSSQLServer to run under the LocalSystem account, giving the service powerful rights and privileges.

This default setting can lead to security problems. For example, xp_cmdshell is an extended stored procedure that lets SQL Server users issue a Transact SQL (T-SQL) command as if they were typing in a command prompt window. If a user types the command

xp_cmdshell "dir c:"

from a T-SQL session, the system returns the output of the dir c: command as the query's result set. This result is benign, but xp_cmdshell executes in the security context of the NT account running the MSSQLServer service. Thus, users who have the right to execute xp_cmdshell also have the right to perform

xp_cmdshell "erase c:\*.* /F /S /Q"

This command is a full recursive deletion of the root directory. For even more fun, on a domain controller a SQL Server system administrator (sa) can type a net group command along the lines of

xp_cmdshell 'net group "Domain Admins" MyAccount /ADD'

This command lets sas turn their personal NT accounts (i.e., accounts with ordinary user privileges) into domain administrator accounts. (MyAccount here stands in for the sa's own NT account name.)

You can avoid these security holes (even if you install SQL Server on a domain controller) by running MSSQLServer under an account other than LocalSystem and without domain administrator privileges. However, to contain potential security problems and not degrade performance, your best option is to put SQL Server on a member server rather than a domain controller.

Q: What is the best way to make my tempdb smaller?

We see tempdb-related questions often in SQL Server newsgroups. Tempdb defaults to 2MB, so you generally want to expand tempdb rather than shrink it. However, here's our recipe for shrinking tempdb:

  1. Use sp_configure to put tempdb into RAM.

  2. Stop and restart SQL Server.

  3. Create a new device to hold your tempdb database, and select it as the default device.

  4. Cancel the selection of the master device as the default device.

  5. Use sp_configure to remove tempdb from RAM.

  6. Stop and restart SQL Server.

Your tempdb database is now on the device you created. It is the same size as the default database size in sp_configure, so you need to shrink or expand tempdb as necessary (some queries will use much more space in tempdb than the default 2MB).

By moving tempdb into RAM, you remove all references to existing disk device usage. Taking tempdb out of RAM creates a new database that is backed by disk storage on the first device in the pool of default disk devices. By default this pool includes only master.
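The recipe above can be sketched in T-SQL as follows. This is a minimal sketch for SQL Server 6.5; the device name, physical path, VDEVNO, and sizes are hypothetical, so adjust them for your system:

```sql
/* Steps 1-2: put tempdb in RAM, then stop and restart SQL Server */
EXEC sp_configure 'tempdb in ram (MB)', 2
RECONFIGURE WITH OVERRIDE

/* Steps 3-4 (after the restart): create a device for tempdb and make
   it the default device instead of master */
DISK INIT
   NAME = 'tempdbdev',
   PHYSNAME = 'C:\MSSQL\DATA\TEMPDBDEV.DAT',
   VDEVNO = 10,
   SIZE = 51200        /* size is in 2KB pages; 51200 pages = 100MB */
EXEC sp_diskdefault 'tempdbdev', defaulton
EXEC sp_diskdefault 'master', defaultoff

/* Steps 5-6: take tempdb back out of RAM, then stop and restart
   SQL Server again */
EXEC sp_configure 'tempdb in ram (MB)', 0
RECONFIGURE WITH OVERRIDE
```

After the second restart, tempdb lands on the new device, and you can resize it with ALTER DATABASE as needed.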

Q: My boss wants to use Microsoft Cluster Server (MSCS), and I'm a little worried that I'm getting in over my head. Do you have any advice?

Surprisingly, using MSCS isn't difficult. We see more folks using MSCS for its failover support with SQL Server. However, you need to be aware of potential problems.

TCP/IP sockets. Under certain circumstances, you might not be able to connect to your virtual SQL Server machine running on a cluster over a TCP/IP socket connection even though Named Pipe connections still work. A bug in the netbt.sys driver causes this problem. A hotfix is available from Microsoft; as a short-term workaround, you can connect via a Named Pipe connection.

Server account privileges. SQL Server cluster services run in the security context of the LocalSystem account or an ordinary Windows NT account. By default, SQL Server installs the MSSQLServer service to run as a LocalSystem account and installs SQLExecutive to run under the NT account that you specify during installation. SQL Server Setup grants the Log on as a service privilege to the SQLExecutive account you specify. However, if your cluster nodes are member servers, and not domain controllers, SQL Server Setup grants this privilege only on the primary node and not the other node. This configuration can cause problems.

For example, during testing, VSQL1 SQLExecutive service ran fine on NodeA, its preferred node, but wouldn't start on NodeB during failover. Similarly, VSQL2 SQLExecutive service ran fine on NodeB but wouldn't start on NodeA. SQL Server Setup configured VSQL1 SQLExecutive to run under the SQLExec1 domain account and configured VSQL2 SQLExecutive to run under the SQLExec2 domain account.

After a little research, we realized that on a member server, SQL Server Setup grants the Log on as a service privilege at the machine level, not the domain level. SQL Server Setup granted this privilege to SQLExec1 on NodeA during the initial SQL Server installation, but it didn't grant the right to SQLExec2 on NodeA. The VSQL2 SQLExecutive service didn't start on NodeA because its account lacked the necessary privilege. You can fix this problem by modifying the user rights in User Manager for Domains. You can also assign the right by opening the Control Panel Services applet and reentering the startup account information for VSQL2 SQLExecutive on NodeA. (Of course, we also needed to reset SQLExec1 on NodeB.) Reentering the account name and existing password grants the service account the Log on as a service privilege on the local machine.

If this cluster contained two domain controllers, we wouldn't have encountered this problem. Consider installing your MSCS SQL Server systems on member servers rather than domain controllers, as explained in the previous performance and security question. Also, you need to use SQL Server 6.5, Enterprise Edition or later to work with MSCS.

Q: I can't start or stop SQL Server through Enterprise Manager (EM) from a Windows 95 client. Is this function normal? Am I missing a DLL? My system is LAN-connected and logged into the domain. The server is registered (as user sa) through standard security. All options for start, stop, and pause are unavailable on menus and toolbars.

This behavior is normal. Win95 doesn't implement the remote procedure call (RPC) interfaces necessary to view, stop, and start services under Windows NT. You can run EM from a Win95 or Win98 client, but you won't be able to control the underlying MSSQLServer and SQLExecutive services running on the NT server.

Q: I know replication will be more powerful in SQL Server 7.0, but I've heard there will be incompatibilities with SQL Server 6.0. Will my SQL Server 6.x replication break under SQL Server 7.0?

Don't worry. In SQL Server 6.x, subscriptions can be to articles or publications. An article can be an entire table, only certain columns (using a vertical filter), only certain rows (using a horizontal filter), or even a stored procedure (in certain types of replication). A publication consists of multiple articles. For backward compatibility with SQL Server 6.x, SQL Server 7.0 still allows subscriptions directly to an article. However, the SQL Server 7.0 user interface (UI) doesn't support this configuration. If you have applications built with SQL Server 6.x that subscribe directly to articles (rather than publications), those applications will continue to work in SQL Server 7.0. Nonetheless, begin migrating your subscriptions to the publication level.

Q: Can I install SQL Server 7.0 beta 3 in a separate directory on my hard disk without destroying my SQL Server 6.5 installation? I don't have a separate test server.

Yes, you can safely run both versions on one machine (Brian is running his laptop with both versions installed). Before installing beta 3 on any system, make sure you have a recent backup of your current mssql directory. Then, stop all SQL Server 6.5 services. Run SQL Server 7.0 Setup, and install SQL Server 7.0 in a new directory. Don't select the option to upgrade existing databases, even though you probably can safely select this option. SQL Server 7.0 Setup will detect SQL Server 6.5 and will create a new set of Registry keys to manage the SQL Server 7.0 installation.

SQL Server 7.0 Setup will also create an MS SQL Server-Switch entry in your Start menu Programs list. This utility copies the SQL Server 6.5 or SQL Server 7.0 keys in and out of the correct Registry key names, which lets you run either SQL Server 6.5 or SQL Server 7.0 on the same machine. It also swaps all the runtime .dlls in and out of the temp directory.

This installation works on Brian's laptop, except that his system can't connect to remote SQL Server 6.x machines over a TCP/IP socket NetLib connection, even though he can connect over Named Pipes. Looking in the SQL Server client configuration tool, Brian noticed that the TCP/IP socket NetLib (i.e., dbmssocn.dll) shows the SQL Server 7.0 version of the file even when he's running SQL Server 6.5. This mismatch indicates a .dll conflict and an imperfect MS SQL Server-Switch.

Q: I have two tables with two indexes each. The first index is the primary key, and the second index is on a field that I use for query selection. During my initial load of the tables, I add about 75,000 records to the first table and about 150,000 records to the second table. I immediately query the tables with WHERE clauses that the secondary indexes satisfy. The first query takes 10 seconds, and the second query times out at 15 seconds; both queries should take less than a second.

Following advice from SQL Server forums, I re-created my problem and used SET SHOWPLAN ON to display SQL Server's approach to my query. First, I get the following output:

The type of query is SETON

The type of query is SELECT
FROM TABLE
ACD.dbo.QStatsRaw QstatsRaw
Nested iteration
Table Scan

After rebuilding the index with SQL Enterprise Manager (EM), I get the following output:

The type of query is SETON

The type of query is SELECT
FROM TABLE
ACD.dbo.AStatsRaw AstatsRaw
Nested iteration
Index : QDate

Why didn't SQL Server use the index until I rebuilt it, even though EM verified that the index existed?

You're experiencing a common problem for new SQL Server administrators. Do you know about the UPDATE STATISTICS command? The SQL Server optimizer picks indexes based on how selective the index will be for a given query. Queries that return a large percentage of rows from a table typically don't use a nonclustered index. The SQL Server optimizer uses a statistics page to estimate how many rows a query will return and thus determine whether an index is helpful. We won't bore you with the details of how SQL Server stores statistics pages internally; you just need to know SQL Server doesn't maintain the statistics pages dynamically (i.e., automatically). Therefore, when you make several changes to a table's data, SQL Server's internal statistics for that table will probably be inaccurate. Because of this inaccuracy, SQL Server can easily make poor indexing decisions. The UPDATE STATISTICS command forces SQL Server to update this data. Read the SQL Server Books Online (BOL) entries for UPDATE STATISTICS for more information about this command.

In your example, the statistics information is out of date because you loaded thousands of rows into previously empty tables. Because SQL Server doesn't maintain index statistics information dynamically, it has no idea your tables now have 75,000 rows and 150,000 rows, respectively; SQL Server thinks your tables are still empty. Running the UPDATE STATISTICS command will solve your problem.
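For example, after each bulk load you would run UPDATE STATISTICS against the tables involved (here we assume the table names from your SHOWPLAN output):

```sql
/* refresh the optimizer's distribution statistics for every index
   on each freshly loaded table */
UPDATE STATISTICS QStatsRaw
UPDATE STATISTICS AStatsRaw
```

Rebuilding the index, as you did in EM, fixed the problem only as a side effect: rebuilding an index also rebuilds its statistics page.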

Q: I want to use a stored procedure to get an employee listing from more than one department. I currently use the following SQL Statement:

SELECT employee_id, employee_name
FROM employee
WHERE dept_cd in ('FINANCE','HR','AP','AR')

I want to send the department codes to the stored procedure and have the procedure return the employee listing, and I want to use the following statement:

CREATE PROCEDURE get_employees(@dept_cd_list char(250)) AS
/* @DEPT_CD_LIST = 'FINANCE','HR','AP','AR' */
SELECT employee_id, employee_name
FROM employee
WHERE dept_cd in (@dept_cd_list)

but the comment (/*) indicates the line that's causing a problem. I've developed a crude workaround in which I insert the department codes into a temporary table in the stored procedure and then use the following SELECT statement:

SELECT employee_id, employee_name
FROM employee
WHERE dept_cd in (SELECT dept_cd FROM #temp_dept_codes)

I'm looking for a better method. Can you help?

You can use dynamic SQL with the EXEC statement:

USE employee

DECLARE @MyString varchar(255)
DECLARE @MySQL varchar(255)

SELECT @MyString = "'HR', 'AR'"
SELECT @MySQL = "SELECT name FROM dept_cd WHERE type in (" + @MyString + ")"

SELECT @MyString
EXEC (@MySQL)

This batch builds a dynamic SQL statement in @MySQL (the extra SELECT @MyString simply displays the list you built), and the EXEC statement executes it.
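Applied to your procedure, the same technique looks something like this. It's a sketch, not a definitive implementation; it assumes the caller passes a preformatted, quoted list such as 'FINANCE','HR', exactly as in your comment:

```sql
CREATE PROCEDURE get_employees (@dept_cd_list varchar(250)) AS
DECLARE @sql varchar(255)
/* build the whole statement as a string, splicing in the list,
   then execute it with EXEC */
SELECT @sql = "SELECT employee_id, employee_name FROM employee WHERE dept_cd in (" + @dept_cd_list + ")"
EXEC (@sql)
```

The trade-off versus your temporary-table workaround is that the dynamic statement is compiled at execution time, so it can't be preoptimized as part of the stored procedure's plan.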

Q: What are materialized views? I couldn't find information about them in SQL Server 7.0 beta 3 Books Online (BOL).

Normal views let database administrators and users define virtual tables. Users can retrieve rows in a view by treating the defined view as they would treat a table. When the user references a view, the database management system (DBMS) uses the definition of the view to produce the appropriate result set. Thus, ordinary views are executed on demand. Alternatively, the server can compute the result of a view in advance and store it in the database; such views are materialized views. To keep the data consistent, the system must update materialized views when users update the underlying data. Despite this maintenance overhead, the DBMS can leverage materialized views in answering queries against the database. The simplest use of a materialized view is to accelerate responses to queries that are routine but expensive to compute on the fly. For example, you can define an expensive aggregation such as the following query as a materialized view:

SELECT day, nation, MAX(Temp)
FROM (
   SELECT Day(Time) AS day, Country(Latitude, Longitude)
      AS nation, Temp
   FROM Weather
) AS foo
GROUP BY day, nation

Materialized views act as generalized indexes and help improve performance. SQL Server 7.0 uses materialized views, but the technology is transparent to the user.

Q: How can I create a benchmark?

A few months ago, Client/Server Solutions (CSS) released Benchmark Factory 97, its framework for developing and running multiple benchmarks, including AS3AP, TPC-B, TPC-C, TPC-D, and Wisconsin. (The company provides all these benchmarks with the product.) Using Benchmark Factory 97, you can benchmark any data source you can access from Windows NT or Windows 95. CSS provides Open Database Connectivity (ODBC) support and optional native drivers for SQL Server using DBLIB, Oracle Server using OCI, DB2 using CLI, Informix using ESQL, and Sybase SQL Server using CTLIB.

You can also download Microsoft's free TPC-C benchmark code, which Microsoft wrote in the mid-1990s. The download is a ZIP file containing the benchmark kit for SQL Server for NT. After you download the file, type

pkunzip -d

followed by the ZIP file's name at the command prompt to extract the kit. You can also download KITFIX, a minor fix for the TPC-B benchmark kit.

Q: Given the power of Data Transformation Services (DTS), why would anyone use bulk copy program (bcp)?

We agree that DTS is awesome, but DTS requires Object Linking and Embedding Database (OLE DB) and, at press time, we're still not sure about OLE DB performance. Don't write off bcp; it's far from dead. Several new bcp options might make it even more useful (although not easier to use). For example, bcp has new parameters related to Unicode, code pages, quoted identifiers, handling empty columns, using SQL Server 6.x data types or a trusted connection, and hints. The hints (i.e., ORDER, ROWS_PER_BATCH, KILOBYTES_PER_BATCH, TABLOCK, and CHECK_CONSTRAINTS) aren't for your benefit; Microsoft created them to help the bcp utility.
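A bulk load that passes hints through the new -h switch might look something like this sketch (the server name, login, password, and data-file name are hypothetical; the pubs..authors table is the standard sample table):

```shell
# load character-mode data (-c), holding a table lock and batching rows
bcp pubs..authors in authors.dat -c -S MYSERVER -U sa -P secret -h "TABLOCK, ROWS_PER_BATCH=5000"
```

The TABLOCK hint avoids per-row locking during the load, and ROWS_PER_BATCH tells SQL Server roughly how many rows to expect so it can plan the operation.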

Q: When I try to bulk copy program (bcp) NULL values into a table with a timestamp column, I get NULLs rather than the valid timestamp values I get from using an INSERT statement with NULLs specified for the timestamp column. How can I bcp data into a table with a timestamp column and have the system automatically populate the timestamp value?

SQL Server will not assign values to timestamp columns during a bcp operation. Timestamp has no temporal meaning; it's a technique for optimistic concurrency control, used to detect whether a row has changed. You can't bcp NULL data into a timestamp column and have SQL Server assign a value. However, you can grab a valid timestamp value from SQL Server using the @@DBTS global variable and bcp that value into your rows. If you use this technique, you need to edit your bcp input files to include an appropriate value. Alternatively, you can simply update each row of the table after you bcp the data, but this method is much less efficient, especially on large data sets.
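Both approaches can be sketched in T-SQL as follows (MyTable and some_col are hypothetical names):

```sql
/* approach 1: grab the database's current timestamp value and embed
   it in your bcp input file before loading */
SELECT @@DBTS

/* approach 2 (less efficient): load the data first, then touch each
   row; updating any column causes SQL Server to regenerate that
   row's timestamp */
UPDATE MyTable SET some_col = some_col
```

On large tables, the UPDATE pass can be very expensive, which is why editing the input files is usually the better choice.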

Q: Can I set the Row Fetch Size for an Open Database Connectivity (ODBC) connection to SQL Server without changing the client application (an off-the-shelf application)?

Sorry, but no. The client handles this setting.
