Questions, Answers, and Tips About SQL Server - 01 Jan 1999

Discover a way to speed ADO applications, the functionality of local named pipes, a query to find out which service pack SQL Server is running, and options for a SQL Server system search.


Q: Using a scheduled task, I'm running a stored procedure to copy a file from a SQL Server PC to a remote PC. In the following line from the stored procedure, I map the E drive to a remote PC:

EXEC Master..xp_cmdshell "Copy C:\Folder1\File.txt E:"

The stored procedure claims to run successfully, but the system never copies the file. Although both PCs have a system administrator (sa) account, I suspect a permissions problem. Do you have any ideas for a solution?

The first problem is that the command you're running in xp_cmdshell is referencing a mapped drive (i.e., the E drive). The system creates and manages mapped drives on a per-user basis, which means that the account that the SQLExec service is running under can't see the mapped drive. The SQLExec service manages the scheduled tasks you've created. Thus, the file doesn't copy to the remote PC because SQLExec can't see the E drive.

The system doesn't generate an error message because the xp_cmdshell command ran successfully (the system claims success as long as xp_cmdshell sends the command to the OS). For example, if you type

"dir Z:"

on a machine with no Z drive, Windows NT tells you the path specified can't be found. That's a valid answer, so xp_cmdshell thinks the procedure ran successfully.

You can fix this problem by referencing your remote disks with Uniform Naming Convention (UNC) names (i.e., \\servername\sharename\path\filename) rather than mapped drives. NT services can access UNC shares, so you can run the following command:

EXEC Master..xp_cmdshell "Copy C:\Folder1\File.txt \\servername\sharename"
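Also, xp_cmdshell won't tell you on its own whether the copy actually worked. If you want the task to catch that kind of silent failure, one option is to capture xp_cmdshell's output into a table and inspect it. The following is a minimal sketch; the file name, share name, and error-message patterns are placeholders you'd adjust:

-- Capture the OS output from xp_cmdshell so the task can verify the copy
CREATE TABLE #cmdoutput (line varchar(255) NULL)

INSERT #cmdoutput
EXEC master..xp_cmdshell 'Copy C:\Folder1\File.txt \\servername\sharename'

-- If the OS reported a typical failure message, raise an error so the task fails visibly
IF EXISTS (SELECT * FROM #cmdoutput
           WHERE line LIKE '%cannot find%' OR line LIKE '%denied%')
    RAISERROR ('File copy failed: check the xp_cmdshell output.', 16, 1)

Raising an error with severity 16 should make the scheduled task report a failure in its history instead of a false success.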

Security is the second part of the problem you're encountering. You need to know what user account context the command will run under. Tasks that sa owns run under the context of the NT account running the SQLExec service. Tasks owned by users who don't have sa permissions run within the context of the special SQLExecutiveCmdExec account. Be sure the account running the task has appropriate permissions to access the share on the remote machine.

Q: If SQL Server logically accesses a table by its PRIMARY KEY fields, can I use CREATE UNIQUE CLUSTERED INDEX or the PRIMARY KEY constraint when I create the table? Microsoft recommends the latter option; however, the two options seem to be mutually exclusive. And what is the effect of specifying PRIMARY KEY, other than enforcing the uniqueness constraint?

A PRIMARY KEY constraint doesn't always create a clustered index; the syntax lets you create a PRIMARY KEY using a nonclustered index. With that said, no significant difference exists between a PRIMARY KEY based on a clustered index or a standalone UNIQUE CLUSTERED INDEX. However, we agree with Microsoft that you should use the PRIMARY KEY syntax for several reasons:

  1. This syntax is ANSI portable and easier to move to other platforms.

  2. Explicitly specifying a PRIMARY KEY makes the database more self-documenting and lets third-party applications and certain Open Database Connectivity (ODBC) calls automatically identify the PRIMARY KEY.

  3. This syntax lets you use FOREIGN KEY constraints, providing declarative referential integrity.

  4. You're less likely to accidentally drop the PRIMARY KEY index.
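For comparison, here's roughly what the two approaches look like; the table and column names are hypothetical, and both build a unique clustered index on the key:

-- Option 1: declare the key as a PRIMARY KEY constraint (clustered by default)
CREATE TABLE Employee
(
    EmployeeID int NOT NULL CONSTRAINT PK_Employee PRIMARY KEY CLUSTERED,
    LastName   varchar(30) NOT NULL
)

-- Option 2: create the table, then build the unique clustered index yourself
CREATE TABLE Employee2
(
    EmployeeID int NOT NULL,
    LastName   varchar(30) NOT NULL
)
CREATE UNIQUE CLUSTERED INDEX IX_Employee2_ID ON Employee2 (EmployeeID)

-- Only Option 1 can be the target of a declarative FOREIGN KEY (reason 3 above)
CREATE TABLE Paycheck
(
    PaycheckID int NOT NULL CONSTRAINT PK_Paycheck PRIMARY KEY,
    EmployeeID int NOT NULL CONSTRAINT FK_Paycheck_Employee REFERENCES Employee (EmployeeID)
)

Functionally the two indexes behave the same; the difference is the metadata and the declarative features that hang off the constraint.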

Q: Can I speed my ActiveX Data Objects (ADO) application by tweaking a tuning parameter?

The CacheSize property is an important ADO tuning option that controls how many rows ADO retrieves every time it issues an sp_cursorfetch against an API server-side cursor. For example, let's say you're running payroll against an employee table with 10,000 rows, and you're retrieving data with Open Database Connectivity (ODBC) server-side cursors that ADO initiated. By default, your application issues 10,000 sp_cursorfetch calls, and each call retrieves one row of data. Changing the ADO CacheSize property to 500 would return 500 rows with each fetch. We'll let you do the math to figure out how many fewer sp_cursorfetch calls the second configuration requires.

Q: How does a bulk copy program (bcp) script that doesn't specify the -S parameter (i.e., the server name) find the right server?

If you leave the server name blank when specifying a SQL Server connection, you're indicating you want to use a local named pipe. You only need to specify the -S parameter for bcp operations on a remote server. (Parameters are case sensitive for bcp.)
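For example, a command along these lines (the database, table, file, and server names are just placeholders) connects over a local named pipe because no server is named:

bcp pubs..authors out authors.txt -c -Usa -P

whereas adding -S sends the same operation across the network to the named server:

bcp pubs..authors out authors.txt -c -Usa -P -Sremoteserver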

You've probably heard the term named pipe in reference to a SQL Server NetLib. Network named pipes send data down through Windows NT's networking layers and out to the NIC, which then hands the data back up through those same layers. This round trip is a waste of time for interprocess communication (IPC) within one machine.

The solution is local named pipes. Designed for IPC, local named pipes are optimized versions of the standard named pipes. Local named pipes bypass the network, which can yield a significant performance boost when transferring large data sets. Most SQL Server connections come in over the network, so those connections can't use local named pipes. However, local named pipes can be very effective for bcp operations that run on the SQL Server machine itself: A Compaq white paper about tuning SQL Server benchmarked a 150 percent performance improvement for bcp operations using local named pipes as compared with using network named pipes.

For SQL Server 7.0, read the bcp "Backward Compatibility Details" note in Books Online (BOL). Microsoft has made several format changes involving datetime, smalldatetime, and money data. Plus, BOL discusses new parameters, including -N (for Unicode support) and -6 (for SQL Server 6.x data type compatibility).

Q: How can I protect Internet data traveling between Internet Information Server (IIS) and SQL Server running on separate machines? All the information sent to or from the SQL Server system must remain confidential. I've locked down the login and user account security, but I'm concerned that someone with a sniffer can grab information.

You're right to be worried: By default, SQL Server doesn't encrypt data it sends over the network. Usually this default setting isn't a problem. However, financial-service or Internet-based e-commerce applications require more stringent security and need SQL Server to encrypt the data it sends over the wire. SQL Server's multiprotocol NetLib offers this functionality, and you can run NetLib over any network protocol that Windows NT supports (i.e., TCP/IP if you're connecting over the Internet). You must explicitly enable encryption via SQL Server Setup. Be prepared for a performance penalty of 10 to 20 percent for network communications. If you need to configure the multiprotocol NetLib to communicate with SQL Server through a secure firewall, check out the Microsoft article "INF: Replication Setup Over a Firewall" (http://support.microsoft.com/support/kb/articles/q164/6/67.asp).

Q: We have a large database with several historical tables that we use for reporting, and the same field keys all the tables. The system lets us have only about 31 FOREIGN KEY constraints per table. For a workaround, we've come up with these guidelines: Do not use FOREIGN KEY constraints on the historical tables, and use triggers to handle referential integrity. Are we missing something?

You've encountered a known limitation of SQL Server 6.5, although the number of FOREIGN KEY constraints the system will let you have per table varies. The good news is that SQL Server 7.0 lets you have up to 63 FOREIGN KEY constraints per table.
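If you want to see how close a table already is to the limit, one quick check is to count the table's rows in sysreferences, which holds one row per FOREIGN KEY constraint a table declares. This sketch assumes a hypothetical table named SalesHistory:

-- One row per FOREIGN KEY constraint; fkeyid identifies the referencing table
SELECT COUNT(*)
FROM sysreferences
WHERE fkeyid = OBJECT_ID('SalesHistory')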

Q: After using Oracle and DB2, my programmers want to use pre-triggers and post-triggers. For example, they want to use pre-triggers to add at least one column before inserting data. SQL Server 6.5 doesn't support pre-triggers and post-triggers; does SQL Server 7.0 support them?

Sorry. Your programmers must handle the equivalent of pre-triggers and post-triggers programmatically in both SQL Server 6.5 and SQL Server 7.0.
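As a rough sketch of the usual workaround, a standard (after-the-fact) trigger can approximate a pre-trigger by filling in a derived column immediately after the insert, inside the same transaction; the table and column names here are hypothetical:

CREATE TRIGGER trg_Orders_Ins ON Orders
FOR INSERT
AS
-- Emulate a pre-trigger: populate the derived column right after the rows arrive
UPDATE Orders
SET EntryDate = GETDATE()
FROM Orders, inserted
WHERE Orders.OrderID = inserted.OrderID

Alternatively, funnel all inserts through a stored procedure that computes the extra column before issuing the INSERT.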

Q: I'm looking for a good design book for data marts and warehouses. Do you have any recommendations?

We recommend two books. Both books include great beginner designs: Christopher Adamson and Michael Venerable, Data Warehouse Design Solutions (John Wiley & Sons, 1998); and Len Silverston, Kent Graziano, and William H. Inmon, The Data Model Resource Book: A Library of Logical Data and Data Warehouse Designs (John Wiley & Sons, 1997).

Q: My department had compiled what we thought was a complete list of the company's database servers, when we discovered the List Servers option in the ISQL/w Connect Server dialog box. Clicking this option produced a list of systems that weren't on our original list. I'd like to write a program that automatically checks the network for new SQL Server systems. How did ISQL/w find the new systems, and what are the API calls and declare statements?

Your first option is to use the DBLib dbserverenum API call. According to Books Online (BOL), "You can use the dbserverenum function to obtain the names of SQL Servers either locally or over the network. For network searches, the NetLib DLL you use must support the dbserverenum function."

DBLib has libraries for both C/C++ and Visual Basic (VB). You can find these libraries on the SQL Server distribution disk, Microsoft Developer Network (MSDN) subscription CD-ROMs, and Microsoft's online Download & Trial Center (http://backoffice.microsoft.com/downtrial/default.asp?product=12).

Second, look at the xp_sqlinventory extended stored procedure. This procedure will find servers and capture several useful attributes about them. This approach is easier to implement because you can simply schedule the procedure to run as a SQL Server task. Table 1 shows you the information xp_sqlinventory will capture.

