Editor's Note: Send your SQL Server questions and comments to SQL Server MVP Brian Moran at [email protected]
To run a bulk copy program (bcp) command from inside a Data Transformation Services (DTS) package, I'm using a T-SQL task that contains the following statement:
EXEC master..xp_cmdshell 'BCP HOLD..DETAIL in D:\DETAIL.FMT /SNT2 -T'
However, the DTS package doesn't report an error if the bcp command fails. How can I detect whether the bcp command executed successfully?
The DTS package doesn't report a bcp error as long as xp_cmdshell runs successfully. However, xp_cmdshell can report success even if the bcp job fails completely or loads only part of your input data. You need a more definitive way to check for errors. Without additional data, I can't say which error-handling approach would work best for you, but here are two options to consider.
First, you can use the -e parameter with your bcp command to tell bcp to generate an error file as it runs. If the size of the error file is 0 bytes, the bcp task worked. Checking file size from T-SQL is a simple task: Just use the undocumented xp_getfiledetails extended stored procedure. For example, the statement

EXEC master..xp_getfiledetails 'c:\temp\bcpout.err'

produces the result set that Table 1, page 22, shows. You can then use an EXEC operation to move this result set into a temporary table that you've created and select the Size column, which gives you the file's size in bytes. The following code snippet shows an example of how to perform such an EXEC operation:
INSERT INTO #GetFileDetailsOut
EXEC master..xp_getfiledetails 'c:\temp\bcpout.err'

SELECT Size FROM #GetFileDetailsOut
where #GetFileDetailsOut is a table that you've created to match xp_getfiledetails' result set. A Size column value greater than 0 means that the bcp command failed.
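To put the pieces together, here's a hypothetical sketch of the whole check. The column list for #GetFileDetailsOut is an assumption about xp_getfiledetails' usual result set; verify it on your server by running the procedure directly before relying on it (INSERT...EXEC matches columns by position, not name, so the names you choose don't matter).

```sql
-- Hypothetical sketch: temp table matching xp_getfiledetails' result set
-- (column names and types are assumptions -- verify against your server),
-- then test the bcp error file's size.
CREATE TABLE #GetFileDetailsOut (
    AlternateName    varchar(255) NULL,
    Size             int          NULL,
    CreationDate     int          NULL,
    CreationTime     int          NULL,
    LastWrittenDate  int          NULL,
    LastWrittenTime  int          NULL,
    LastAccessedDate int          NULL,
    LastAccessedTime int          NULL,
    Attributes       int          NULL
)

INSERT INTO #GetFileDetailsOut
EXEC master..xp_getfiledetails 'c:\temp\bcpout.err'

-- A nonzero Size means bcp logged at least one error
IF EXISTS (SELECT * FROM #GetFileDetailsOut WHERE Size > 0)
    RAISERROR ('bcp reported errors -- see c:\temp\bcpout.err', 16, 1)
```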
If your goal is simply to use a DTS package to load data into SQL Server, a better solution is to bypass xp_cmdshell and bcp entirely and use DTS's built-in BulkInsert task instead. This task is essentially a GUI wrapper for the T-SQL Bulk Insert command. Because T-SQL's Bulk Insert runs in-process with SQL Server and bypasses the Network Library layer, it's typically faster than the command-line bcp. And the BulkInsert task gives you much more control over processing within the DTS package because you're using a native DTS task designed specifically for loading data.
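For comparison, the T-SQL that the BulkInsert task wraps looks something like the following sketch. The format file name comes from the question; D:\DETAIL.DAT is a hypothetical data file name, so substitute your own.

```sql
-- Sketch of the BULK INSERT statement the DTS BulkInsert task wraps.
-- D:\DETAIL.DAT is a hypothetical data file; adjust paths to your system.
BULK INSERT HOLD..DETAIL
FROM 'D:\DETAIL.DAT'
WITH (FORMATFILE = 'D:\DETAIL.FMT')
```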
I'm using SQL Server 7.0 and trying to access a linked server by using a command such as
SELECT * INTO test FROM server2.db1.dbo.table1
I receive an error message that says Server2 is not configured for DATA ACCESS, but I can access both server1 and server2 from Enterprise Manager. Why am I receiving this error message?
The error message is telling you that the Data Access option for the linked server, which SQL Server enables by default, is disabled. To enable this option from Enterprise Manager, navigate to the Linked Servers tab, right-click the linked server's name, and select Properties, Server Options. Then, make sure that the Data Access check box is selected. Your query should now run without problem.
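If you'd rather script the change than click through Enterprise Manager, the same option can be set with the sp_serveroption system procedure (the linked server name below comes from the question):

```sql
-- Enable the data access option for the linked server from T-SQL
EXEC sp_serveroption 'server2', 'data access', 'true'
```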
I'm running SQL Server 2000, and I need to store SQL Server Profiler output in a table. I could simply open my captured trace file from Profiler's GUI and use the Save as Trace Table option, but I don't want to use Profiler's GUI because it sends each row of the trace file as a separate insert to the table. In the past, I used the TraceFile COM object, which Microsoft shipped as part of the Microsoft SQL Server 7.0 Resource Guide (available as part of the Microsoft BackOffice 4.5 Resource Kit), to read the file and load the trace file to a table. However, Microsoft hasn't released a SQL Server 2000 version of the TraceFile COM reader. Do you know of another way that I can easily load my Profiler trace data into a table?
SQL Server 2000 has extended Profiler's trace-file management capabilities in many ways, including the addition of a new table-valued system function called fn_trace_gettable(). This function returns trace-file information in table format. So if you want to load trace data into a SQL Server table, you could use a command such as
SELECT * INTO MyTraceTable FROM ::fn_trace_gettable ('c:\my_trace.trc', default)
You'll find that this function provides a fast and easy way to load trace data.
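Once the trace is in a table, you can analyze it with ordinary T-SQL. The column names below (TextData, Duration) are standard columns in fn_trace_gettable's output, but whether they're populated depends on which events and data columns you captured:

```sql
-- Example: find the ten longest-running statements in the loaded trace
SELECT TOP 10 TextData, Duration
FROM MyTraceTable
ORDER BY Duration DESC
```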
Can I change where the SQL Server install procedure installs SQL Server tools? SQL Server 2000 install needs a large amount of space on my system drive in which to install the tools. But because space is limited on that drive, I'd prefer to install these files on another partition.
Microsoft doesn't offer a supported way to change where SQL Server installs its tools, but an unsupported solution has been floating around for a while. This solution comes with a serious warning: You'll be editing important registry keys and could easily render your system unusable if you make a mistake. Be very careful, and ensure that you have good backups before going forward.
SQL Server 2000 installs its tools by default in Program Files\Microsoft SQL Server\80\Tools on the partition on which Windows is installed, and Setup offers no option to change this directory. The installer simply reads the location from the ProgramFilesDir registry subkey located at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion, which means you can redirect the installation by temporarily changing that value. Here are the steps for changing the registry subkey to install the SQL Server tools in a different location:
1. Back up the registry and your system.
2. Change the subkey to the new directory path in which you want to install the SQL Server tools.
3. Install SQL Server.
4. Do not reboot the machine after installing SQL Server.
5. Change the registry subkey back to its original value.
6. Reboot the machine.
Absolutely, positively make sure that you set the registry subkey back to its original location before you reboot. Otherwise, severe and unwanted side effects might occur. Many Windows system functions and other applications use this subkey's value. So, for example, Windows 2000 will apply the Win2K System File Protection (SFP) functions to the new directory you specified if you reboot before changing the registry subkey back to its original value.
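If you script the registry change, keep two copies of the fragment handy, one with the temporary path and one with the original value, so you can't forget to change the subkey back before rebooting. The fragment below is a hypothetical example only; the path must match your system, and note that a .reg file doubles backslashes:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion]
"ProgramFilesDir"="D:\\Program Files"
```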
Can I connect to SQL Server 7.0 using SQL Server 2000 Enterprise Manager across a TCP/IP WAN? I've tried, but I keep getting a timeout expired message. The remote server is running, and I'm sure that I've entered the registration information correctly in Enterprise Manager.
Yes, you can connect to SQL Server 7.0 servers using SQL Server 2000 tools, and connecting across a WAN doesn't make a difference as long as a firewall isn't in the way. However, the WAN link might be slow, causing Enterprise Manager to simply time out when it tries to log in. You can change the login timeout setting from Enterprise Manager by clicking Options from the Tools menu, clicking the Advanced tab, then changing the login timeout setting.
I'm issuing a T-SQL RESTORE DATABASE command through Visual Basic (VB) 6.0. However, the query is asynchronous, so it moves on to the next line in my VB code before the RESTORE DATABASE command has finished executing. Subsequent SQL statements then fail because they're trying to access a database that's unavailable. How can I find out when the restore is complete so that the VB application can issue the subsequent SQL statements against the database?
You can check the restore's status by using SQL Server 2000's DATABASEPROPERTYEX() function or SQL Server 7.0's DATABASEPROPERTY() function. Issuing the following SQL Server 2000 command:
SELECT DATABASEPROPERTYEX('Northwind', 'Status')
will return one of the following results:
- ONLINE = database is available for query
- OFFLINE = database was explicitly taken offline
- RESTORING = database is being restored
- RECOVERING = database is recovering and not yet ready for queries
- SUSPECT = database can't be recovered
Issuing the following SQL Server 7.0 command:
SELECT DATABASEPROPERTY('Northwind', 'IsInRecovery')
will return either a value of 1, which means the database is still being recovered, or a value of 0, which means that the database is available.
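From the VB side, the simplest fix is to poll one of these functions between the RESTORE and the subsequent statements. Here's a minimal T-SQL sketch of the polling loop for SQL Server 2000 (the 5-second delay is an arbitrary choice):

```sql
-- Poll until the restore completes and the database comes online
WHILE DATABASEPROPERTYEX('Northwind', 'Status') <> 'ONLINE'
    WAITFOR DELAY '00:00:05'   -- pause 5 seconds between checks
```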
I don't understand how Enterprise Manager reports a table's column sizes. For example, I created a table called HowBigAmI with a single column of type nvarchar(10), then issued sp_help HowBigAmI to see what SQL Server would report as the column size. The result said the column had a length of 20. But I defined the column as nvarchar(10), and Enterprise Manager tells me the length is 10. Why is SQL Server reporting different lengths?
Run the following script:
CREATE TABLE HowBigAmI (Col1 nvarchar(10))
GO
sp_help HowBigAmI
GO
and you'll see that sp_help indeed shows the column size as 20, but Enterprise Manager's design view for the table shows the size as 10. SQL Server Books Online (BOL) says that Enterprise Manager shows column size as number of bytes, but that isn't the case. Enterprise Manager shows the number of characters that can fit in the column. The example above defines the Col1 column as Unicode (that's what the n in front of varchar means), which is a two-byte character set. So, Col1 can accept 10 Unicode characters, but each character consumes 2 bytes. Thus, you have Enterprise Manager reporting the column size as 10 and sp_help reporting the column size as 20. I agree that this behavior is confusing. But in reality, DBAs need both pieces of information. We need to know that we're limited to only 10 characters when storing data in the column, and we also need to know that the column consumes 20 bytes.
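You can see both numbers for yourself with the LEN() and DATALENGTH() functions, which report characters and bytes, respectively:

```sql
DECLARE @s nvarchar(10)
SET @s = N'0123456789'
-- LEN counts characters; DATALENGTH counts bytes
SELECT LEN(@s) AS Chars,        -- returns 10
       DATALENGTH(@s) AS Bytes  -- returns 20
```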