Questions, Answers, and Tips About SQL Server - 01 Feb 1998

Q: Can you use bulk copy program (bcp) to copy data from a local temp table? I tried to create one stored procedure to place a query result into a local temp table. I also tried to perform a bcp within the same stored procedure, but neither operation worked.

Sorry, using bcp to copy data from a local temporary table isn't possible, and this restriction makes sense. Only the connection that creates a local temporary table (one whose name starts with a single # symbol) can see that table. Bcp dials in to SQL Server on a separate connection, which means it can't see the private local temporary table that the connection running the stored procedure created.

But don't despair: global temporary tables will do the trick. A global temporary table's name starts with two # symbols, and all connections can see it. Just be sure to build the table name with a unique, connection-specific identifier; otherwise, only one user at a time can run the procedure. Listing 1 shows you how to do this.

When I was writing this tip, I uncovered a trick to making this technique work. You'll notice that the bcp command specifies tempdb..##BCPOut as the source table for the bcp operation. Usually, you don't need to reference tempdb to fully qualify a temporary table's name. However, using the command

bcp  ##BCPOut out c:\temp\BCPOut.txt -Usa -P -S -c

to bcp the data out didn't work. The command seemed to run and didn't report an error, but it didn't export any data. The moral of the story is, make sure you fully qualify the temp table name (i.e., tempdb..##BCPOut), even though you usually don't need to include the tempdb reference when using temporary tables.
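To make the whole sequence concrete, here's a minimal sketch along the lines of Listing 1 (this isn't Listing 1 itself: the procedure name, query, file path, and server name are placeholders, and a production version would append a connection-specific suffix such as @@SPID to the table name):

CREATE PROCEDURE BCPQueryOut AS
-- Build the result set in a global temp table that bcp's separate
-- connection can see.
SELECT au_lname, au_fname INTO ##BCPOut FROM pubs..authors
-- Shell out to bcp; note the fully qualified tempdb..##BCPOut.
DECLARE @cmd varchar(255)
SELECT @cmd = 'bcp tempdb..##BCPOut out c:\temp\BCPOut.txt -Usa -P -Smyserver -c'
EXEC master..xp_cmdshell @cmd
DROP TABLE ##BCPOut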

Q: I have an UPDATE/INSERT/DELETE trigger for a table, and I would like to send an email to a user every time someone modifies, adds, or deletes a record in that table. I'm using xp_sendmail to send the results of a query to a user, and I'm assigning a SQL query string to a variable and passing the variable to xp_sendmail. The length of my query string is greater than 255 characters. SQL Server is truncating the query string when I assign it to a variable, whether I choose char or varchar. Is there any way around this annoying 255-character limitation?

Technically, you can't send a message greater than 255 characters. Xp_sendmail is an extended stored procedure, which means it's a C-language DLL that uses the Open Data Services (ODS) library to communicate with SQL Server. Unfortunately, the @message variable is a 255-character string inside the original C program. Concatenating strings together in Transact-SQL (T-SQL) doesn't do you any good because the @message variable is still only 255 bytes inside the extended stored procedure. This limitation of xp_sendmail is significant, but a reasonably easy workaround is documented in "Send Messages Longer than 255 Characters" in SQL Server Books Online (BOL):

This example shows how to send a message longer than 255 characters. Because the @message parameter is limited to the length of a varchar (as are all stored procedure parameters), this example writes the long message into a global temporary table consisting of one text column. The contents of this temporary table are then sent in mail using the @query parameter:

CREATE TABLE ##texttab (c1 text)
INSERT ##texttab values ('Put your long message here.')
DECLARE @cmd varchar(56)
SELECT @cmd = 'SELECT c1 FROM ##texttab'
EXEC master.dbo.xp_sendmail 'user1',
   @query = @cmd, @no_header = 'TRUE'
DROP TABLE ##texttab

You can insert your long message into a text column and then send it because xp_sendmail runs the @query on a completely separate connection, which can see the global temporary table. Make sure to create your global temporary table using some type of unique naming scheme; otherwise, this technique serializes access to xp_sendmail.
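One hedged way to get a unique name is to append @@SPID, which differs for every connection, and build the statements dynamically (a sketch of the naming idea only, not a full replacement for the BOL example):

-- @@SPID is unique per connection, so concurrent callers get
-- different global temp table names.
DECLARE @tab varchar(30), @sql varchar(255)
SELECT @tab = '##texttab' + CONVERT(varchar(5), @@SPID)
SELECT @sql = 'CREATE TABLE ' + @tab + ' (c1 text)'
EXEC (@sql)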

Dropping Persistent Pesky Tables
Hold on to your hats, SQL Server fans, because you're not going to believe this: SQL Server has a few bugs. I ran into one of these elusive bugs about a year ago and didn't figure out an answer until I ran across a recent tip on the microsoft.public.sqlserver.server newsgroup. My problem was that from time to time, tables wouldn't go away if I tried to drop them. Needless to say, the problem was quite annoying.

Someone recently posted a similar problem to the newsgroup, which Kalen Delaney ([email protected]) responded to with a great tip. Apparently, changing the sysobjects.sysstat column for the offending table to 67 lets you fully drop the table. Kalen refuses to reveal her sources or explain exactly why this works, but we still appreciate the information.

Because, as Kalen says, "Mucking around in the system tables isn't for the faint of heart," we reiterate her advice to back up your database before trying the script in Listing 2.
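Listing 2 isn't reproduced here, but the fix runs along these lines; a minimal sketch, assuming the stuck object is a user table named BadTable and that you run the script as sa:

-- Back up the database first!
EXEC sp_configure 'allow updates', 1
RECONFIGURE WITH OVERRIDE
UPDATE sysobjects SET sysstat = 67
WHERE name = 'BadTable' AND type = 'U'
EXEC sp_configure 'allow updates', 0
RECONFIGURE WITH OVERRIDE
DROP TABLE BadTable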

Dynamically Estimating a Good Size for the Open Objects Parameter
SQL Server's open objects configuration option sets the maximum number of database objects that can be open at one time. Setting this option too low causes annoying errors and degrades performance. You can find details about open objects in Microsoft's Knowledge Base articles Q89392, "Why SQL Server Error Log Contains 'Open Objects' Warnings" (http://support.microsoft.com/support/kb/articles/q89/3/92.asp) and Q114843, "Explanation of Open Objects in SQL Server" (http://support.microsoft.com/support/kb/articles/q114/8/43.asp). Unfortunately, the best advice in the Knowledge Base says, "There is no way to determine the best value for [the open objects setting], but it is suggested to increase it by about 500, and examine the SQL Server error log every day or so. If the warnings are still appearing, run the sp_configure procedure to increase it again, until the warnings are no longer generated."

Not a very scientific approach, if you ask us. As far as we know, you can't determine the optimal setting, but we stumbled across a tip for programmatically checking whether open objects space is running low.

The undocumented Database Consistency Checker (DBCC) command DBCC DES will print out information about the object descriptors (essentially, open objects) that the server is currently using. Counting the number of entries from DBCC DES tells you how many objects are in use, so you can figure out how many are left. (For those indefatigably curious souls who want to experiment with our complex formula, try subtracting the number of entries listed in DES from the number of open objects returned by sp_configure.) Listing 3 shows a simple technique for automating the process.
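Listing 3 isn't reproduced here, but the manual version of the check takes only a few commands (a sketch; because DBCC DES is undocumented, its output format can change between versions):

DBCC TRACEON(3604)                -- route DBCC output to your session
DBCC DES                          -- prints one entry per object descriptor in use
EXEC sp_configure 'open objects'  -- the configured maximum, for comparison

Count the DES entries, subtract the count from the run_value that sp_configure reports, and you know how much headroom is left.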

Q: When the NT Performance Monitor (Perfmon) runs, it floods the SQL Server error log with login failure notifications for the user ID probe. I haven't found any setting (i.e., in Perfmon) that lets me adjust the password for the login. Any thoughts?

Answering your question requires some background. Perfmon gets its data from SQL Server by running several undocumented Database Consistency Checker (DBCC) commands, such as DBCC PERFMON. Running a Transact-SQL (T-SQL) command requires an open SQL Server connection, so Perfmon must be able to log in to SQL Server. Perfmon logs in one of two ways. If SQL Server is running in Standard or Mixed security mode, Perfmon connects using the probe login. If you configure SQL Server to use Integrated security, Perfmon connects using a trusted connection, which authenticates based on the user's Windows NT login credentials rather than requiring a separate SQL Server login. If your server runs Standard or Mixed security, you must leave the probe login's password at its NULL default, or Perfmon can't connect.

The problem you're having makes us think your server is running Standard or Mixed security and you've changed the probe password to something other than NULL. You must also have turned on the Failed Login option in the Audit Level section on the Security Options tab of the Server Configuration screen. Otherwise, you wouldn't see the error messages, and your problem would be harder to pinpoint.

Perfmon can't connect using Standard security if you've changed the probe password, but you can force Perfmon to request a trusted connection, regardless of how you've configured the server security. The process is simple but requires you to hack the Registry, so be very careful. Add the PerfMonSecure value, with a type of REG_DWORD and a value of 0x1, to the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer. This addition causes Perfmon to request a trusted connection to SQL Server under the security context of the logged-on NT user, if Perfmon is running locally on the SQL Server box, or under the LocalSystem account, if Perfmon is running from a remote computer.

Q: I've heard lots of people talk about how they "denormalize" their database to get better performance. I can't find anything about denormalization in the SQL Server Comprehensive Index or the Microsoft Knowledge Base. What am I missing?

Understanding denormalization starts with normalization, the business of creating a good database design in the first place. Normalization refers to a handful of normal forms that identify increasingly well-designed relational databases. Just about any book on relational databases or SQL describes these forms, but here is a summary.

First normal form (1NF). A table is in 1NF if it contains no repeating groups of data items. In other words, all occurrences of a record contain the same number of fields and don't have multiple occurrences of the same field within one record; records must contain only atomic values for all attributes. Additionally, to meet the criteria for 1NF, one or more columns must have primary key status.

Table 1 shows three tables that aren't in 1NF because they have repeating groups. You can probably see why tables such as these are poorly designed. First, in these tables, you have to decide arbitrarily the maximum number of patient visits—or degrees or invoice items—you will accommodate. If you overestimate, you end up with a lot of space-consuming blank fields filled with NULLs (NULLs are a problem, because not all database management systems—DBMSs—treat NULLs the same way). If you underestimate, you run into problems when you need more fields than you've provided. Worse yet, think about problems with querying the database. To find all the employees with degrees from Harvard, for example, you'd have to include a giant WHERE clause (WHERE School1 = 'Harvard' or School2 = 'Harvard' or School3 = 'Harvard' ...).

The solution? Create a new table for the repeating fields, where each group of repeating fields has its own row. This second table becomes the detail, or child, table and is related to its master, or parent, table in a many-to-one relationship. You might call the detail tables in this example PatientVisits, EmployeeDegrees, and InvoiceItems.
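Here's a hypothetical sketch of the employee-degrees split in T-SQL (the table and column names are ours, because Table 1 isn't reproduced here):

-- Parent table: one row per employee, with no repeating School1,
-- School2, School3 columns.
CREATE TABLE Employee (
   EmployeeID int NOT NULL PRIMARY KEY,
   LastName varchar(30) NOT NULL
)
-- Child table: one row per degree, related many-to-one to Employee.
CREATE TABLE EmployeeDegrees (
   EmployeeID int NOT NULL REFERENCES Employee (EmployeeID),
   School varchar(50) NOT NULL,
   Degree varchar(20) NOT NULL,
   PRIMARY KEY (EmployeeID, School, Degree)
)

The giant WHERE clause then collapses to a simple join: SELECT e.* FROM Employee e, EmployeeDegrees d WHERE e.EmployeeID = d.EmployeeID AND d.School = 'Harvard'.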

Second normal form (2NF). A table is in 2NF if in addition to meeting the criteria of 1NF, all nonkey attributes are functionally dependent on the entire key. Primary keys are fields or a combination of fields that provide unambiguous reference to individual records. 2NF pertains only to tables with composite keys, where a composite key is a key consisting of more than one attribute (column). A formal definition—for any relation R, attribute A is functionally dependent on attribute B, if for every valid instance, the value of B determines the value of A—might help you understand functional dependencies.

One way of thinking about 2NF is that it precludes multisubject or multithematic tables. In Table 2, the example tables aren't in 2NF because of partial key functional dependencies. The INVOICE table has a combination of invoice and product data. The PRODUCT table stores both product and supplier data. Both cases are almost guaranteed to result in data redundancy and all the headaches redundancy implies, such as modification and deletion anomalies.

Looking at Table 2, assume you order a dozen products from the same supplier. If the supplier moves, you'll have to update his address data in the PRODUCT table for all the products you order from him. If you don't, you lose your data integrity, because the supplier's address will no longer be an unambiguous value.
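A hypothetical sketch of the 2NF fix for the supplier problem (again, illustrative names, because Table 2 isn't reproduced here):

-- Supplier data lives in exactly one row per supplier.
CREATE TABLE Supplier (
   SupplierID int NOT NULL PRIMARY KEY,
   SupplierName varchar(40) NOT NULL,
   Address varchar(60) NOT NULL
)
-- PRODUCT keeps only product data plus a foreign key to Supplier.
CREATE TABLE Product (
   ProductID int NOT NULL PRIMARY KEY,
   ProductName varchar(40) NOT NULL,
   SupplierID int NOT NULL REFERENCES Supplier (SupplierID)
)
-- When the supplier moves, one UPDATE fixes the address for every
-- product you order from him:
UPDATE Supplier SET Address = '123 New Street' WHERE SupplierID = 1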

Another way to test for functional dependency is to set aside any artificial keys (an autogenerated or counter-type column) and any created foreign keys. Then, identify the natural primary key for the table, which in the case of INVOICE is a concatenation of InvoiceDate and ProductName (a poor candidate, but the best one in light of the current attribute set). Then test each nonkey attribute with the following question: "Does the combination of Pkey Part 1 plus Pkey Part 2 define this nonkey attribute, or does Pkey Part 1 alone (or Pkey Part 2 alone) define it?" If one part or another of the concatenated pkey defines nonkey attributes, that's a sure sign that the table isn't in 2NF.

Third normal form (3NF). Database managers generally consider 3NF "normal enough." A table is in 3NF if, in addition to meeting the criteria for 2NF, it contains no transitive dependencies. A data item, or attribute, is transitively dependent on the key if it is functionally dependent on a second attribute, which in turn is functionally dependent on the key. If we lost you, try this explanation: Transitive dependencies occur when a data item in a record isn't a key but identifies other data items as though it were a key. Fortunately, the process of normalizing to 2NF often, but not always, achieves 3NF automatically. An easy way to remember 3NF is that every nonkey field must depend on the key, the whole key, and nothing but the key.
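A hypothetical illustration of a transitive dependency (the table and columns are ours, not from the question):

-- Not in 3NF: City is determined by Zip, and Zip is determined by
-- CustomerID, so City depends on the key only transitively.
CREATE TABLE Customer (
   CustomerID int NOT NULL PRIMARY KEY,
   CustomerName varchar(40) NOT NULL,
   Zip char(5) NOT NULL,
   City varchar(30) NOT NULL
)
-- The 3NF fix: move the Zip-to-City mapping into its own table and
-- drop City from Customer.
CREATE TABLE ZipCode (
   Zip char(5) NOT NULL PRIMARY KEY,
   City varchar(30) NOT NULL
)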

If "normal enough" offends your sense of purity, bear in mind that most tables meeting the requirements for 3NF also satisfy the criteria for fourth normal form (4NF) and fifth normal form (5NF). Fourth and fifth normal forms both deal with rarely encountered multivalued dependencies. Somewhere between 3NF and 4NF, however, is another widely recognized normal form: Boyce-Codd normal form (BCNF).

BCNF differs from 3NF only for tables that have more than one candidate composite key. Candidate keys are simply potential primary keys; they aren't necessarily megakeys that contain all the attributes in a table. The primary key is the candidate key chosen to serve as the primary identifying attribute of the table based on several specific criteria, and brevity (a singleton pkey is best) is one of them. A superkey, the concatenation of all attributes in the table, is technically a candidate key, although a poor candidate in most cases.

For example, in an employee table that contains both Social Security numbers and employee IDs, with EmployeeID as the primary key, the Social Security number field represents a candidate key. (See http://www.cpsr.org/cpsr/privacy/ssn/ssn.faq.htm for an interesting discussion of privacy issues associated with the use of Social Security numbers.)

Denormalization. The result of a normalized design is generally lots of little tables that might contain more foreign keys than actual new data. To recombine the data in response to user queries, the DBMS has to perform time-consuming joins. In the real world, database administrators sometimes denormalize databases to achieve better query performance by avoiding the cost of the joins. Denormalization is one of the standard techniques you use when you design a read-only data warehouse, but it's not recommended for OnLine Transaction Processing (OLTP) systems because of the potential for introducing delete or update anomalies.
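For example, a warehouse designer might fold the supplier data from the 2NF sketch above back into one wide, read-only table so queries never have to join (a hypothetical sketch; SELECT INTO requires the select into/bulkcopy database option to be set):

-- Pre-join Product and Supplier into one denormalized table.
SELECT p.ProductID, p.ProductName, s.SupplierName, s.Address
INTO ProductDenorm
FROM Product p, Supplier s
WHERE p.SupplierID = s.SupplierID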

However, database administrators often denormalize tables in BCNF back into 3NF, even in transaction databases. This approach seems to be the most common and the safest form of "responsible" denormalization.

The term denormalization can provoke heated discussions akin to religious wars in some circles. To find out more about database design, consult Access 97's online Help. (Access 97 ships with a Table Analyzer Wizard that helps identify badly designed tables after the tables are populated.)

You can also download the "Microsoft SQL Server 6.5 Deployment Guide," a 70-page technical white paper available as sqldep95.exe or sqldep97.exe (http://www.microsoft.com/sql/whitepapers.htm), or Peter Hussey's "Designing Efficient Applications for Microsoft SQL Server," a 29-page technical white paper available as road-map.exe from the SQL Server Developer's Resource Kit (http://www.microsoft.com/sql/reskit.htm). Although these resources aren't comprehensive or rigorous in their discussions of the normalization process, they provide decent recipes for approaching database design. Let us know how you handle denormalization.

Data Warehousing Resources
If you've attended any of the industry's data warehousing conferences, you're probably familiar with the extremely helpful little yellow booklets, which pack a lot of good, vendor-neutral information into their advertising-free pages. They're the brainchild of Applied Technologies Group, which also produces a series about Internet technology, communications, and document management. The good news is that you can download the booklets (http://www.techguide.com). Sample data warehousing titles are

  • Practical Guide to Getting Started with Data Warehousing
  • Building a Decision Support Architecture for Data Warehousing
  • Delivering Warehouse ROI with Business Intelligence
  • Enterprise Storage for Today's Data Warehousing Environments
  • Leveraging Visual and Analytic Data Mining
  • Managing the Warehouse Throughout Its Lifecycle
  • Meeting the Data Integration Challenge
  • Mining for a Competitive Advantage in Your Data Warehouse
  • Putting Metadata to Work in the Warehouse
  • Right Sizing Your Data Warehouse
  • Using Java Computing for Web-Enabled Data Warehousing
  • Data Warehousing Technology Glossary

Send your questions to us individually, or send them to us collectively at [email protected].
