Questions, Answers, and Tips About SQL Server - 01 Mar 1998

Downloads
SQLSstest.zip

Reader Tip: Visual SourceSafe
In your November column, you asked whether anyone had ideas about integrating Visual SourceSafe (VSS) with SQL Server scripts. As a postdevelopment task, my company wrote a program that integrates stored procedure development with version-control check-in/check-out, so the process works much as it does in the Visual Basic (VB) IDE. The program solves part of the problem. The code uses Visual Basic 5.0 Service Pack 2 (SP2), VSS 5.0 SP2 (particularly the VSS Type Library), and SQL Server 6.5 SP3 (particularly the Distributed Management Objects—DMO—interface). You can extend the application to support views.

Thanks, Jeffrey. Readers can download Jeffrey's VB5 sample code from the Windows NT Magazine Web site (http://www.winntmag.com).


Reader Tip: Preparing for Disaster Recovery
Greetings from Norway. My company is in the process of setting up a SQL Server 6.5 system, but before putting it into production, we wanted to simulate a total machine failure. In preparation for this test, we searched extensively—but unsuccessfully—for references outlining a foolproof way to quickly and easily restore data from tape onto a temporary machine while the original machine was down for repair. Here's what we eventually did:

  1. We formatted a new machine with NTFS and installed Windows NT 4.0 as a member server (i.e., not a Primary Domain Controller—PDC—or a Backup Domain Controller—BDC).
  2. We installed SQL Server 6.5, fresh from the BackOffice CD-ROM, onto the new system. We created an empty master device that was large enough for the expected needs.
  3. We restored the /MS3100/DATA/*.* directory with all its contents from the backup tape onto the new server.

We didn't think this procedure would work, but it did. After we rebooted the server, the traffic light came up green. The only job we had left was to adjust the client's .ini files to point to the server's new machine name—something we probably could have skipped if we had chosen not to change the machine name.

We are disappointed with the lack of explanations about how to perform such a vital task. What good is a backup if nobody knows what to do with it? So for what it's worth, we're happy to contribute our three-step survival guide to SQL disaster recovery.

You've discovered a solution that works, Oyvind, but we certainly don't recommend that clients plan their disaster recovery in this way for all situations. Readers must realize that the fully qualified directory structure on both machines must be identical, which it was in your case. Also, readers need to remember to run

sp_dropserver <oldservername>

and then

sp_addserver <newservername>, local

if the new server name is different from the old server name.
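
For example, if the restored machine is named NEWSQL and the original server was named OLDSQL (both names are hypothetical), you'd run something like the following from ISQL/w and then stop and restart SQL Server so that @@servername reflects the new local name:

/* Drop the old local name and register the new one */
sp_dropserver OLDSQL
go
sp_addserver NEWSQL, local
go
/* Stop and restart SQL Server so the new local name takes effect */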

Commercial NT backup products and clustering solutions can typically provide faster recovery times. Appendix E, "Backup and Recovery Guidelines for Production Environments," in the Administrator's Companion section of SQL Server Books Online (BOL) also offers some helpful information.

Q: How can I tell (in testing mode) whether a trigger has fired?

If you have constraints on the same table and they fail, the trigger never fires, because SQL Server checks constraints before it runs the trigger. However, if you want to keep track of when SQL Server does fire the trigger, you can do something like this:

  1. Create an audit table. You'll probably want to include the key to the underlying table that you've created the trigger for. Make a datetime stamp—and perhaps a column for comments—part of the new audit table.
  2. Add a line to your trigger that writes an entry to the audit table at the point you want to track.
  3. Review the audit table after you expect your application to have fired the trigger.

The beauty of this method is that the inserts are almost instantaneous, and you don't have to watch the trigger in action because each audit row carries a datetime stamp. When you've verified whatever you're looking for, you can remove the INSERT from your trigger.

A more general approach would be to create a table called Audit_Trigger, with columns (ID, TableName, TriggerName, Date, Time, UserName), where the column ID is an identity data type. You can use this Audit_Trigger table to capture any trigger firing from any table that you want to test. Turn on Insert Row Locking (IRL), and don't create indexes on the table. This table probably won't cause a big bottleneck if you use it only for testing and evaluation.
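
Here's a minimal sketch of that idea; the orders table and the trigger name are hypothetical, and you'd add a similar INSERT statement to every trigger you want to watch:

CREATE TABLE Audit_Trigger
(
ID          int identity,
TableName   varchar(30),
TriggerName varchar(30),
Date        datetime,
Time        datetime,
UserName    varchar(30)
)
go

CREATE TRIGGER trgu_orders ON orders FOR UPDATE
AS
/* Record that the trigger fired; the trigger's real work goes here too.
   Date and Time both come from GETDATE() because SQL Server has no
   separate time data type. */
INSERT Audit_Trigger (TableName, TriggerName, Date, Time, UserName)
VALUES ('orders', 'trgu_orders', GETDATE(), GETDATE(), USER_NAME())
go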

Q: Sybase 10 has a thresholds feature for emergency management of transaction logs. Does SQL Server have an equivalent? If not, can you simulate threshold functionality?

Sybase has a whole family of threshold procedures. SQL Server doesn't have any direct procedure-based equivalents, but it does provide similar support through tight integration with the Windows NT Performance Monitor (Perfmon).

SQL Server exports dozens of counters, including Log Space Used %, that you can track from Perfmon. Perfmon's native features let you take action when a threshold has been exceeded for a counter you're tracking. Another solution is to create a WAITFOR procedure that checks log sizes with DBCC SQLPERF(LOGSPACE) and takes appropriate action.
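
Here's a minimal sketch of the second approach. The procedure name, the five-minute interval, and the 90 percent threshold are all hypothetical, and we're assuming you can capture the DBCC output with INSERT...EXEC on your build; what you do when the threshold trips (raise an alertable error, dump the log, page someone) is up to you:

CREATE PROCEDURE sp_watch_logspace AS
/* Holds one row per database from DBCC SQLPERF(LOGSPACE) */
CREATE TABLE #logspace
(
DBName      varchar(30),
LogSizeMB   float,
LogPctUsed  float,
Status      int
)
WHILE 1 = 1
BEGIN
   WAITFOR DELAY '00:05:00'   /* check every five minutes */
   DELETE #logspace
   INSERT #logspace EXEC ('DBCC SQLPERF(LOGSPACE)')
   IF EXISTS (SELECT * FROM #logspace
      WHERE DBName = 'pubs' AND LogPctUsed > 90)
      /* Raise an error that an alert can respond to */
      RAISERROR ('pubs transaction log is more than 90 percent full', 16, 1)
END
go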


Reader Tip: Another Approach to Crosstabs
In your December 1997 column, you gave an answer to the crosstab query question that involved several subqueries. I think that the solution shown in Listing 1 is easier to write and runs faster on a large table. I use a similar technique to create crosstab summary tables in our data warehouse, with each row summarizing sales for a product code across multiple time periods.

Crosstabs are never fun to construct, and they can be real performance dogs, so we're happy to have a handful of recipes to try. We like Don's solution!
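
If you haven't seen a single-pass crosstab before, one common recipe is to pivot with CASE expressions inside aggregates. The sales table and column names in this sketch are hypothetical and won't necessarily match Listing 1:

/* One row per product, with sales pivoted into quarterly columns */
SELECT product_code,
   SUM(CASE WHEN sale_qtr = 1 THEN sale_amount ELSE 0 END) AS q1_sales,
   SUM(CASE WHEN sale_qtr = 2 THEN sale_amount ELSE 0 END) AS q2_sales,
   SUM(CASE WHEN sale_qtr = 3 THEN sale_amount ELSE 0 END) AS q3_sales,
   SUM(CASE WHEN sale_qtr = 4 THEN sale_amount ELSE 0 END) AS q4_sales
FROM sales
GROUP BY product_code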

Replicating in Sphinx
Microsoft released beta 1 of SQL Server 7.0 (code-named Sphinx) almost a year ago; beta 2 finally came out in December 1997. Sphinx will be a major product upgrade with tons of new functionality. We'd need a few months of the whole magazine to fully describe all the new features, but we'll focus on specific features until Sphinx ships. This month we'll give you a quick overview of Microsoft's plans for enhancing the much-maligned replication capabilities of SQL Server 6.x.

SQL Server 6.x provides two types of support for distributed data. The distributed transaction coordinator (DTC) supports standard two-phase commit (2PC) that lets you keep data completely synchronized at all times across all your sites. Unfortunately, to take advantage of 2PC, you have to do API-level programming, and this synchronous approach doesn't offer acceptable performance in many production environments. The other model, using SQL Server 6.5's built-in replication, is asynchronous and is based on a publish and subscribe metaphor. Publish and subscribe supports loose data consistency rather than 2PC's tight data consistency (loose consistency is a nice way of saying data can be out of sync).

Customers told Microsoft they wanted more options, and, as you can see in Figure 1, the Sphinx replication model offers several. Sphinx retains the loose-consistency, publish-and-subscribe (master/slave) model found in SQL Server 6.x. Tight consistency in Sphinx is the 2PC model that the DTC currently provides in SQL Server 6.x. However, improved programming interfaces and new replication wizards make setting up this model much easier.

A new feature is No consistency in the form of merge replication and queued transactions. Merge replication is the familiar replication model used in Microsoft Access. This kind of replication is useful if you have mobile workers (e.g., salespeople) making changes offline in the field. Although merge replication obviously can't guarantee full transactional consistency, it fits many real-world business scenarios. Sphinx and Access 98 are expected to support bidirectional merge replication.

Updating subscribers offers a slight twist on a full 2PC approach. SQL Server uses 2PC to ensure that changes simultaneously commit at the master location and at the subscriber that initiated the update, but normal loose consistency rules apply when you're replicating the change to all other subscribers. This model lets you update data at a remote subscriber but doesn't force all the LAN/WAN links to be up at the same time, just the one between the master site and the subscriber that initiated the change.

The Queued transactions model is similar to Updating subscribers, but instead of using the Microsoft Transaction Server to handle the 2PC protocol, it relies on the new Microsoft Message Queue Server (MSMQ), code-named Falcon, which runs as an NT service and offers connectionless store-and-forward messaging. The Queued transactions model lets you apply transactions at the subscriber and then queue them to be applied at the publisher later. Queued transactions complements the Updating subscribers approach because it provides an asynchronous fallback if the LAN/WAN link between a subscriber and a publisher is temporarily unavailable.


Using DBCC PSS
We recently stumbled across Database Consistency Checker (DBCC) Process Status Structure (PSS), an undocumented function that can provide a wealth of information about executing SQL Server processes. DBCC PSS displays information from SQL Server's PSS, an internal data structure. The PSS stores each connection's state of activity so that SQL Server can work on the connection for a while, schedule it out, work on it again, and so on. Although Microsoft doesn't document much of the information that DBCC PSS spits out, you can find a few pieces of gold if you know how to sift through the output. The usage is

/*Trace flag 3604 redirects subsequent DBCC output to the client rather than to the error log*/

DBCC TRACEON (3604)

/* This command prints PSS information for a given SPID */

DBCC PSS (0, SPID, 0)
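
When you're finished, you can point DBCC output back at the error log:

/* Turn off the output redirection */
DBCC TRACEOFF (3604)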

Listing 2 shows some DBCC PSS output we captured during a recent troubleshooting session.

What do the pstat bits (shown in callout A in Listing 2) tell us? Pstat values aren't fully documented, but Microsoft Knowledge Base article Q171224, "INF: Understanding How the Transact-SQL KILL Command Works" (http://premium.microsoft.com/support/kb/articles/q171/12/24.asp), lists the bit settings Table 1 shows. (This Knowledge Base article also explains why the KILL command won't always work.) The pstat value in our sample output was 0x8, so the connection was executing a trigger.

TABLE 1: Documented pstat Bits
0x4000 Delay KILL and ATTENTION signals if inside a critical section
0x2000 Process is being killed
0x800 Process is in backout, thus cannot be chosen as deadlock victim
0x400 Process has received an ATTENTION signal and has responded by raising an internal exception
0x100 Process is in the middle of a single statement transaction
0x80 Process is involved in multidatabase transaction
0x8 Process is currently executing a trigger
0x2 Process has received KILL command
0x1 Process has received an ATTENTION signal
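
Because pstat is a bit mask, a connection can have several of these bits set at the same time. To decode a combined value you read from the DBCC PSS output, you can use T-SQL's bitwise AND operator; the value 1032 (0x408) below is just an example:

/* 1032 = 0x408: ATTENTION received (0x400) plus executing a trigger (0x8) */
DECLARE @pstat int
SELECT @pstat = 1032
IF (@pstat & 8) <> 0       /* 0x8 */
   PRINT 'Process is currently executing a trigger'
IF (@pstat & 1024) <> 0    /* 0x400 */
   PRINT 'Process has received an ATTENTION signal'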

At B in Listing 2, plastprocid_value gives us the object ID of any stored procedure the current connection is executing. In this case, the pstat bit tells us the connection is executing a trigger, so plastprocid_value is telling us the object ID of the trigger the connection is executing. (This explanation makes sense, because triggers are a special type of stored procedure.) We can determine which procedure or trigger the connection is executing by running the following command:

SELECT object_name(<plastprocid_value>)

The third gem we can mine from the PSS output is in pxcb->xcb_xactcnt=3, at C in Listing 2. The value 3 tells us that the @@trancount setting for the connection is 3. Transaction management and @@trancount deserve a separate discussion, but for now, all you need to know is that any value other than 0 reveals that the connection is in the middle of a transaction. In this case, we're three levels deep in a nested transaction.
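
If you want to see the relationship for yourself, run a quick test like the following from ISQL/w; @@trancount climbs with each nested BEGIN TRAN, and a single ROLLBACK takes it straight back to 0:

BEGIN TRAN
BEGIN TRAN
BEGIN TRAN
SELECT @@trancount   /* returns 3, matching the xcb_xactcnt value above */
ROLLBACK TRAN
SELECT @@trancount   /* returns 0 */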

Here's a real-life example of how you can use this information. We were trying to figure out why a process in a development environment had suddenly become very slow. The process was running a DELETE command that previously had run fast but suddenly started taking more than an hour to complete. The same command still ran much faster in a separate database with similar data and a supposedly identical schema definition.

We ran DBCC PSS while the command was executing and noticed that the slow version of the DELETE command was executing a trigger. (This fact was obvious because the pstat bit was listed as 0x8.) However, this trigger wasn't supposed to be there at all; we thought we had dropped it during a recent design change. Getting rid of the trigger solved the problem; finding the trigger and troubleshooting the problem was a breeze because we had access to the internal PSS info.
