Q: I received an error message that SQL Server had marked my database suspect. What can I do?
Use emergency mode (also called bypass mode) to recover data when SQL Server marks a database suspect. Occasionally something (hardware glitches, software glitches, gamma rays, or whatever) causes SQL Server's ordinarily robust automatic recovery to fail. Of course, recovery failure is never a problem because everyone always has an accurate, up-to-date backup handy. Uh-huh. Right. But what if (hypothetically) you don't have a good backup?
Most of your data is probably still there on disk, but you can't get to it because SQL Server says the database is suspect. Setting the database status to emergency mode tells SQL Server to skip automatic recovery and lets you access the data. To get your data, use this script:
sp_configure 'allow updates', 1
RECONFIGURE WITH OVERRIDE
UPDATE sysdatabases SET status = -32768 WHERE name = 'BadDbName'
sp_configure 'allow updates', 0
RECONFIGURE WITH OVERRIDE
While the database is in emergency mode, you might be able to extract your data with bulk copy program (bcp) or simple SELECT commands, and you can run DUMP TRANSACTION WITH NO_LOG, but you won't be able to issue data modification commands. Get your data out as fast as you can before anything else goes wrong. Count your blessings if it's all there, because we can't guarantee that this technique will work in all cases. Call Microsoft if you're still stuck. The $150 service call is worth the money if Product Support can help you recover critical data. (See the Microsoft Knowledge Base article Q165918--http://www.microsoft.com/kb/articles/q165/9/18.htm--for more information on this topic.)
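For example, you might pull the rows out of each table with bcp in character mode. The database, table, file, and server names below are placeholders for your own; run one such command per table:

```
bcp BadDbName..Customers out customers.txt /c /Usa /P /SMyServer
```

The /c switch writes plain character-mode files, which are easy to inspect and easy to reload into a rebuilt database later.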
Why might SQL Server mark your database suspect? For starters, try querying on suspect in SQL Server Books Online (BOL) in the MS SQL Server 6.5 group on the Programs Start menu. You'll be surprised how many hits you get. Granted, some of them are for the verb suspect, but you'll have dozens of hits on suspect as an adjective. For example, you'll find stored procedures such as sp_resetstatus and sp_marksuspect, which turn a database's suspect flag off and on, respectively. Marking a database suspect is often the easiest way to drop it (if you can't drop it using SQL Enterprise Manager's Manage Databases or Server Manager windows) because you can drop suspect databases. Use the command
DBCC DBREPAIR(database_name, dropdb)
(Remember, though, that you can never drop master, model, or tempdb databases, or a database that's open or participating in replication.)
Also recognize that DBCC DBREPAIR is a command of last resort. Always try DROP DATABASE or sp_dbremove first; DBCC DBREPAIR remains only for backward compatibility.
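Going the other direction--clearing the suspect flag after you've fixed the underlying problem, such as bringing a failed drive back online--is what sp_resetstatus is for. (BOL supplies the script that creates the procedure if it isn't already on your server.) The database name here is a placeholder:

```sql
EXEC sp_resetstatus 'BadDbName'
```

Restart SQL Server afterward so automatic recovery gets another chance to run against the database.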
Q: I distribute SQL Server databases to my customers, and I don't want them to see the stored procedure code because it's proprietary to my business. How can I keep them from seeing the procedures?
Before SQL Server 6.5, many people hid stored procedure code by simply deleting the contents of the text field from the syscomments table, where SQL Server stores SQL definition statements for views, rules, defaults, triggers, CHECK and DEFAULT constraints--and stored procedure code. This tactic is no longer feasible, however, because SQL Server 6.5 requires data in the text column. Now, the WITH ENCRYPTION option is probably the best way to protect your procedures without deleting data from syscomments; you can also use WITH ENCRYPTION to keep your views and triggers secret. Use this syntax:
CREATE PROCEDURE TestProc WITH ENCRYPTION
AS
PRINT 'this is just a test'
SQL Server populates syscomments but encrypts the text so no one--not even you or the systems administrator--can read it. This option protects your secret information, and you'll have no problems upgrading to future versions. But if you use WITH ENCRYPTION, remember that it really hides your information. You can't read the procedure after you create it, so make sure you have earlier versions of the code (or at least text files) outside of SQL Server.
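If you want to see for yourself that the text really is unreadable, query syscomments directly; the procedure name here is the one from our example:

```sql
SELECT text
FROM syscomments
WHERE id = OBJECT_ID('TestProc')
```

The query still returns a row--SQL Server 6.5 requires data in the text column, remember--but the column holds only encrypted gibberish.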
Q: I'm using the Microsoft Access upsizing wizard to move a large database to SQL Server, and the process is taking forever. What can I do to speed things up?
Access databases come in three versions (Access 2.0, Access 95, and Access 97), each with a different file format, and each version has a corresponding version of Microsoft's upsizing wizard. You can download the Access 97 and Access 95 versions from http://www.microsoft.com/AccessDev/ProdInfo/AUT97dat.htm. But for Access 2.0 files, you'll need to migrate from Access 2.0 to Access 95 first, purchase the Access 2.0 Developer Kit, or consider commercial options such as Aditi's UpDown (http://www.aditi.com).
In general, each version is faster and more powerful than the previous version, but automation has its limits. You can't, for example, use the upsizing wizards to create SQL Server databases that span more than one device; that is, you must have the data and log files on the same device.
All the upsizing wizards can, however, create a SQL Server schema and populate your tables with data from the Access database. Although that capability makes the migration as hands-off as possible, upsizing large datasets is tedious because the wizards use INSERT statements to populate the SQL Server database one row at a time. You have a table with 500,000 rows? Any idea how long Access will take to issue 500,000 INSERTs across the wire?
SQL Server's bcp provides a much faster load, especially when you implement bcp in fast (no logging) mode (see Brian Moran, "Seven Tips for Speeding Large Data Loads with Bulk Copy Program," February 1997, for tips about tuning bcp). A good compromise is to let the wizard create the database schema for you, move the log file to a second device, and then use bcp to manually load the data. By the way, each upsizing wizard has bugs and anomalies, so be sure to check the Knowledge Base before using any of them.
Note that nothing forces you to use the upsizing wizards. You can always use SQL Enterprise Manager--or your favorite third-party database administrator utility--to create the new database. Then you can export the Access data as ASCII files and use bcp to get them into SQL Server. For a comprehensive discussion of options, download Microsoft's new 80-page paper "Upsizing Microsoft Access Applications to Microsoft SQL Server" from http://www.microsoft.com/sql/reskit.htm.
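The bcp half of that compromise is one command line per table. The database, table, file, and server names below are placeholders, /c loads the character-mode (ASCII) files you exported, and the /t switch is an assumption about your export format--set it to whatever field terminator you used when you exported from Access:

```
bcp MyNewDb..Customers in customers.txt /c /t, /Usa /P /SMyServer
```

Remember to put the target database in fast (no logging) mode first, per the bcp tuning tips in the February 1997 article, or you'll give back much of the speed advantage.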
Q: My table has a primary key (PK) on a column defined with the identity property, but I've started getting error messages saying I'm trying to insert duplicate keys. What gives?
You might have duplicate keys without knowing it. In theory, SQL Server always picks a unique number when it assigns identity values, but it can lose track and try to assign a value that's already in use. This duplication can happen, for example, if you shut down the server improperly or if you truncate the table. To troubleshoot the situation, first make sure a PRIMARY KEY or UNIQUE constraint forbids duplicate entries, and then use DBCC CHECKIDENT if your identity values ever get out of sync and cause inserts to fail with duplicate key errors. The command
DBCC CHECKIDENT (table_name)
compares the current identity value with the maximum value in the identity column. If the current identity value is invalid, SQL Server will reset it using the maximum value in the identity column.
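As a quick sketch, here's a hypothetical table that follows both pieces of advice--the PRIMARY KEY constraint rejects duplicate keys outright, and the DBCC command resyncs the counter:

```sql
CREATE TABLE orders
(
   order_id   int IDENTITY PRIMARY KEY,   -- constraint rejects duplicate keys
   customer   varchar(30) NOT NULL
)
-- After an improper shutdown or a truncation, resync the identity value
DBCC CHECKIDENT (orders)
```

With the constraint in place, an out-of-sync counter produces a clean duplicate key error (which DBCC CHECKIDENT then fixes) instead of silently inserting a duplicate row.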
Q: What's the purpose of the logwrite sleep parameter?
Logwrite sleep is one of the least understood SQL Server configuration parameters, but it can affect performance dramatically in certain applications. (It's also one of the advanced parameters that are visible only when the show advanced option setting is 1.) By default, SQL Server performs a physical write to the transaction log as soon as a user transaction commits. This process provides a reliable way to recover the system if someone accidentally kicks out the power cord. Logwrite sleep tells SQL Server to wait x milliseconds before forcing the physical write to the transaction log. Logwrite sleep's valid values range from 1 to 500 milliseconds; the default, 0, means SQL Server doesn't delay the write.
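You set it with sp_configure like any other advanced parameter. The exact parameter name can vary by build (it shows up as logwrite sleep (ms) in sp_configure's output on our server), so check sp_configure's listing before you run this:

```sql
sp_configure 'show advanced option', 1
RECONFIGURE WITH OVERRIDE
sp_configure 'logwrite sleep (ms)', 50
RECONFIGURE WITH OVERRIDE
```

A 50-millisecond value matches the scenario we walk through below; start smaller in production and measure.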
Let's say you are building a stock system that needs to handle a high volume of trade requests from hundreds of users. The transaction mix for this application is 5 percent SELECT, 5 percent UPDATE, and 90 percent INSERT, and all transactions are brief. Assume that SQL Server must perform a physical disk I/O to write a log record as each INSERT transaction commits. This requirement means that one transaction occurs per log write. A good SCSI/RAID disk subsystem might let you do sustained sequential I/O of about 150 writes per second, and typical IDE drives might handle about 60 per second. In this scenario, you could do a maximum of 150 inserts per second because each insert requires a physical I/O to write the log record. (SQL Server could include more than one transaction in one log write, but let's keep the example simple.)
Still with us? Let's assume that bumping logwrite sleep to 50 milliseconds lets SQL Server batch 10 INSERT transactions per log write; the increase would let you perform 10 transactions per physical I/O. In theory, you can now do 1,500 INSERT statements per second, assuming the disks are still performing 150 physical I/O operations per second (150 I/Os * 10 INSERT statements per I/O).
See the benefit now? Don't get too excited though--you won't want to fiddle with this parameter under most circumstances. In general, increasing logwrite sleep tends to be more useful in high-volume, high-concurrency OnLine Transaction Processing (OLTP) environments with very short transactions. In these conditions, your application has many users simultaneously issuing many INSERT/UPDATE/DELETE transactions that commit very quickly. Increasing logwrite sleep will help your application if it can improve throughput (the number of transactions processed per second) without having a serious impact on response time for individual transactions. The downside of increasing the parameter is that individual transactions might take longer. Their commit is delayed, and blocking might increase because the open transactions hold locks and other resources longer. You can use NT Performance Monitor to observe whether increasing logwrite sleep's value has increased the number of transactions written per log record (SQL Server: I/O--Transactions Per Log Record) without a noticeable drop in response time for individual queries.
Q: SQL Enterprise Manager (SEM) doesn't let me scroll when I'm looking at long messages in the error log. Can I do anything?
Looking at the error log with SEM is handy, but you can get very frustrated when you can't scroll to the end of a long message. Until Microsoft decides to spend five minutes of developer time to add a scroll bar to the user interface, you can use xp_readerrorlog, an undocumented extended stored procedure that returns the error log as a query result (see last month's column for a description of many other undocumented extended stored procedures). Another approach is to double-click on the error message from SEM to pop up a dialog box that shows the entire message.
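In its simplest form, the procedure takes no arguments and lives in master:

```sql
EXEC master..xp_readerrorlog
```

Run it from ISQL/w and you can scroll the entire result set, long messages included. It also appears to accept an integer parameter to read one of the archived error logs instead of the current one, but because the procedure is undocumented, test that behavior on your own server before you rely on it.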
Tip: Using Trace Flag 1140
Brian just came across this tip on a private Microsoft newsgroup reserved for the SQL Server Most Valuable Professionals (MVPs). Usually, private threads are part of a nondisclosure agreement, but Microsoft said we could pass this on. An MVP's customer had a large database that seemed to work fine most of the time. However, occasionally, the database seemed to stop for a few seconds--up to a minute or even more. Investigations determined that CHECKPOINT bottlenecks weren't the problem, but rather INSERT statements that normally completed very quickly.
Now, you probably know that SQL Server uses extents to manage space utilization for tables and indexes; extents are 16KB blocks of eight 2KB pages. SQL Server adds new extents to objects as they grow and ordinarily finds these extents by searching the list of available extents from beginning to end looking for ones not in use. Microsoft Product Support Services tells us that the search can take a long time on large (multigigabyte) databases and that allocating the new extents from the end of the chain might be faster than searching the entire chain from the beginning. (Note: We're greatly oversimplifying the SQL Server internals, but our description is accurate enough to explain what trace flag 1140 does.)
The solution is to set trace flag 1140 to tell SQL Server to grab free extents from the end of the chain rather than starting from the first extent and walking the chain until it finds a free one. Of course, the downside to this strategy is that your database might start looking like Swiss cheese because, along the chain, you'll have empty extents that SQL Server will never reuse. This situation will cause your database to grow much faster than usual. But tuning SQL Server is a series of tradeoffs and always requires field testing to find out what works best in a certain environment. Trace flag 1140 may help in one of your databases.
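You can turn the flag on for a test session with DBCC TRACEON, or make it permanent by adding -T1140 to the server's startup parameters:

```sql
DBCC TRACEON(1140)   -- affects the current connection; use -T1140 at startup for server-wide effect
```

Given the wasted-space tradeoff described above, try the per-connection DBCC form against a test copy of the database and measure before you commit to the startup parameter.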
Database administrators will be interested in Bill Wunder's site (http://www.nyx.net/~bwunder/sql/list.html), where they'll find several scripts and stored procedures--and a dozen or so links.
Send your questions to either of us or collectively to [email protected]. And be sure to check out the new, very excellent Windows NT Magazine SQL Server page at http://www.winntmag.com/sql. It features more awesome Q&As, links to SQL-related Web sites, and more.