We want to use SQL Mail over the Internet. What problems might we encounter?
In some cases, SQL Server hangs when you use SQL Mail with a profile configured to use a Post Office Protocol 3 (POP3) Internet mail server. Apparently, a bug in the Windows NT Service Pack 3 (SP3) version of MAPI32.DLL causes this problem, but only if you've set SQL Server's priority boost option to 1. Microsoft has a SQL Server hotfix that solves this problem. However, because hotfixes aren't fully regression tested (and can create new bugs), Microsoft conservatively suggests leaving priority boost set to 0 if you're encountering this problem.
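As a sketch, you can check and reset the setting with sp_configure (priority boost is an advanced option, so you might first need to enable the show advanced option setting):

```sql
-- Make advanced options visible (option name as in SQL Server 6.5)
EXEC sp_configure 'show advanced option', 1
RECONFIGURE WITH OVERRIDE

-- Show the current priority boost setting
EXEC sp_configure 'priority boost'

-- Set it back to 0, the conservative default Microsoft recommends here
EXEC sp_configure 'priority boost', 0
RECONFIGURE WITH OVERRIDE
```

A change to priority boost takes effect only after you stop and restart SQL Server.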
I ran DBCC NEWALLOC on my database and received the following error message on two system tables, sysprocedures and syscomments:
TABLE: sysprocedures OBJID = 5 INDID=1 FIRST=88 ROOT=164803 DPAGES=9076 SORT=0
Msg 2558, Level 16, State 2, Server Tom
Extent not within segment: Object 5, indid 1 includes extents on allocation page 414208, which is not in segment 0.
SQL Server Books Online (BOL) says that to fix the system tables, I must restore from a clean backup, but I don't have a clean backup. Can I fix this problem another way?
We think you'll find the following TechNet information about nonclustered index pages useful. To find the article in TechNet, search on error 2558, and choose SQL Server Troubleshooting Guide in the result set. Read the whole article, and then reread this section to see how to fix your problem.
Nonclustered index pages. The error occurred on the index of a user table. You can clear the error by dropping and re-creating the index. Creating a clustered index causes all nonclustered indexes to be rebuilt as well. Therefore, if 2558 errors are occurring on several indexes on the same table, you can create a clustered index on that table to clear all these errors. Another approach is to use the Database Consistency Checker (DBCC) DBREINDEX command to rebuild the index in question.
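As a sketch (the table, index, and column names here are placeholders for your own), either approach looks like this:

```sql
-- Option 1: rebuild every index on the affected table in one pass
DBCC DBREINDEX ('MyTable')

-- Option 2: drop and re-create just the index reporting error 2558
DROP INDEX MyTable.MyNonclusteredIndex
CREATE INDEX MyNonclusteredIndex ON MyTable (MyColumn)
```

After rebuilding, rerun DBCC NEWALLOC to confirm that the 2558 errors are gone.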
What's the best way to extract data from SQL Server in realtime? I work for a company that needs to store manufacturing data (e.g., material and receipt information) in SQL Server and to extract this information whenever someone adds, updates, or deletes a record. I've considered triggers (i.e., extracting a record to a sequential file), but do you know any alternatives that are event-driven and realtime?
Your options are the trigger approach or the API route. You need to base your decision on functional business requirements, transaction volume, and so on. The API approach gives you more flexibility to batch by time or number of transactions, but it requires extra coding and the extra maintenance that goes with API routines. For starters, you might want to think about whether you need receipt information in realtime. In a shop floor environment, you probably need receipt information only if people are waiting for parts. If this situation occurs frequently, you're justified in wanting realtime receipt information.
You've probably discovered that you can't write directly to a sequential file, but you can work around that limitation by writing your file output to a table and then using bulk copy program (bcp) to extract the data. You might want to consider implementing replication--set the timing to every second or two, or every x transactions. If you use a second server on site, pulling reports off the subscriber is simple and takes the trigger load off the server; but it adds the overhead associated with replication. You need to experiment to see what works best for you.
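If you stay with the trigger approach, a minimal sketch of the table-then-bcp pattern (the table and column names are hypothetical) queues changed rows into a staging table that bcp later drains:

```sql
CREATE TRIGGER trgReceiptAudit ON Receipts
FOR INSERT, UPDATE, DELETE
AS
-- Queue the new or changed rows; a scheduled bcp job can extract
-- ReceiptQueue to a sequential file and then clear the table
INSERT INTO ReceiptQueue (ReceiptID, QueuedAt)
SELECT ReceiptID, GETDATE()
FROM inserted
```

For deletes, you can add a similar INSERT that selects from the deleted pseudotable, perhaps with a flag column marking the action.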
Can you reference more than 16 tables in one query?
No, SQL Server can't reference more than 16 tables in one query. Most queries don't need to exceed that limit, but from time to time you'll bump your head against that ceiling. We recently ran across a tip in a public newsgroup discussion that might come in handy if you ever hit this barrier: You can use trace flag 105 to reference more than 16 tables with one query.
But be warned: Microsoft doesn't support trace flags. If you want to use a trace flag, be cautious and thoroughly test it in your environment before you deploy it in a production environment.
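If you decide to experiment with the flag anyway, you can enable it for your session with DBCC TRACEON (or server-wide with the -T startup parameter):

```sql
-- Enable trace flag 105 for the current connection only
DBCC TRACEON(105)

-- ... run the query that references more than 16 tables ...

-- Turn the flag back off when you're done
DBCC TRACEOFF(105)
```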
The good news is that you probably won't have to wait long for a legitimate, Microsoft-sanctioned solution, because Microsoft says that SQL Server 7.0 will support up to 32 tables in a SELECT statement. Another limitation we're happy to see scotched is the 64KB maximum source file size for stored procedures.
We want to store images in SQL Server 6.5 and publish these images on our Web site. We're running both Internet Information Server (IIS) 4.0 and Microsoft Transaction Server (MTS). My colleagues have suggested that we use an object-oriented database such as Object Design's ObjectStore, Computer Associates' Jasmine, or Informix's Universal Server Universal Data Option. These products handle the data as graphics and provide methods for pattern matching and other tasks. What do you think?
Don't lose the faith. Microsoft doesn't offer object-relational, or universal, database extensions such as IBM's extenders, Oracle's cartridges, Sybase's adapters, or Informix's DataBlades. But Microsoft doesn't want you to buy a competitor's relational database management system (RDBMS) or a competitor's object database management system (ODBMS). The most common way to handle images in SQL Server is to store URLs (or network filenames) in an image table. In other words, you store pointers to the graphics files, which can be as large as 2GB each. You might also want to store other information about the graphics files, such as ownership and royalty information and keywords.
You can also store large binary data files using SQL Server's image or text data types that you access via the Data Access Objects (DAO)/Remote Data Objects (RDO)/ActiveX Data Objects (ADO) GetChunk or AppendChunk methods. Image and text data types can each store up to 2^31 - 1 bytes (about 2GB) of binary data. SQL Server 6.5 stores and manages data as a linked list of 2KB data pages that appear as if they were stored in a table row. SELECT statements return text and image data up to the limit specified in the global variable @@TEXTSIZE, and you can use READTEXT to read the data. In SQL Server 7.0, text and images are variable-length data types with the same maximum size limit, but the rows contain only 16-byte addresses to files that contain the data.
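A minimal sketch of the image data type and READTEXT approach (the table and column names are hypothetical):

```sql
CREATE TABLE Pictures
(
   PictureID   int   NOT NULL,
   PictureData image NULL
)

-- Read the first 4KB of a stored image; TEXTPTR returns the 16-byte
-- pointer that READTEXT requires
DECLARE @ptr varbinary(16)
SELECT @ptr = TEXTPTR(PictureData)
FROM Pictures
WHERE PictureID = 1

READTEXT Pictures.PictureData @ptr 0 4096
```

From a client application, you'd typically loop with GetChunk (or READTEXT offsets) until you've retrieved the full image.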
Microsoft isn't ignoring the special needs of giant image and multimedia data files or the challenge of competitors' universal databases. See the http://www.terraserver.com site with TBs of SQL Server-managed geospatial data, and read more about the topic by downloading Jim Gray's paper from http://research.microsoft.com/scalable.
I'm looking for a programmatic way to find out how long SQL Server has been running. However, I need to be able to get the answer from a Transact-SQL (T-SQL) script.
You can easily get this information from the sysprocesses table, a table in the master database that contains information about processes running on the server. Unlike an ordinary table, sysprocesses has no permanent disk storage associated with it: SQL Server builds the table dynamically whenever a user queries it, and you can't update it. However, it contains all sorts of information, as you can see in Table 1, that's helpful for performance tuning and doing SQL tricks.
You can figure out how long SQL Server has been running by checking the login_time column for any of the four system processes--CHECKPOINT, Mirror Handler, Lazy Writer, and RA Manager--that SQL Server uses to help manage certain tasks. These processes run continuously, so you can use the login_time for any of these processes to check the time that the server started. For example, you can use the command
SELECT login_time FROM master..sysprocesses WHERE spid = 1
What's a hash bucket, and why would you want more of them?
SQL Server always accesses a page from the data cache. Sometimes the page is already there, and sometimes SQL Server needs to read the page from disk. In either case, SQL Server must first check whether the page is in the data cache. To perform that check, SQL Server uses a hashing algorithm with a defined number of hash buckets, which you control with the sp_configure command.
You can compare hash buckets to dresser drawers when you're looking for a specific pair of socks. You know that the socks (data page) must be in that drawer (hash bucket) unless they're dirty and still in the laundry (the page is not already in the data cache). You know the socks are in the laundry (on disk) if you search through the entire drawer (hash bucket) without finding the socks. Of course, the time it takes to search for the socks depends on how many pairs of socks you have in the drawer. You can search through 5 pairs of socks faster than you can search through 50 pairs of socks.
By default, SQL Server 6.x uses 7993 hash buckets and hashes the requested data page's ID to determine the bucket the page is buffered in, if the page is already in the data cache. SQL Server reads the page from disk if it can't find the requested page in the bucket. Just like your socks, SQL Server can search the bucket quicker if fewer pages are buffered in that bucket. Oddly, you can degrade system performance by adding lots of memory if the hash bucket chain length (the number of pairs of socks in the drawer) gets too long. Microsoft says that you need to keep the average bucket chain length to four pages or fewer and that you probably won't need to change the default 7993 value unless your system has more than 192MB of memory.
From our experience, average chain lengths can easily be longer than four pages with much lower memory configurations. Always monitor the length of your hash buckets regardless of how much memory you have, and increase the sp_configure value if your average chain lengths are greater than four. You can easily check chain sizes with two Database Consistency Checker (DBCC) commands: DBCC BUFCOUNT and SQLPERF(HASHSTATS).
Both commands report slightly different views of the same hash chain information. BUFCOUNT reports the length of the 10 longest chains, and SQLPERF(HASHSTATS) reports additional information on the average and maximum chain lengths. SQLPERF(HASHSTATS) is available only in SQL Server 6.5 Service Pack 3 (SP3) and later. You must run DBCC TRACEON(3604) before you run either command so that SQL Server sends the DBCC output to the client screen rather than just writing it to the error log. See Table 2 and Table 3 for sample output.
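Putting those commands together, a monitoring session might look like this (the new bucket count at the end is only an example value and takes effect after a restart):

```sql
-- Route DBCC output to the client instead of the error log
DBCC TRACEON(3604)

-- Report the 10 longest hash chains
DBCC BUFCOUNT

-- Report average and maximum chain lengths (SQL Server 6.5 SP3 and later)
DBCC SQLPERF(HASHSTATS)

-- If average chains exceed four pages, raise the bucket count
EXEC sp_configure 'hash buckets', 16384
RECONFIGURE WITH OVERRIDE
```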
Do you know how long your hash chains are? A properly tuned hash chain can have a surprisingly big effect on performance, and you can easily make this fix. Go tune it right now. But don't get too attached to hash buckets, because SQL Server 7.0 eliminates them.
I have a stored procedure that loops through all the records in a customer table, performs a calculation, and updates each record. I've got my Lock Escalation (LE) threshold maximum set to the default (200), but my stored procedure never escalates the individual page locks to an exclusive table lock as I would expect. What's happening?
SQL Server 6.x typically locks at the page level, but sometimes it escalates to a table lock to be more efficient. Three sp_configure parameters control this escalation process. (As in the case of hash buckets, SQL Server 7.0 will eliminate these parameters, probably because of changes that the new row-level locking requires.) Here's a brief description of the LE parameters:
LE threshold maximum. This parameter determines the maximum number of page locks to hold before escalating to a table lock. If the number of page locks is greater than the escalation maximum, a table lock will occur regardless of whether the number of page locks has exceeded the LE threshold percent. The default is 200.
LE threshold minimum. This parameter determines the minimum number of page locks a user must have before SQL Server escalates to a table lock. A table lock will occur only if a user reaches this minimum when the number of page locks exceeds the LE threshold percent. LE threshold minimum prevents the server from escalating to a table lock for small tables, which quickly reach the LE threshold percentage. The default is 20.
LE threshold percent. This parameter specifies the percentage of page locks needed on a table before a SQL Server requests a table lock. The default (0) causes a table lock to occur only when the number of page locks reaches the LE threshold maximum.
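You can inspect or change all three parameters with sp_configure; the values below simply restate the defaults described above:

```sql
-- Escalate to a table lock after this many page locks per statement
EXEC sp_configure 'LE threshold maximum', 200

-- Minimum page locks before percent-based escalation can kick in
EXEC sp_configure 'LE threshold minimum', 20

-- 0 means escalate only when LE threshold maximum is reached
EXEC sp_configure 'LE threshold percent', 0

RECONFIGURE
```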
Escalation is usually straightforward, but the original question points out one gotcha that trips up many database administrators and developers. Let's say you've set all the LE parameters to their defaults (i.e., LE threshold maximum = 200). Escalation happens when one statement has locked more than 200 pages. The focus on statement is important, because a transaction can lock more than 200 pages of a table without escalating to a table lock. Consider the following example:
BEGIN TRANSACTION
DECLARE @LoopCounter int
SELECT @LoopCounter = 1
WHILE @LoopCounter <= 300
BEGIN
   -- each single-row UPDATE locks only one page
   UPDATE CustomerTable SET FakeColumn = @LoopCounter WHERE CustomerID = @LoopCounter
   SELECT @LoopCounter = @LoopCounter + 1
END
COMMIT TRANSACTION
This batch updates 300 rows in one transaction, so you might think that the individual page locks would escalate to one table lock. But the batch simply accumulates 300 individual page locks, because each UPDATE statement affects only one row and locks only one page. Lock escalation works exactly as Microsoft documents it: Escalation occurs only when one statement affects more than the maximum number of pages.
Does the REVOKE command always revoke permissions?
Strange but true: The REVOKE command doesn't work when used against a table with a certain number of columns. Explaining this phenomenon isn't easy because the behavior changes between service packs. In SQL Server 6.5 Service Pack 4, the REVOKE command doesn't remove all existing permissions if the binary representation of the number of columns in the table ends in 111. The command removes the permissions on some columns but not on all of them. You can hack the system tables directly to remove the rogue permissions, but adding an extra column is a much safer and easier method. Table 4, page 201, shows the decimal and binary representation of numbers that cause this problem; the query in Listing 1 also reveals which tables are affected.
The script in Listing 2 shows what happens when you change permissions on a table and then run sp_helprotect to look at the configured permission. The resulting output illustrates that you can't revoke permissions when the table has seven columns, but you can after you add an eighth, dummy column.
This problem happens with all the numbers listed in Table 5 (i.e., numbers that end with binary 111) when you're running SQL Server 6.5 with any service pack. Some users have reported that the condition also occurs on tables in which the number of columns is divisible by 7. Windows NT Magazine hasn't been able to confirm the divisible-by-7 problem, but keep the possibility in mind if you experience strange REVOKE problems. Don't fret: Microsoft has fixed this problem in SQL Server 7.0 beta 2.
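The dummy-column workaround looks roughly like this (the table, column, and user names are hypothetical):

```sql
-- Suppose SevenColTable has 7 columns (binary 111), so REVOKE misbehaves.
-- Add a throwaway eighth column to move past the problem column count.
ALTER TABLE SevenColTable ADD DummyCol int NULL

-- Now the REVOKE removes the permissions as expected
REVOKE SELECT ON SevenColTable FROM SomeUser

-- Verify that the permissions are really gone
EXEC sp_helprotect 'SevenColTable'
```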
Can you handle the tempdb database in some way other than leaving it on the master device? Users frequently ask me how to move tempdb from the master device and put it on its own device, but I'm unsure how to move it.
Richard Waymire provided a solution procedure in a SQL Professional article about a year ago. Here are the key steps:
- Move tempdb to RAM (this move takes tempdb off the master device).
- Use sp_diskdefault to unmark the master and any other device as a default device.
- Create a new device for tempdb and mark the device as the default.
- Take tempdb out of RAM. Tempdb will now be on the new device.
- Expand the device as necessary and alter tempdb to use the expanded space.
- Use sp_diskdefault to re-mark whatever default devices you want.
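The steps above translate into something like the following sketch (the device name, file path, and sizes are assumptions, and the tempdb in ram changes each require a SQL Server restart to take effect):

```sql
-- Step 1: move tempdb into RAM (size in MB), then restart SQL Server
EXEC sp_configure 'tempdb in ram (MB)', 10
RECONFIGURE WITH OVERRIDE

-- Step 2: unmark the master device (and any other defaults)
EXEC sp_diskdefault 'master', defaultoff

-- Step 3: create a dedicated device and mark it as the default
DISK INIT NAME = 'tempdb_dev',
   PHYSNAME = 'C:\MSSQL\DATA\TEMPDB.DAT',
   VDEVNO = 10,
   SIZE = 51200            -- size is in 2KB pages (100MB here)
EXEC sp_diskdefault 'tempdb_dev', defaulton

-- Step 4: take tempdb back out of RAM and restart; tempdb lands on
-- the new default device
EXEC sp_configure 'tempdb in ram (MB)', 0
RECONFIGURE WITH OVERRIDE

-- Step 5: expand tempdb on the new device as necessary (size in MB)
ALTER DATABASE tempdb ON tempdb_dev = 100

-- Step 6: re-mark whatever default devices you normally want
EXEC sp_diskdefault 'master', defaulton
```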