SQL Server Q & A - 01 Sep 1999

Can you automatically update a column in a table every time a record is updated? I want to record which user performed the update. Currently, every UPDATE statement the application sends to the server includes the user name and a datetime stamp, and I want to automate this step. I tried a trigger, but you can't update the column through the deleted and inserted (virtual) tables. What do you suggest?

Set a default value so the column is automatically populated when you add data to the table:

CREATE TABLE mytable (
   c1 int NOT NULL,
   ...
   username sysname default user_name(),
   when_modified datetime default getdate()
)
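For example (a minimal sketch; the column list above is abbreviated), an insert that names only c1 lets the server fill the audit columns from their defaults:

INSERT mytable (c1) values (1)
go
SELECT c1, username, when_modified FROM mytable
go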

I tried that, but a default applies only to inserts and isn't an explicit value for the column on later updates. The server automatically fills in the defaults for username and when_modified when a row is inserted, but when I later update the row, the columns are already set by the original insert, so the defaults don't fire again. I need to know which user inserted a row, which user last updated it, and when each action occurred. For example:

CREATE TABLE TBRAGMAX (
   szMU varchar(5) NOT NULL,
   szCodRag varchar(10) NOT NULL, -- assumed declaration; column appears in the primary key
   iYear int NOT NULL,            -- assumed declaration; column appears in the primary key
   iMonth int NOT NULL,           -- assumed declaration; column appears in the primary key
   szInsUser sysname default user_name(),
   dtIns datetime default getDate(),
   szModUser sysname default user_name(),
   dtMod datetime default getDate(),
   CONSTRAINT XPKTBRAGMAX 
   PRIMARY KEY (szMU, szCodRag, iYear, iMonth)
   ON SEG_01
) 
go

The columns szInsUser and dtIns hold the user and the date from the original insert; szModUser and dtMod should be updated automatically each time the row is modified.

Try a trigger that updates the table it fires on (sometimes called a recursive trigger). This method stamps the modified row whenever an update is performed.

USE pubs
go
CREATE TABLE test (
   c1 int NOT NULL,
   c2 datetime default getdate()
)
go
INSERT test (c1) values (1)
go
CREATE trigger mytesttrig ON test FOR UPDATE AS
UPDATE test SET c2 = getdate() FROM test, inserted WHERE test.c1 = inserted.c1
go
SELECT  * FROM test
go
UPDATE test SET c1 = 2 WHERE c1 = 1
go
SELECT * FROM test
go
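Applied to your TBRAGMAX table, the same pattern might look like the following sketch (it assumes that the primary key columns szMU, szCodRag, iYear, and iMonth uniquely identify a row, as your constraint declares):

CREATE TRIGGER trgTBRAGMAXMod ON TBRAGMAX FOR UPDATE AS
-- Stamp the user and time of the latest modification on each updated row
UPDATE TBRAGMAX
SET szModUser = user_name(), dtMod = getdate()
FROM TBRAGMAX, inserted
WHERE TBRAGMAX.szMU = inserted.szMU
  AND TBRAGMAX.szCodRag = inserted.szCodRag
  AND TBRAGMAX.iYear = inserted.iYear
  AND TBRAGMAX.iMonth = inserted.iMonth
go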

I want to gather data from 15 SQL Server 7.0 servers onto one SQL Server 7.0 server. Should I use replication or Data Transformation Services (DTS)? Someday I might need to gather data from 500 to 600 source servers to one destination server.

The question comes down to whether replication meets your needs (because of the transformations) and how frequently you want to move the data to your central site. If you want just a few seconds or minutes of latency, use replication. If you want to have the data at your central site updated once or twice a day, look at DTS (although replication still might work for you). Remember that replication doesn't perform transformations, and it gives you exactly the same data on the target servers that you have on the source server.

I want to move data from 30 partner sites to one customer site. Of the 30 partners, 15 use our product and 15 use other products. The 15 partners who don't use our product send data through electronic data interchange (EDI), which isn't a problem. The other 15 can use replication or DTS. The partners don't send all the data from the application (e.g., they might send general information about a person but not the person's employment information). The partners don't want the customer to see their data by logging on directly to a partner's SQL Server. The destination server is a central repository of raw data, so this situation differs from a standard headquarters-and-division scenario. The data at the destination can contain duplicate records (e.g., the same person might have two records at the destination server from two different partners). A customer can create and update new information referencing the migrated data at the destination server. I plan to send data once a day.

With transactional replication, can I send only the data that has changed since the last replication and let SQL Server take care of it automatically? Is DTS like snapshot replication?

DTS is more than snapshot replication. It lets you access data from anywhere, including flat files transferred through EDI, or directly from another relational database management system (RDBMS), including SQL Server. DTS also lets you apply programmatic logic to the data before you load it. Also, you can configure DTS to grab only rows newer than a marker that you keep track of, such as a last-changed timestamp on the client. (For more information about DTS, see Don Awalt and Brian Lawton, "Introducing Data Transformation Services," Premiere Issue, and "Using DTS to Populate a Data Warehouse," April 1999.)
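For example, the source query a DTS package executes could be limited to rows changed since the previous transfer. The following is a sketch only; the Person table, its dtLastChanged column, and the one-row TransferLog tracking table are hypothetical names:

-- Pull only the rows modified since the last recorded transfer
SELECT Person.*
FROM Person, TransferLog
WHERE Person.dtLastChanged > TransferLog.dtLastRun
go
-- After the load succeeds, advance the marker
UPDATE TransferLog SET dtLastRun = getdate()
go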

I have set up a parent/child relationship between my tables, and I want to implement a cascade delete. What is the SQL Server equivalent of Oracle's cascade delete option?

SQL Server 7.0 doesn't have a cascade delete declarative referential integrity (DRI) feature, so you need to code cascading deletes as triggers. Because the delete restrict of DRI is checked before any cascading delete triggers can fire, you can't use foreign key DRI on any tables with which you want this functionality. Therefore, you need to enforce all your foreign keys with triggers. However, you can still declare primary keys. (For more information, see Itzik Ben-Gan, "Maintaining Referential Integrity," available to WebSQL subscribers at http://www.sqlmag.com.)
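For example, with the foreign key enforced by triggers instead of DRI, a cascading delete might look like the following sketch (the Orders and OrderDetails tables and the OrderID column are hypothetical):

-- Parent-side trigger: remove child rows for every deleted parent row
CREATE TRIGGER trgOrdersCascadeDelete ON Orders FOR DELETE AS
DELETE OrderDetails
FROM OrderDetails, deleted
WHERE OrderDetails.OrderID = deleted.OrderID
go

-- Child-side trigger: reject inserts or updates that reference a missing parent
CREATE TRIGGER trgOrderDetailsFK ON OrderDetails FOR INSERT, UPDATE AS
IF EXISTS (SELECT * FROM inserted
           WHERE NOT EXISTS (SELECT * FROM Orders
                             WHERE Orders.OrderID = inserted.OrderID))
BEGIN
   RAISERROR ('No matching Orders row for OrderDetails.', 16, 1)
   ROLLBACK TRANSACTION
END
go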

SQL Server 6.5 updates and deletes have two modes, deferred and direct. I can't find any SQL Server 7.0 documentation that says whether these update and delete modes still exist, and neither graphical nor textual execution plans identify which mode an operation uses. Do these two modes still exist?

No, these two modes aren't relevant to SQL Server 7.0. In SQL Server 6.x, a deferred update first writes information to the log; the log is then rescanned, and the appropriate deletes and inserts (the modifications) are read back from the log and applied. SQL Server 6.x needs this mechanism because its query processor isn't smart enough to avoid rows that move forward in an index during the scan and would otherwise be updated again.

This scheme is inefficient for large updates: the index is scanned and log records are written to the log, the log is rescanned (which slows the process down), and the index is touched again to apply the updates. (If the update is large, the pages read first might have fallen out of the cache by then.)

Because of the deferred update's inefficiencies, it isn't part of SQL Server 7.0. Instead, SQL Server 7.0 relies on the query processor to avoid the problems SQL Server 6.x encountered. For large updates, SQL Server 7.0 typically applies the changes in key order and touches each updated index page only once. Some large updates run 10 times faster on SQL Server 7.0.

SQL Server 6.5 introduced the direct field update in-place (DFUIP) feature, which somewhat optimized updates of fixed-length columns. SQL Server 7.0 doesn't have DFUIP; instead, it optimizes the detection of differences within a row to minimize both the amount of log generated for the row and the time spent computing the differences.

So the answer is that there is no such thing as a deferred update in SQL Server 7.0, and the distinction between direct and in-place updates has largely disappeared as well.
