
DataStream

LANGUAGES: ADO.NET | XML

ASP.NET VERSIONS: 1.0 | 1.1

 

Enhance Your Data Access With ADO.NET 2.0

ADO.NET 2.0 introduces a slew of new data access capabilities to improve the performance, scalability, and maintainability of your applications.

 

By Brian Noyes

 

At the end of October, Microsoft unveiled a host of new technologies at the Professional Developers Conference (PDC) that will have a significant impact on the future of development for the Windows platform. The main topics at PDC included Longhorn (the next version of Microsoft Windows), Whidbey (version 2.0 of the .NET Framework and the next version of Visual Studio), and Yukon (the next version of SQL Server). .NET 2.0 will include significant enhancements for just about every kind of application you are developing for Windows and the Web today. The December issue of asp.netPRO has extensive coverage of the new features of ASP.NET 2.0. In this article, I want to introduce you briefly to some of the new features coming in ADO.NET 2.0 that you can use to enhance your data access architectures and code.

 

Two Intersecting Worlds of Data

As usual with .NET, there are two major axes to the data access improvements - those focusing on relational data access and those focusing on XML data access. Some of the enhancements in both arenas are just improvements to the existing classes in the .NET Framework to make them more flexible or better for advanced data access scenarios. A lot of the changes have to do with taking advantage of some of the new capabilities provided by Yukon, the next version of SQL Server, that will be released at about the same time as .NET 2.0. The changes are pretty widespread, so all I have room for here is a quick overview and summary of the changes to the ADO.NET relational side of things. I'll be covering the changes in the XML arena in future articles, but wanted to mention them here to whet your appetite. Figure 1 summarizes the significant areas of change in the relational and XML data access functionality in .NET 2.0.

 

Relational Changes                          XML Changes
SQL Server managed code support             XML document change management support
Asynchronous command and query support      SQL Server XML native type support
Bulk copy capability                        XQuery support
Batched update support                      XML Views
Object Relational Mapping Engine

Figure 1. Summary of new data access features in .NET 2.0.

 

There are several thrusts I want to cover on the relational front. The first has to do with improvements to enable the use of managed languages within the SQL Server environment to write user-defined types, stored procedures, user-defined functions, and triggers. There has already been a lot of discussion and controversy surrounding this capability and a lot of debate as to whether this is really a good thing. I am a firm believer in keeping your stored procedures as simple as possible for maintainability in enterprise applications. You should keep your business logic encapsulated in objects in your middle tier, unless you are forced to push some of that logic down into the database for network bandwidth reasons. It is much easier to scale out and reuse business objects in the middle tier than business logic in the database.

 

With the ability to write code in the database using managed languages, there is going to be a natural temptation to put more and more business logic into the database. I think people should resist that urge, but there are still many scenarios where you need some logic in the database, and the ability to write that code in the managed language of your choice instead of in T-SQL will definitely lead to more maintainable code in the data tier. When you couple that with the debugging support Visual Studio Whidbey adds for stepping into managed code running within the SQL Server environment, the code you run within SQL Server suddenly becomes much more understandable and maintainable.
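
To give you a feel for what this looks like, here is a minimal sketch of a managed stored procedure that runs inside SQL Server and streams a result set back to the caller. I'm basing the attribute and class names (SqlProcedure, SqlContext, SqlPipe) on the current pre-release bits, so they may change before the final release, and the table and column names are just placeholders.

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class StoredProcedures
{
   // A stored procedure written in C# that executes inside SQL Server.
   [SqlProcedure]
   public static void GetOrdersForCustomer(string customerId)
   {
      // The context connection is the in-process connection to the hosting
      // SQL Server instance; no network round trip is involved.
      using (SqlConnection conn = new SqlConnection("context connection=true"))
      {
         SqlCommand cmd = new SqlCommand(
            "SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = @id", conn);
         cmd.Parameters.AddWithValue("@id", customerId);
         conn.Open();
         // SqlContext.Pipe streams the results back to whoever called the procedure.
         SqlContext.Pipe.ExecuteAndSend(cmd);
      }
   }
}

You compile code like this into an assembly, register the assembly with the database, and then expose the method as a stored procedure so clients can call it just like a T-SQL procedure.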

 

So ADO.NET 2.0 includes a set of classes that form the layer between your managed code and the SQL Server environment. There are modifications and additions to the classes you know and love, such as SqlCommand, SqlDataReader, and SqlParameter. There are also a number of new classes related to connecting to the data environment within SQL Server and executing commands. There is a new class to encapsulate query results, called SqlResultSet, that gives you the server-side cursor capability you may have been missing in the disconnected world of ADO.NET data access outside the database. There is also direct support for setting up and receiving SQL Server notifications, allowing you to design more event-driven data applications.
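
As a rough sketch of the notification support, here is what registering for a query notification looks like using the SqlDependency class in the current bits; treat the class and event names as subject to change before release, and the connection string and table names as placeholders.

using System;
using System.Data.SqlClient;

public class NotificationExample
{
   public static void WatchCustomers(string connStr)
   {
      // Start the listener that receives notification messages from SQL Server.
      SqlDependency.Start(connStr);

      using (SqlConnection conn = new SqlConnection(connStr))
      {
         SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM dbo.Customers", conn);

         // Associate a dependency with the command; SQL Server raises OnChange
         // when the data underlying this query changes.
         SqlDependency dep = new SqlDependency(cmd);
         dep.OnChange += delegate(object sender, SqlNotificationEventArgs e)
         {
            Console.WriteLine("Customers data changed: " + e.Info);
         };

         conn.Open();
         using (SqlDataReader reader = cmd.ExecuteReader())
         {
            while (reader.Read())
            {
               // Cache the results locally; the notification tells you when
               // the cached data is stale.
            }
         }
      }
   }
}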

 

Commands are from Venus, Queries are from MARS

Another new capability in ADO.NET 2.0 is focused on improving the performance and scalability of your data access code through asynchronous execution of commands. The first part of this is something called Multiple Active Result Sets (MARS), which lets you execute multiple queries simultaneously on an open connection. With ADO.NET in version 1.X of the Framework, you could only have one active query on an open connection, such as an open SqlDataReader. MARS addresses that and allows you to have more than one open cursor in the database while you are iterating through the results. There is also an asynchronous execution pattern wrapped around the SqlCommand object now, so that you can issue a query and have a callback method executed when the query is completed.
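
Here is a sketch of how the asynchronous pattern on SqlCommand works in the current bits; the Begin/End method names and the connection string keywords that enable asynchronous execution and MARS may still change before release.

using System;
using System.Data.SqlClient;

public class AsyncQueryExample
{
   public static void RunQuery()
   {
      // The Asynchronous Processing keyword enables the Begin/End pattern on the
      // connection; MultipleActiveResultSets enables MARS, so other commands can
      // execute on the same connection while this reader is still open.
      string connStr = "Data Source=.;Initial Catalog=Northwind;" +
         "Integrated Security=true;Asynchronous Processing=true;" +
         "MultipleActiveResultSets=true";

      SqlConnection conn = new SqlConnection(connStr);
      SqlCommand cmd = new SqlCommand(
         "SELECT CustomerID, CompanyName FROM Customers", conn);
      conn.Open();

      // Issue the query and get control back immediately; the callback runs
      // when the results are ready.
      cmd.BeginExecuteReader(delegate(IAsyncResult ar)
      {
         SqlCommand owner = (SqlCommand)ar.AsyncState;
         using (SqlDataReader reader = owner.EndExecuteReader(ar))
         {
            while (reader.Read())
            {
               Console.WriteLine(reader.GetString(1));
            }
         }
         owner.Connection.Close();
      }, cmd);

      Console.WriteLine("Query issued; free to do other work...");
   }
}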

 

Another common requirement that was lacking in ADO.NET 1.x was the ability to easily and efficiently ship data from one source to another programmatically. ADO.NET 2.0 includes a SqlBulkCopyOperation class that allows you to perform programmatic transfer of data from one source to another in a fashion similar to that achieved using the bcp.exe utility. The code to do this is as simple as that shown in Figure 2.

 

public void CopyCustomers()
{
   SqlConnection connSrc = new SqlConnection(m_connStrSrc);
   SqlConnection connDest = new SqlConnection(m_connStrDest);
   SqlCommand cmdGet = new SqlCommand("SELECT * FROM Customers", connSrc);
   SqlBulkCopyOperation bulkCopy = new SqlBulkCopyOperation(connDest);
   SqlDataReader readerSrc = null;
   try
   {
      connSrc.Open();
      connDest.Open();
      // Read the rows from the source and stream them into the destination table.
      readerSrc = cmdGet.ExecuteReader();
      bulkCopy.DestinationTableName = "Customers";
      bulkCopy.WriteDataReaderToServer(readerSrc);
   }
   finally
   {
      bulkCopy.Close();
      if (readerSrc != null)
      {
         readerSrc.Close();
      }
      connSrc.Close();
      connDest.Close();
   }
}

Figure 2. The new SqlBulkCopyOperation class allows you to push data from one source to another in bulk.

 

If you performed updates in .NET 1.x using DataSets and SqlDataAdapters with large numbers of modified rows, you may have noticed a significant performance hit. That's because in .NET 1.x the SqlDataAdapter issues a separate update, insert, or delete query for each modified row in the DataSet passed to it, resulting in a separate round trip to the database for each row. In ADO.NET 2.0, the SqlDataAdapter has been modified to allow batched updates, meaning it can group those queries into a batch and make only one round trip to the database per batch. You control the batch size simply by setting a property on the SqlDataAdapter.
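
A minimal sketch of what that looks like, assuming the UpdateBatchSize property name from the current bits and using a command builder to generate the update commands:

using System.Data;
using System.Data.SqlClient;

public static void SaveChanges(DataSet ds, string connStr)
{
   using (SqlConnection conn = new SqlConnection(connStr))
   {
      SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", conn);

      // Generate the INSERT/UPDATE/DELETE commands from the SELECT statement.
      SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

      // Send up to 50 row changes per round trip instead of one trip per row.
      adapter.UpdateBatchSize = 50;

      adapter.Update(ds, "Customers");
   }
}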

 

The final new relational capability I wanted to highlight is a new object-relational mapping engine called ObjectSpaces. ObjectSpaces forms a layer between a relational data source and your business objects to allow you to retrieve and store object state without having to use any explicit data access code (see Figure 3). You provide a set of XML schemas that define the relational schema, the object schema, and a mapping between the two. There are tools that will be available as part of Visual Studio and the Framework to help you create these schema files. Using these schemas, the ObjectSpaces engine allows you to load and save the state of the objects into an underlying set of tables in the database without the application code needing to know anything about what that underlying relational storage looks like. I'll be writing an article covering all the details on programming with ObjectSpaces in the March issue of C#PRO.

 


Figure 3. ObjectSpaces forms a layer between a relational data source and your business objects to allow you to retrieve and store object state without having to use any explicit data access code.

 

As you can see, the new features in ADO.NET 2.0 are more than cosmetic improvements. Your existing ADO.NET code should continue to work just fine, but for future applications, there are some significant new capabilities you can take advantage of to build more robust, maintainable, and scalable data-driven applications.

 

Brian Noyes is a consultant, trainer, speaker, and writer with IDesign, Inc. (http://www.idesign.net), a .NET-focused architecture and design consulting firm. Brian specializes in designing and building data-driven distributed applications. He has more than 12 years' experience in programming, engineering, and project management, and is a contributing editor for C#PRO, asp.netPRO, and other publications. Contact him at mailto:[email protected].

 

 

 

 
