
9 Data-Access Best Practices Tips

Follow this checklist to keep your code efficient.

Feature Companion

LANGUAGES: All .NET Languages

ASP.NET VERSIONS: 1.0 | 1.1

 

 

By Jeffrey Hasan and Kenneth Tu

 

Good data-access code begins with making solid design decisions. In addition, it helps if you follow a list of best practices, which you can think of as a checklist that keeps your code efficient and might also buy you extra performance. Here are some generally recognized best practices for writing data-access code.

 

1. Use the Appropriate Data-Access Object

The easiest and smartest step you can take is always to use the most appropriate ADO.NET data object for the given scenario. Always use the DataReader's streaming data access for read-only data retrieval operations. Use the DataSet object for data update operations only if you need to perform the updates in disconnected mode. (Alternatively, you can write dedicated update stored procedures that resolve updates for you). Use the DataView object when you want to work with filtered views of a larger DataSet object. The DataView object provides many of the benefits of the DataSet object, but without as much overhead.
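
Here is a minimal sketch of both approaches; the connection string, the query, and the existing myDataSet DataSet are illustrative placeholders, and the code assumes the System.Data and System.Data.SqlClient namespaces are imported:

' Read-only retrieval: stream the rows with a DataReader
Dim conn As New SqlConnection("Server=(local);Database=Northwind;Integrated Security=SSPI")
Dim cmd As New SqlCommand("SELECT ProductID, ProductName FROM Products", conn)
conn.Open()
Dim sqlDR As SqlDataReader = cmd.ExecuteReader(CommandBehavior.CloseConnection)
While sqlDR.Read()
    Response.Write(sqlDR("ProductName").ToString() & "<br>")
End While
sqlDR.Close() ' Also closes the connection because of CommandBehavior.CloseConnection

' Filtered view of data you already hold in a DataSet
Dim dvProducts As New DataView(myDataSet.Tables("Products"))
dvProducts.RowFilter = "UnitPrice > 20"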

 

2. Use Stored Procedures, Not Embedded T-SQL

An accepted T-SQL design approach is to compile your Data Manipulation Language (DML) statements into stored procedures. Stored procedures execute much faster than embedded T-SQL statements because they are precompiled on the database server and are reusable. The ADO.NET Command object lets you execute embedded T-SQL directly by assigning the statement to the CommandText property and setting the CommandType enumeration to Text. We have seen code listings that assemble a T-SQL string dynamically in code; you should always avoid this approach. Not only does embedded T-SQL execute more slowly than a stored procedure, but you also can introduce parsing errors into the T-SQL statement, which in turn generate runtime errors. Worse yet, if your application executes dynamically assembled T-SQL, you might inadvertently allow it to execute injected commands that modify the database structure (such as dropping tables). Always execute stored procedures, and use input parameters (preferably with strong typing).
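
Here is a minimal sketch of calling a stored procedure with a strongly typed input parameter; GetProductsByCategory and connString are hypothetical, and the code assumes Imports System.Data and System.Data.SqlClient:

Dim conn As New SqlConnection(connString)
Dim cmd As New SqlCommand("GetProductsByCategory", conn)
cmd.CommandType = CommandType.StoredProcedure

' Strongly typed input parameter instead of a concatenated T-SQL string
cmd.Parameters.Add("@CategoryID", SqlDbType.Int).Value = 7

conn.Open()
Dim sqlDR As SqlDataReader = cmd.ExecuteReader(CommandBehavior.CloseConnection)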

 

You can minimize the size of the returned result set by returning only the records you need. This is especially important for query results that will be marshaled over the wire to Web clients. Also, avoid using the * wildcard in SQL queries; always specify the exact fields you want to retrieve.

 

3. Use Complex Stored Procedures, Not Multiple Retrievals

Use complex stored procedures that return multiple result sets rather than making multiple calls to multiple stored procedures. ADO.NET makes it easy to work with multiple result sets. For example, you can use the SqlDataReader object's NextResult method. Here is a code listing that demonstrates how to iterate through every record in every returned result set:

 

' objDB is an instance of a centralized data-access class (see Tip 6);
' RunQueryReturnDR executes the stored procedure and returns a SqlDataReader
Dim sqlDR As SqlDataReader = objDB.RunQueryReturnDR("MyStoredProcedure")
Dim arrResult(0) As String
Dim i As Integer = -1

Do
    While sqlDR.Read() ' Read each record in the current result set
        i += 1
        ReDim Preserve arrResult(i)
        arrResult(i) = sqlDR("ProductNumber").ToString()
    End While
Loop While sqlDR.NextResult() ' Move to the next result set

 

If possible, write your stored procedures to return related result sets in a single batch. This helps reduce network traffic and overhead on the database server.

 

4. Use SQL Data Types With SQL Server

The .NET Framework and SQL Server use different data types that do not always convert cleanly between each other. The System.Data.SqlTypes namespace provides a set of .NET Framework structures that represent SQL Server data types in the managed environment. In addition, the SqlDataReader class provides typed accessor methods that map retrieved field values into the appropriate structures automatically. Always use the typed accessor methods when retrieving SQL Server data to avoid type-conversion errors.
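
For example, here is a sketch of reading values through the typed accessors; the column ordinals are illustrative, and the code assumes Imports System.Data.SqlClient and System.Data.SqlTypes:

Dim total As Double = 0
While sqlDR.Read()
    ' GetSqlInt32 and GetSqlMoney return SqlTypes structures, not boxed .NET types
    Dim productID As SqlInt32 = sqlDR.GetSqlInt32(0)
    Dim unitPrice As SqlMoney = sqlDR.GetSqlMoney(1)
    If Not unitPrice.IsNull Then ' SqlTypes represent database NULL explicitly
        total += unitPrice.ToDouble()
    End If
End While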

 

5. Use Connection Pooling

Always use connection pooling. The SQL Server managed provider supports connection pooling by default, with little effort required on your part. At most, you need to modify the connection string to override the default settings for the connection-pooling parameters.
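
For example, here is a connection string that overrides a few pooling defaults (the values shown are illustrative, not recommendations):

' Pooling is on by default for the SQL Server managed provider;
' these keywords simply override the default pool settings
Dim connString As String = _
    "Server=(local);Database=Northwind;Integrated Security=SSPI;" & _
    "Min Pool Size=5;Max Pool Size=50;Connection Lifetime=120"
Dim conn As New SqlConnection(connString) ' Drawn from, and returned to, the pool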

 

6. Use Centralized Data-Access Functions

Always centralize your data-access functions in a dedicated class file. This lets you maintain your database code in one central location, which makes it easier to write and maintain. A data-access class implements wrapper functions for basic data-access operations, including executing a stored procedure and returning a DataReader, a DataSet, an XmlReader, or no return value at all.

 

These wrapper functions encapsulate the details of setting up the Connection and Command objects, as well as any additional objects. This code becomes repetitive, and the last thing you want is the same constructs repeated in dozens of locations throughout your code. Not only is this difficult to maintain, but it also artificially inflates the size of the application executable.
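
Here is a minimal sketch of what such a class might look like; it mirrors the RunQueryReturnDR call used in Tip 3, but the class and member names are illustrative, and the code assumes Imports System.Data and System.Data.SqlClient:

Public Class DataAccess
    Private _connString As String

    Public Sub New(ByVal connString As String)
        _connString = connString
    End Sub

    ' Execute a stored procedure and return a live DataReader;
    ' CommandBehavior.CloseConnection closes the connection when the reader is closed
    Public Function RunQueryReturnDR(ByVal procName As String) As SqlDataReader
        Dim conn As New SqlConnection(_connString)
        Dim cmd As New SqlCommand(procName, conn)
        cmd.CommandType = CommandType.StoredProcedure
        conn.Open()
        Return cmd.ExecuteReader(CommandBehavior.CloseConnection)
    End Function
End Class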

 

Microsoft provides a set of .NET components named the Microsoft Application Blocks for .NET, which contain optimized data-access wrapper functions. You can read more about them, and download the code, at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/daab-rm.asp.

 

7. Use Good Programming Sense

The .NET managed runtime environment provides advanced garbage collection designed to optimize resource usage and remove unnecessary references. However, you should still follow the fundamental housekeeping rules that keep your code readable and efficient. In particular, always clear object references when you are done with them. If you use data connections, keep each connection open for as short a time as possible: open it right before it is needed, and close it as soon as it is no longer required. Never leave connections open in your code, especially if you are working with streaming data objects such as the DataReader. In summary, close connections and clear unused object references; this is good coding practice, and it makes good sense.
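
For example, here is a sketch of the open-late, close-early pattern with guaranteed cleanup; connString and the stored procedure name are placeholders:

Dim conn As New SqlConnection(connString)
Dim cmd As New SqlCommand("GetProducts", conn)
cmd.CommandType = CommandType.StoredProcedure

Try
    conn.Open() ' Open the connection as late as possible
    Dim sqlDR As SqlDataReader = cmd.ExecuteReader()
    While sqlDR.Read()
        ' Process each row
    End While
    sqlDR.Close()
Finally
    conn.Close()   ' Close as soon as the work is done, even if an exception occurred
    conn = Nothing ' Clear the reference when you are finished with it
End Try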

 

8. Use Exception Handling Appropriately

Exception handling is expensive, especially throwing errors. Always implement structured error handling in your applications, but design your code to avoid falling into exception handlers. This might seem obvious, but it is an adjustment for developers who are used to inline coding, such as in classic ASP. Inline code often uses On Error Resume Next constructs that let execution continue past errors so the code can check for an expected error result. This approach is unnecessary in the .NET managed environment. Design your code to use exception handlers as destinations of last resort, and use the many error- and type-checking functions to detect problems before the runtime is forced to raise an exception.

 

Throw exceptions only if absolutely necessary because this is an expensive operation. The Exception classes provide a large amount of information that might go unused by the calling code that receives the thrown error. In this case, it is better to raise a custom error using Err.Raise than to throw an exception. This operation transmits basic error information such as an error number, source, and message, but it avoids expensive information such as the detailed call stack.

 

Finally, if you provide more than one catch statement with differing filter criteria, remember to order them from most specific type to least specific type. For example:

 

Try
    ' Data-access code that might fail goes here
Catch SqlErr As SqlException
    ' Handle SQL Server-specific errors first (most specific type)
Catch err As Exception
    ' Handle any other exception (least specific type)
Finally
    ' Clean up resources, such as open connections
End Try

 

9. Use Helper Technologies

ASP.NET provides technologies that complement ADO.NET in providing optimized data access. In particular, ASP.NET supports a sophisticated set of caching options ranging from page-level output caching to data caching using the Cache API. Caching is an important consideration in ASP.NET application design. It is considerably faster to read data from a cache than it is to fetch it fresh from a data source. Caching reduces the number of queries executed against the database and delivers data more responsively. Caching does come with a price in terms of increased memory usage, particularly if you are caching large DataSets. But caching is efficient in ASP.NET and almost always proves to be worth the small price you pay in increased resource usage. The one caveat with caching is that it can serve a stale view of the data, so you need to factor an appropriate refresh rate into your caching implementation.
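
For example, here is a sketch of checking the Cache before querying the database; it assumes a Web Form code-behind (where Cache refers to the page's Cache property), and the cache key, the five-minute absolute expiration, and the GetProductsFromDatabase helper are all illustrative:

Dim products As DataSet = CType(Cache("ProductList"), DataSet)
If products Is Nothing Then
    ' Cache miss: query the database, then cache the result with an absolute expiration
    products = GetProductsFromDatabase()
    Cache.Insert("ProductList", products, Nothing, _
        DateTime.Now.AddMinutes(5), System.Web.Caching.Cache.NoSlidingExpiration)
End If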

 

ASP.NET and the .NET Framework provide additional features that can serve as helper technologies for data access. Above all, keep in mind that with ASP.NET, you have access to the full .NET class framework, and this alone is a powerful advantage - for data-access code and beyond.

 

Jeffrey Hasan and Kenneth Tu are technical architects and software developers who specialize in Microsoft technologies at InfoQuest Systems (http://www.infoquest.tv), a leading provider of business intelligence applications and services for the telecommunications and broadband industries. Their primary expertise is in .NET enterprise application development, with a special focus on developing enterprise Web applications using ASP.NET. They recently co-authored Performance Tuning and Optimizing ASP.NET Applications (Apress). Read more about this and other publications at http://www.asptechnology.net. E-mail Jeffrey Hasan at mailto:[email protected].

 

 

 

 
