Will Database Tuning Become Obsolete?

Last week, my business partner sent a message to all the partners that included the following comments, which I’ve lightly edited to remove specifics about our company:

“Over the past couple of years, I have been closely watching some technology changes occurring in the storage area. The technology I am speaking of is SSD (Solid State Drives). Over the past year, I have been teaching about their imminent arrival within the IT community. Earlier this year, significant advances in the technology hastened that arrival.

“As you may know, Apple recently began shipping an SSD as the primary hard drive in its 'Air' laptop. Dell and several others quickly followed. In addition, several companies have released server-class storage products based on SSD technology. The performance gap between legacy spindles and SSD is phenomenal. For example, seek time for a top-end SCSI or SAS drive is 3-6 milliseconds; for SSD it is 15 microseconds. IOPS (I/Os Per Second) for legacy drives are measured in the hundreds per spindle; for SSD they are measured in the hundreds of thousands.

“IBM recently released results that push IOPS for SSD past 1 million, something simply unheard of in today’s spindle-based storage solutions.
http://www.networkworld.com/news/2008/082808-ibm-flash-memory-million-iops.html?hpg1=bn

“Those kinds of performance numbers at the storage level can do miraculous things to cover up extremely poor database design and inefficiency. Widely implemented, this technology may render many (or most) current tuning problems moot.

“In the past, the cost of these systems has precluded them from widespread use, but that is changing. By next year I believe we may begin encountering server-based SSD installations; if not in 2009, definitely by 2010.”

So what does SSD mean for the glorious profession of database tuning? If developers can write any code, with any (or no) indexes, and still have it perform well, why would tuning matter? I'm sure there are purists among us who would still want the best code and best indexes they could get. But others, who aren't so well versed in monitoring their SQL Servers and examining their query execution plans, might not even realize there are improvements to be made if no one is complaining about slow-running queries.
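For those who do want to look, the plan cache can show which queries are doing the most work, whether or not anyone is complaining. Here's a minimal sketch, assuming SQL Server 2005 or later (these DMVs don't exist in SQL Server 2000); the TOP (10) cutoff and the ordering by logical reads are just illustrative choices:

-- Top cached query statements by total logical reads.
SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;

If the statements at the top of that list have plans full of scans, there are still improvements to be made, SSD or not.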

There are still the concurrency and blocking problems that database tuners might have to busy themselves with. As I tell my students, “You can have the fastest-running query in the world, but if it can’t get to the data it needs because the data is locked exclusively by another process, it doesn’t matter how fast your query is when run in isolation. The query will still appear slow.” (There's a short sketch of this after the next paragraph.)

I also think that database design will become a bigger issue. Most of my clients just want the query to run fast, and they want it now. No one wants to even think about completely redesigning their database. But poor database design doesn't just lead to slow queries, which we might not be worrying about anymore; it also adds overhead to keep duplicate data managed appropriately and queries written correctly. Database design is a seriously overlooked topic, and maybe if there’s less need to worry about query speed, there will be more time to evaluate your design. That’s not a bad thing at all!
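Here is that blocking sketch in T-SQL. The dbo.Orders table and the OrderID value are hypothetical; run the first batch in one session and the second in another, under the default READ COMMITTED isolation level:

-- Session 1: hold an exclusive lock by leaving a transaction open.
BEGIN TRANSACTION;
UPDATE dbo.Orders
SET    Status = 'Shipped'
WHERE  OrderID = 42;
-- ...no COMMIT or ROLLBACK yet; the updated row stays exclusively locked.

-- Session 2: this SELECT now waits, no matter how fast the storage is,
-- until Session 1 commits or rolls back.
SELECT Status
FROM   dbo.Orders
WHERE  OrderID = 42;

Session 2 isn't waiting on I/O at all; it's waiting on a lock, so faster drives do nothing for it.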

So what do you think? How will your tuning needs change if your queries start running a thousand times faster? Will you worry about good indexes and execution plans at all?
