A Look Ahead at "Denali"--The Upcoming Release of SQL Server

A preview of the next version of SQL Server, code-named "Denali," featuring support for columnar indexes and other goodies

The next version of SQL Server, code-named "Denali", will be a major release. Like many others in the SQL Server community, I'm very excited about what Denali will offer. So I wanted to spend a bit of time sharing some thoughts about what I think Denali will mean—upon release and into the future.

"Denali": The Next Major SQL Server Release
Unlike SQL Server 2008 R2, Denali will be a major release of SQL Server, which means that it will ship with a large number of new features, tools, and capabilities. Too many, in fact, to cover in any depth in a single article—though Mary-Jo Foley provides a nice summary of the major components.

However, since Denali was officially unveiled via a series of keynotes at last week's PASS Summit, you can go and watch recordings of the keynotes from Tuesday, Wednesday, and Thursday, where the details of SQL Server "Denali" were announced. (Free registration with PASS is required—but it's painless to sign up, and you should already be a PASS member if you care even a little bit about SQL Server.)

In addition to watching the keynotes, you can also download the Denali CTP. If you decide to get your feet wet and install the CTP, make sure to check out Aaron Bertrand's SQL Server v.Next (Denali) : Setup walk-through—it's a great overview of some potential issues and configuration choices/changes that you may need to make in order to get Denali installed on a virtual machine (i.e., don't put it on a production workstation or server).

Of course, if you're like me, then 70 percent of what you're looking for when a CTP like Denali comes out is a chance to get to the "What's New" section in SQL Server Books Online. Happily, if that's what you're after, Books Online for SQL Server Denali is already available on the web, without the need to install anything.

And, if you take a peek at that early Books Online content, you'll see that lots of it is either in "rough" form or missing—but that's because we're still so early into this release. So early, in fact, that Books Online still lacks details on (or even mentions of) some of the key things that were covered at PASS.

What follows, therefore, is a quick overview of some thoughts and insights about things covered either during the PASS keynotes, or mentioned in this early drop of Books Online.

Columnar Indexes
NoSQL continues to gain more and more attention in development circles. I think it's the wrong solution in well over 99 percent of development projects, but that doesn't mean that NoSQL is without strengths. Nor does it mean that NoSQL won't continue to grow in popularity. In fact, my expectation all along has been that as NoSQL continues to gain clout and attention, SQL Server and other RDBMSs will be forced to address some of the key benefits that NoSQL brings to the table.

Happily, that's exactly what we're seeing with Denali's announced support for columnar indexes. (This isn't covered in Books Online or in the CTP, so see the last quarter of Tuesday's keynote, where Amir Netz shows off Project Crescent.) More specifically, columnar indexes will give SQL Server the added ability to physically store data by column instead of by row, as SQL Server has always done. This, in turn, means that we'll be able to pick up some of the key benefits that NoSQL delivers for certain types of queries or operations—without having to suffer through all of the negatives that NoSQL imposes.
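Since columnar indexes aren't in the current CTP, any syntax is purely speculative at this point. Still, a sketch helps show the idea; the keyword, table, and index names below are hypothetical illustrations, not shipped syntax:

```sql
-- Hypothetical sketch only: columnar indexes don't appear in the current CTP,
-- so this syntax (including the COLUMNSTORE keyword) is a guess at what might
-- ship. Assumes an illustrative dbo.FactSales table with the columns shown.
CREATE NONCLUSTERED COLUMNSTORE INDEX ixcs_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount, Quantity);

-- Aggregate queries like this are where column-oriented storage should shine,
-- since only the referenced columns need to be read from disk:
SELECT ProductKey, SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY ProductKey;
```

The point of the sketch is the storage model, not the keywords: scans that touch two columns of a wide fact table shouldn't have to pay for reading the other twenty.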

Even though specifics on columnar indexes are still vague at this point, I'm convinced that they'll be a huge area of interest for developers and for DBAs. Moreover, I personally suspect that columnar indexes will help pave the way for increased performance capabilities beyond Denali—but more on that in a moment.

"Atlanta": Metrics for Troubleshooting
Another feature that caught my attention from Tuesday's keynote was project "Atlanta," which provides Microsoft's Customer Service and Support (CSS) engineers with detailed environmental information that will aid during troubleshooting and support calls (while also, apparently, allowing Microsoft to proactively scan for common configuration missteps and anti-patterns).

On the surface, Atlanta looks cool—and looks like it can make troubleshooting problematic SQL Server bugs and issues much easier for Microsoft. Only, in my experience as a consultant, many organizations never use Product Support Services (PSS) for SQL Server when they run into bugs or problems. Instead, many organizations (especially small-to-medium businesses) tend to reach out to consultants when a problem rears its head. Granted, one big reason why I think many organizations choose not to engage Microsoft for help with bugs is the fact that PSS/CSS commonly want to put environment-altering traces and monitoring software into place—so I do think that Atlanta was designed to address a very real problem.

Only, I'm afraid it just won't receive enough attention from actual SQL Server users. For starters, Atlanta appears to be an additional, add-on service that costs extra. That, in and of itself, makes it instantly less appealing for a product that already costs what SQL Server costs—and when support incidents aren't free, either. It also appears that Atlanta will store and aggregate data up in the cloud, which means that many businesses will worry about what might be accidentally lost or exposed—to the point where many organizations will just say "no way, no how."

Don't get me wrong: I want Atlanta to succeed. If DBAs and organizations have access to easy-to-digest instrumentation and metrics on their environment, that's something that would benefit them greatly. As a consultant, I also found myself licking my lips in anticipation during the keynote, when thinking about how I might be able to access this data to help troubleshoot problems for clients, or when conducting audits.

My specific worry with Atlanta, though, is that Microsoft will put too much focus on meeting the needs of its CSS/PSS teams and not enough on listening to customers about how they want Atlanta to meet their needs. And it looks like I'm not alone in those worries: Brent Ozar voiced similar sentiments about Atlanta on his blog. (Make sure to check out the comments, too.)

High Availability and Disaster Recovery
And speaking of Brent Ozar, it looks like he too is very excited about another feature of SQL Server Denali: its new approach to mingling, or merging, high availability (HA) and disaster recovery (DR) into a single solution, dubbed HADR. Brent does, as always, a fantastic job of outlining why HADR is going to be such a big deal, along with calling out some immediate concerns and considerations that DBAs will have to grapple with when it comes to implementing HADR in their own environments.

Make no mistake: if Microsoft implements HADR correctly, it will drastically change the SQL Server (and RDBMS) landscape. Over and over again, one thing I see with many of my clients is that as they try to become more proactive about addressing failures or outages, they keep turning to HA solutions. Yet, over and over again, I find that it's very easy for organizations to confuse HA and DR—to the point where I wrote an article for SQL Server Magazine that outlines some of the pitfalls. Likewise, another common concern I see with HA solutions is that most of them result in lots of wasted hardware—hardware that largely sits around waiting for a disaster to occur. (This problem becomes even more acute when management learns about tons of "idle" hardware whenever there are performance problems or reporting needs.)

With Denali's new approach to mirroring, I see an end to much of the confusion, difficulty, and waste currently associated with trying to juggle both HA and DR concerns. Or, in other words, I think that once organizations cut their teeth on some aspects of the learning curve and start using Denali's HADR to meet real-world needs, we'll get to the point where we can have our cake and eat it, too.
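Since HADR wasn't documented in the CTP drop, no real syntax exists to point at yet. Purely as a speculative sketch of the announced direction, a combined HA/DR topology might be declared along these lines; every keyword, option, and server name here is a guess, not shipped syntax:

```sql
-- Purely speculative sketch: HADR syntax wasn't final (or documented) in the
-- CTP, so the statement shape, option names, and server/database names below
-- are hypothetical illustrations of the announced feature.
CREATE AVAILABILITY GROUP SalesAG
    FOR DATABASE SalesDB
    REPLICA ON
        'SQLPROD2' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT),  -- local HA
        'SQLDR1'   WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT); -- remote DR
```

What makes a design like this compelling is that a single definition would cover both concerns—synchronous copies for failover, asynchronous copies for disaster recovery—and the "idle" hardware problem could shrink if read-only workloads (reporting, backups) could be pointed at the secondary copies.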

Musings on the Possibilities of HADR and Azure
More importantly, though, if you combine some aspects of HADR (in the form of linked/synchronized databases) with some of the benefits that will eventually make their way in from what the SQL Azure team is working on, I think there will be some very exciting opportunities down the road.

Specifically, if "contained databases" end up being as "portable" as it looks like they'll become—and if they're then capable of running within an Azure environment—then not only will we see some very cool HA and DR options (or endpoints) become available, but I think we'll finally be another step closer to seeing SQL Server act as a "host" for database workloads (in much the same way that a hypervisor is the host for a computing workload).
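Unlike HADR, contained databases do surface in the Denali CTP. A minimal sketch of enabling and creating one might look like the following, with the caveat that option names could still change before release, and that the database and user names are illustrative:

```sql
-- Contained databases appear in the Denali CTP; exact option names could
-- still change before release. Database and user names are illustrative.
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

CREATE DATABASE PortableSales
    CONTAINMENT = PARTIAL;

-- A contained user authenticates at the database (not the server) level,
-- which is part of what makes the database "portable" between hosts:
USE PortableSales;
CREATE USER AppUser WITH PASSWORD = 'example-0nly!';
```

The key idea is that logins, and eventually other server-level dependencies, move inside the database itself—which is exactly the property that would let a database detach from one host and light up on another (or in Azure) without a pile of server-level fixups.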

Likewise, if HADR paves the way for letting "contained" or "portable" databases interact with each other to synchronize data, then it's not too big a leap to assume that the HADR pipeline could also be coupled with partitioning capabilities. And if that's possible, then we'd end up with the ability to create "nodes" or "clusters" of linked databases that could scale out in very powerful ways. From there, SQL Server would be able to compete head-on with NoSQL in the sense that it gains the potential for massive scalability (coupled with columnar indexes) without having to sacrifice any of the true benefits of the relational engine that we all know and love.

We've already got massive scalability when it comes to SQL Server's Parallel Data Warehouse (PDW). And as we saw with columnar storage, Microsoft can and will use advances in the BI space to address improvements in the SQL Server OLTP engine. Consequently, I'm just assuming that we'll eventually get to a place where the same kind of massive scalability found in PDW will make its way into OLTP systems—without the need for appliances. Lessons learned from SQL Azure (and Azure in general) are likely to help pave the way for this. Likewise, I'm assuming that HADR would fit the bill for "partitioning" capabilities nicely—though I could be wrong.

But no matter how you slice it, Denali is definitely doing a great job of addressing needs today while opening up all sorts of potential options and benefits tomorrow. As such, I'm very excited to begin working with it—and will address it again in future articles.

Michael K. Campbell ([email protected]) is a contributing editor for SQL Server Magazine and a consultant with years of SQL Server DBA and developer experience. He enjoys consulting, development, and creating free videos for www.sqlservervideos.com.
