The holiday season is upon us and I still haven't wrapped up coverage of my first annual Scary, Spooky SQL Stories contest. In case you missed my October 25 editorial, I started a contest for people to share their most horrible and frightening real-life SQL Server stories. Who needs ghosts and ghouls when we have computers, right?

I could have cheated and announced the two winners without mentioning that they were the only two people who submitted entries, but I didn't. I was bummed that there wasn't a bigger turnout, but I'll kick off the contest earlier next year. However, both of the stories entered are "award worthy." The reader submissions can be found at http://www.sqlmag.com/Article/ArticleID/97411/sql_server_97411.html#comment. Let's take a look at each of the stories.

Jcelko wrote, "One of my favorites was a tale from Ken Henderson about an outside consultant who talked a client into drastically increasing their hardware to improve performance. It did not work. Ken came in and rearranged a few indexes and everything was fine. But they had all this extra hardware left over..."

Scary? This story probably wasn't too scary for Ken (who is one of the top people on the SQL Server Product Support Services team), but I've been doing performance tuning for more than a decade and have several similar stories. Trust me, few things are scarier than being the person who authorized a five-, six-, or seven-figure investment in hardware, only to find that spending the money didn't fix the problem, and that fixing it was as simple as adding a few indexes.

I was involved in a similar situation several years ago. I quoted a company a fixed price for a tuning audit, but rather than spend a small amount of money for me to first identify and scope the problem, the company bought new hardware, and I eventually found out that its hardware representative was going to get a very merry holiday bonus.
The company called me back a few weeks after the hardware had arrived, and performance was scarily slower than it had been. I knew the people on the team pretty well, so they weren't too offended when I charged them a buck more than my original quote.

Jonas-pr shared the following story. "My most memorable is 'The Hanging Pool' and 'The Snake in the Ceiling!' 'The Hanging Pool,' as I like to call it, started on a regular business day at around lunch. That's when we noticed our horrified technical personnel running in and out of our data center located on the 23rd floor. I was working there when 'The Snake in the Ceiling' occurred (but that's for next year), so I said to myself, 'What now?' Everyone rushed to the room. We couldn't believe what was happening! It was raining on a sunny day... Unfortunately, construction workers had ruptured some pipes two floors up! Luckily for us, we had our servers covered with a makeshift temporary roof made of plastic sheets and a two-by-four wooden frame since 'The Snake in the Ceiling' incident a few weeks earlier. But this was no small leak; we literally had a one-foot-deep pool hanging above our servers. Nevertheless, the servers were running at full power beneath it. How could we shut down the servers? Our clients depend on them, and salespeople were running demos. Of course, building management drained the pool within hours, and I was hoping to have a goldfish pond... This particular 'little' incident forced the owners to rush the long-delayed move of the servers to a real data center. Believe it or not, this is a true story..."

Wow. I'm not even going to speculate about "The Snake in the Ceiling" story, but I'll be waiting with great anticipation for next year when I can learn more. I've been doing database work since 1990 and had started to think I had heard and seen it all. But I have to admit that "The Hanging Pool" is a pretty cool story.
Each of these stories presents some valuable lessons and explores time-honored best practices, which was the underlying point of the contest. Sure, I wanted to have some fun with this topic, but exploring the worst things that we have had happen to us professionally typically shines a light on best practices to emulate or worst practices to avoid. So what are the key lessons from these two stories?
The first story highlights the fact that trying to spend your way out of a performance problem, without first understanding the root cause of your problems, is a recipe for disaster. That's a fact I've been preaching for more than a decade, and it's a fundamental truth of performance tuning that will probably never change.
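To make the lesson concrete, here's a minimal sketch of the "a few indexes beat new hardware" effect. It uses Python's built-in SQLite (not SQL Server, and a made-up `orders` table) purely for illustration: the same query goes from a full-table scan to an index seek once a suitable index exists, with no hardware change at all.

```python
import sqlite3

# Hypothetical example table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index, the optimizer has no choice but to scan every row.
before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(before)  # the plan detail reports a scan of the orders table

# "Rearranging a few indexes": add one covering index for this query.
conn.execute("CREATE INDEX ix_orders_customer ON orders (customer_id, total)")

after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(after)  # the plan now uses the covering index instead of scanning
```

The diagnosis-first point is that a query plan (here `EXPLAIN QUERY PLAN`; in SQL Server, an execution plan) shows you *why* a query is slow before you spend a dime.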
"The Hanging Pool" story helps us remember that we do contingency planning because stuff happens. Is a pool going to drain into your server room? Well, maybe not. Will you eventually suffer some type of catastrophic loss if you decide that you don't need to invest in some sort of continuity plan and think through flaws in your current deployment model? Yes. "The Hanging Pool" management team got off easy, and learned from their near mishap by accelerating the deployment of their servers to a data center. Although if the servers are truly mission critical, I hope they didn't stop with deploying them to just a single data center.
So who is the grand-prize winner? Well, I've always been partial to performance-tuning tales of woe, but it's hard to trump a story about a hanging pool and references to snakes in the ceiling. Planning isn't quite as sexy and fun as tuning (at least in my book), but sometimes it's the simple things that save your bacon. So this year's grand prize goes to Jonas-pr.