
SQL Server Innovators Solve Business Problems with Style

Ingenuity and solid database skills meet in the 2007 Innovators award-winning solutions

Downloads
97594.zip

Executive Summary:
The fifth annual SQL Server Magazine Innovators Awards recognize SQL Server professionals--DBAs and database developers--who have used their technical skills to devise inventive solutions to business problems using Microsoft SQL Server technologies. The winning solutions feature a T-SQL program that uses SQL Server tables and set operations to do the math for a video poker game, a solution that uses stored procedures to centrally maintain applications' data, a database maintenance tool for technical and nontechnical users, and a disaster recovery solution that ensures high availability of crucial surgical information for a hospital.

SQL Server Calculates Video Poker Payouts

After more than 30 years as a programmer, Joe Krynicky was ready to retire from his job, but not from solving programming challenges. He and some colleagues had an idea for a video poker game that incorporated a few twists on the arcade classic and formed a company, Kirei Technology, to develop and market the product. They found a company that would buy their idea if Kirei could program the math for the game: the poker-hand simulations needed to create pay tables. (A pay table specifies the amount that the game returns to a player for each winning hand.)

Joe talked to gaming-industry math experts, but none of them would take on the problem since the game had more cards (six instead of five) and more winning hands than the traditional video poker game. “With [our game], it would take weeks of computer time for the evaluation of each pay table using the traditional simulation methods, because of the millions of possible combinations of hands,” Joe said.

Joe thought that SQL Server might offer a way to perform the needed calculations in a reasonable amount of time. Typically in video poker, the best way to play each hand is determined by calculating the expected value for each combination of cards kept and discarded from the dealt hand and selecting the best value for the player. “But we said, these are the winning hands; what are the different ways to get there? And we went through all the different strategies. For example, if you’re dealt a full house, you’re going to keep it. But if you’re dealt three cards in an inside straight, and a low pair in the same hand, what do you do?”

The solution that Joe devised uses about 200 SQL Server tables to contain the various hands and player strategies. “We had a SQL table containing a row for each possible hand of dealt cards, 27 tables of all the winning combinations in the game and how much was paid for each winning hand, and then we had about 100 tables containing rows for each hand of cards fitting specific starting card-playing strategies,” Joe says.

Although there are more than 120 million possible hands, they break down into certain similar groupings, and Joe used these as the basis for the tables. “For instance, we have a table containing all hands that contained two jacks or higher and a table containing all hands that contained three of five cards that could be used for a straight,” Joe says. “These tables were created once at the beginning of the process, so that much of the repetitive processing wouldn’t need to be performed multiple times later in the iterative portions of the processing.”
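
The listing below is a hypothetical sketch of what two of those pre-built tables might look like, based on the examples Joe gives; the table and column names are assumptions for illustration, not Kirei's actual schema.

-- Illustrative sketch only; names and columns are assumptions, not Kirei's schema.
CREATE TABLE dbo.AllHands            -- one row per possible six-card dealt hand
(
    HandID int NOT NULL PRIMARY KEY,
    Card1 tinyint NOT NULL, Card2 tinyint NOT NULL, Card3 tinyint NOT NULL,
    Card4 tinyint NOT NULL, Card5 tinyint NOT NULL, Card6 tinyint NOT NULL
);

CREATE TABLE dbo.Strategy_TwoJacksOrBetter   -- all hands that fit one starting strategy
(
    HandID int NOT NULL PRIMARY KEY
        REFERENCES dbo.AllHands (HandID)
);

Because these tables are populated once, up front, the repetitive hand classification doesn't have to be repeated inside the iterative pay-table calculation.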

After the tables were created, the next task was to write the program that calculates the pay table. The potential game buyer wanted a 98 percent return—that is, the player would get 98 cents back on every dollar bet on the game. As Joe explains, his team wrote a two-step T-SQL stored procedure that calculates the return to a player by joining the table of all possible hands against tables containing all hands that match each hand-playing strategy, in descending order of percentage return (e.g., a dealt royal flush has the highest-percentage return of any hand), one strategy at a time. “For each strategy, the percentage return of the strategy is multiplied by the number of records returned from the join, and this amount is added to a running total of money won by the player. The records from the join with each strategy table are then removed from the remaining hands table, and the next highest strategy is joined to the remaining card table after the previously played strategies were removed. These steps are repeated until all possible hands have been evaluated and the table of remaining cards contains no rows. The running total of amount won is divided by the total number of hands evaluated, and the resulting amount is the percentage of money bet returned to the players. The percentage of money not returned to the players is the hold percentage for the machine manufacturer.”
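
A simplified sketch of that loop appears below. All object names (dbo.Strategies, dbo.RemainingHands, and the per-strategy tables) are assumptions used for illustration; this is the shape of the algorithm Joe describes, not Kirei's production procedure.

-- Simplified sketch of the pay-table loop; object names are assumptions.
DECLARE @totalWon decimal(18,6), @totalHands bigint, @matched bigint,
        @pctReturn decimal(9,6), @tableName sysname, @sql nvarchar(max);

SET @totalWon = 0;
SELECT @totalHands = COUNT(*) FROM dbo.RemainingHands;  -- starts as a copy of AllHands

DECLARE strat CURSOR LOCAL FAST_FORWARD FOR
    SELECT StrategyTable, PctReturn
    FROM dbo.Strategies
    ORDER BY PctReturn DESC;     -- best-paying strategy first (a dealt royal flush, and so on)

OPEN strat;
FETCH NEXT FROM strat INTO @tableName, @pctReturn;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Count the remaining hands that fit this strategy ...
    SET @sql = N'SELECT @n = COUNT(*)
                 FROM dbo.RemainingHands AS r
                 JOIN dbo.' + QUOTENAME(@tableName) + N' AS s ON s.HandID = r.HandID;';
    EXEC sp_executesql @sql, N'@n bigint OUTPUT', @n = @matched OUTPUT;

    -- ... add their winnings to the running total ...
    SET @totalWon = @totalWon + @matched * @pctReturn;

    -- ... and remove them so a lower-paying strategy can't claim them later.
    SET @sql = N'DELETE r
                 FROM dbo.RemainingHands AS r
                 JOIN dbo.' + QUOTENAME(@tableName) + N' AS s ON s.HandID = r.HandID;';
    EXEC (@sql);

    FETCH NEXT FROM strat INTO @tableName, @pctReturn;
END
CLOSE strat;
DEALLOCATE strat;

-- The player's return percentage; what is left over is the machine's hold.
SELECT CAST(@totalWon / @totalHands AS decimal(9,6)) AS PlayerReturnPct;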

Innovation is at the heart of the solution that Joe and his colleagues devised. They took a seemingly complex math-calculation problem and reformulated it into a series of set operations that can calculate a new pay table in about two hours—far more quickly than a simulation program that processes each of the millions of records individually multiple times. “This shows that the SQL language isn’t just useful for storing and analyzing data, but it can solve math problems as well,” Joe said. “Without this solution, we wouldn’t be able to sell our company’s video poker game.”

Streamlining Data Flow Among Applications

For all the time savings that software packages provide through automation, IT may ironically spend a great deal of time keeping data for those applications updated. “Like many organizations, my agency had collected an overabundance of software packages that were put into action to streamline and consolidate our data flow, but as new programs were implemented, little thought was put into the integration of these systems,” says David Dye, a sergeant and DBA with the Cape Coral, Florida, police department. For example, IT staff for the department needed to manually maintain information for about 700 public safety department employees in seven separate applications, including Active Directory (AD), a process that was prone to data entry error.

David’s involvement with IT for the police department started about five years ago, when he was assigned as the project lead to implement new computer-aided dispatch (CAD) and records management systems. Several years later, when David’s job duties had expanded to include SQL Server database reports and integration tasks, he and the network administrator, Matthew Vilord, were discussing the ongoing problem of having to manually update employee records in the various applications. “We decided we needed a starting point where [the information] could just trickle down [to all the applications]. AD seemed like the most logical place to start. We’ll go in and we’ll replace a user within AD, create a Windows account, then move down through the other applications.”

David’s first step in consolidating and automating application-data administration was to inventory the existing systems and data flow between them. Since most of the applications used SQL Server as their back-end database, David decided to use SQL Server as the basis of his solution. He created a chart of data-flow processes, beginning with adding a new user into AD, and charted the progression of adding, changing, or removing users throughout the agency and information that could be passed between systems, rather than entered manually. “After I created a comprehensive flow chart, I determined that it was feasible to create a set-based solution that used AD as the starting point and trickled the information down throughout our data infrastructure.”

Next, David identified additional data-flow functionality that the system needed. He opted to run a separate physical server—which he created from three computers seized from a drug dealer—running a separate instance of SQL Server. To get production-environment information onto the new server, David created a stored procedure that queried the maintenance plans of the production servers—which provided the most recent transaction-log backup—and restored that backup to the new server. “I owe much of the work [in creating the stored procedures for the solution] to Itzik Ben-Gan’s book, Inside Microsoft SQL Server 2005: T-SQL Querying,” David says. He then created a job that used this stored procedure, which ran hourly, 20 minutes after the log backups. The new SQL Server instance now had databases that held information from the production environment, to be used for reporting and for centrally performing data-updating tasks across all the applications.
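
David's procedure reads the production maintenance plans; as a rough illustration of the same idea, the sketch below instead pulls the latest transaction-log backup from the standard msdb history tables over a linked server. The PRODSRV linked-server name, the CAD database name, and the file paths are assumptions, not David's actual code.

-- Hedged sketch: find the most recent log backup on the production server
-- and apply it to the reporting copy (object names are placeholders).
DECLARE @backupFile nvarchar(260);

SELECT TOP (1) @backupFile = mf.physical_device_name
FROM PRODSRV.msdb.dbo.backupset AS bs
JOIN PRODSRV.msdb.dbo.backupmediafamily AS mf
     ON mf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'CAD'       -- hypothetical production database
  AND bs.type = 'L'                   -- L = transaction-log backup
ORDER BY bs.backup_finish_date DESC;

-- Restore the log, leaving the reporting copy readable between restores.
RESTORE LOG CAD
FROM DISK = @backupFile
WITH STANDBY = N'D:\Restore\CAD_undo.dat';   -- path is illustrative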

The existing system lacked a means for notifying users of certain transactions, such as the status of police reports. “Department personnel were required to log into numerous applications to find such notifications, if they even existed,” David says. To address this need, David wrote stored procedures that collected information from the backed-up transaction logs and put it in a temporary table. “Then, utilizing a cursor and the xp_smtpsendmail procedure, written by Gert Drapers, I [created a procedure that] emailed the notifications to employees, based on time limits denoted in IF BEGIN clauses. I placed these stored procedures into SQL jobs and scheduled them to run daily.”
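
A minimal sketch of that pattern follows. The temp-table columns, the email addresses, and the mail procedure's parameter names are assumptions for illustration rather than David's actual code; check the parameter list of the extended procedure as registered on your server.

-- Illustrative sketch only: table, column, and parameter names are assumptions.
DECLARE @recipient varchar(255), @subject varchar(255), @body varchar(4000);

DECLARE note CURSOR LOCAL FAST_FORWARD FOR
    SELECT EmployeeEmail, NotificationSubject, NotificationBody
    FROM #PendingNotifications           -- temp table filled from the restored data
    WHERE DueDate <= GETDATE();          -- only items that have reached their time limit

OPEN note;
FETCH NEXT FROM note INTO @recipient, @subject, @body;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC master.dbo.xp_smtpsendmail      -- Gert Drapers' SMTP extended procedure
         @TO      = @recipient,
         @FROM    = 'reports@agency.example',   -- placeholder sender address
         @subject = @subject,
         @message = @body,
         @server  = 'smtp.agency.example';      -- placeholder SMTP server
    FETCH NEXT FROM note INTO @recipient, @subject, @body;
END
CLOSE note;
DEALLOCATE note;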


The solution provides multiple benefits for department users. “We’ve been able to substantially reduce the personnel hours spent on maintaining separate data sources, so that \[employees can\] serve their primary job function rather than entering redundant data in numerous application front ends,” says David. Another benefit is the consistency in data replicated among the applications. And the functionality that David added—most notably the stored procedures that email notifications—made it much easier for employees to obtain the up-to-date information they needed on the status of various reports, overtime requests, and subpoenas.

High Availability for a Critical Hospital System

High availability of essential applications is vital for any business, but nowhere more so than at a hospital, where patients’ lives depend on systems being available 24 × 7. Although the Oracle-based electronic medical records system at Spectrum Health had plenty of redundancy, physicians and staff at the healthcare center’s six surgical centers wanted further assurance that surgical-staging data—data that describes how to set up a surgical room for a specific surgeon and surgery—would always be available on site in the surgical centers, even if the network went down. “We found that we needed to have something available, in case there was a failure with that central system, to keep surgeries going on and ensure that an OR is staged the way a surgeon wants it staged, so an operation goes as smoothly as possible,” says Tim, who has been the senior DBA with Spectrum Health for about eight years.

To meet this requirement for additional redundancy, Spectrum’s Oracle DBA originally enlisted the help of a colleague to develop an extract from the Oracle relational database management system (RDBMS), which he bulk-loaded into a standalone instance of Oracle on a dedicated PC in each surgical center biweekly. The data was available via a Microsoft Access front end consisting of two forms and a report. However, the loading process failed frequently because employees in the surgical units often shut down the PCs, not knowing what the dedicated PCs were being used for and that they were supposed to stay on all the time.

“When I took over support of this process, I looked into streamlining the solution, knowing that it had to be scalable because we had plans to bring other surgical centers online,” Tim says. “I also wanted to add monitoring functionality and notification on failure.” Tim opted to use a SQL Server database to provide the redundancy, along with a DTS package for exporting the data from Oracle into SQL Server.

The solution is straightforward, as Tim explains. “We developed extracts out of the main Oracle system that were specific to each of the surgical centers. A DTS package FTPs the extracts [flat files containing surgical-staging information] from the Oracle RDBMS to our SQL Server 2005 Enterprise cluster. A transformation task loads the data into two tables: a reference table for the various medical supplies and the actual surgical preference cards, as they’re referred to in our system. The data in each surgical center resides in a SQL Server 2005 Express Edition database on a dedicated workstation, and the Access file used for the original Oracle instance was relinked to the new SQL Express local instance. A second transformation task pushes data from the central database to each surgical center’s database. This DTS package is scheduled to run weekly as a SQL Server Agent job, which notifies me via email upon job completion.”
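
Reduced to a hypothetical T-SQL sketch, the two transformation steps look roughly like this; the file path, table names, and the SURGPC01 linked-server name are placeholders, and in production the equivalent work is done inside the DTS package's tasks.

-- Step 1: load the FTP'd flat file into the central SQL Server 2005 cluster.
BULK INSERT dbo.SurgicalPreferenceCards
FROM 'D:\Extracts\center01_preference_cards.txt'    -- placeholder extract file
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');

-- Step 2: push the refreshed rows to one surgical center's SQL Server Express
-- workstation over a linked server (SURGPC01 is a placeholder name).
DELETE FROM SURGPC01.SurgStaging.dbo.SurgicalPreferenceCards;
INSERT INTO SURGPC01.SurgStaging.dbo.SurgicalPreferenceCards
SELECT *
FROM dbo.SurgicalPreferenceCards
WHERE SurgicalCenterID = 1;                          -- this center's rows only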

Since the staging data changes only about once a week, the process of extracting the flat files from the Oracle database and transferring them to SQL Server occurs at the beginning of each week. Tim’s solution also monitors the dedicated workstations to make sure they’re running, using a solution he developed to track the 70 SQL Server instances hosting all 800-plus databases across the medical system’s domain. “Using linked servers and by querying system tables, I track backup history, SQL Agent history, file space, and other select metadata. SQL Server Reporting Services (SSRS) then provides me with daily delivered and on-demand reporting of this metadata. I use SMTP trapping to notify me if one of the monitored SQL instances is unresponsive—for instance, if someone has turned off a server,” Tim says.
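
The metadata query below is a hedged sketch of that monitoring approach, assuming a central repository table and a SQLPROD01 linked server; neither name comes from Tim's actual solution.

-- Hedged sketch: pull each database's most recent backup date from one
-- monitored instance into a central repository table for SSRS reporting.
INSERT INTO dbo.BackupHistory (ServerName, DatabaseName, BackupType, LastBackupFinish)
SELECT 'SQLPROD01',
       bs.database_name,
       bs.type,                             -- D = full, L = log, I = differential
       MAX(bs.backup_finish_date)
FROM SQLPROD01.msdb.dbo.backupset AS bs
GROUP BY bs.database_name, bs.type;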

The solution meets the need of surgical staff to have guaranteed uptime of the staging system. Tim says he’s in the process of converting the monitoring system to a solution similar to the one that SQL Server Magazine contributor Rodney Landrum describes in his recent articles about SQL Server Integration Services (SSIS) and SSRS. (For more information, see “SQL Server Integration Services,” May 2007, InstantDoc ID 95385 and “SQL Server Reporting Services,” June 2007, InstantDoc ID 95745.) It’s good to see one of our authors inspiring an Innovators award winner to innovate even more!

Database Maintenance for the Masses


Société GRICS, a company that develops applications and provides training and consulting services for school boards and other markets throughout Quebec, needed a way to enable nontechnical end users to perform routine SQL Server maintenance. Although the school boards have part-time DBAs who maintain their databases, staff outside the school boards didn’t know how to do any maintenance other than scheduled backups through SQL Server Agent.

As Société GRICS’ SQL Server specialist, Maurice knew that the ability to perform routine database maintenance was essential to ensure that applications ran smoothly and to avoid data losses. He also noticed that support-call problems were increasingly related to inadequate database maintenance and saw a need for a reliable database-maintenance solution that all Société GRICS customers could use. The solution would need to perform database-integrity testing, selective index reorganization, statistics updating, regular full database and log backups, cleanup of old backups, and cleanup of SQL Server logs and history logs, all without IT involvement. “Being a T-SQL addict myself and having some knowledge of how database maintenance works, I envisioned ways to set up all this in a single T-SQL solution,” says Maurice.

The solution Maurice envisioned would meet the needs of both end users and IT. The solution’s main requirement was that it had to run without affecting system performance or users’ ability to access data. Additionally, says Maurice, “the tool’s only [prerequisite for a user to run it] would be a working SQL Server Management Studio (SSMS) query window. The tool needed to log all its actions and store the log in a SQL Server database. The solution should configure everything necessary to run by itself and require a minimal set of parameters so it would be easy to use, even for non-IT people. Finally, it should provide daily maintenance reporting, disable access to a specific database in the event integrity problems occur, and be easy for IT to customize.”

Using SQL Server 2005, Maurice wrote a T-SQL solution that includes a script that end users download from the support site, paste into an SSMS query pane, and launch. The script creates a maintenance database that includes log tables and a stored procedure that initiates the various maintenance tasks. The solution relies heavily on the use of dynamic queries to perform tasks such as obtaining information about a particular maintenance process for reporting, job scheduling, and error handling. Also within the script are examples that users can customize—one a call to the setup stored procedure that specifies email recipients and the mail server to use to send them maintenance messages, and the other a sample backup directory for database backups.
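
For example, the customizable setup call might look something like the following; the procedure, database, and parameter names are illustrative assumptions, not the actual Société GRICS script.

-- Illustrative only: procedure, database, and parameter names are assumptions.
EXEC MaintenanceDB.dbo.usp_SetupMaintenance
     @EmailRecipients = 'dba@schoolboard.example',   -- who receives the daily maintenance report
     @MailServer      = 'smtp.schoolboard.example',  -- SMTP server used to send the messages
     @BackupDirectory = 'E:\SQLBackups\';            -- target directory for full and log backups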

“SQL Server 2005 includes decent T-SQL exception handling, which made it easy to do everything the solution required,” Maurice says. “Long strings in varchar(max) datatype ensured that I didn’t have to worry about the length of dynamic queries needed for the solution. This also makes it easy to store T-SQL queries and error messages into the log tables.” By implementing Maurice’s user-friendly database-maintenance solution, IT support staff at Société GRICS can be sure that users will perform database maintenance setup according to their best practices wish list, preventing data losses while ensuring that applications will run properly without performance-degradation problems related to poor database maintenance.
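
The error-handling pattern Maurice describes might be sketched as follows, with an assumed log table and column names standing in for the real ones.

-- Sketch of the pattern only; dbo.MaintenanceLog and its columns are assumptions.
DECLARE @dbName sysname, @sql varchar(max);
SET @dbName = N'SomeUserDatabase';                   -- placeholder database name
SET @sql = 'DBCC CHECKDB (' + QUOTENAME(@dbName) + ') WITH NO_INFOMSGS;';

BEGIN TRY
    EXEC (@sql);
    INSERT INTO dbo.MaintenanceLog (RunDate, Command, Outcome)
    VALUES (GETDATE(), @sql, 'Succeeded');
END TRY
BEGIN CATCH
    -- varchar(max) columns mean neither the dynamic query nor the error
    -- message has to be truncated before it is logged.
    INSERT INTO dbo.MaintenanceLog (RunDate, Command, Outcome)
    VALUES (GETDATE(), @sql, ERROR_MESSAGE());
END CATCH;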

TAGS: SQL