
Investment Banking on Performance

Introducing the Innovators:
In this month's Reader to Reader section, Altan Arslan, grand-prize winner in SQL Server Magazine's 2003 Innovator Awards contest, shares his winning solution. Arslan is a software projects manager for Turk Economy Bank in Istanbul, Turkey. Watch Reader to Reader for more winning solutions.

I work as a software projects manager for TEB Investment, the investment banking and brokerage subsidiary of Turk Economy Bank. TEB Investment serves clients through two Internet branches, one call center, an International Capital Markets (ICM) department, a VIP customer department, 75 banking branches, and 21 investment branches.

The volatile nature of the stock market has created some interesting problems for our software-development team. First, stock-order transactions must be extremely fast to keep up with rapid price changes, and the system must deliver order-status changes to the appropriate clients as quickly as possible. Some specific quirks of TEB Investment's system presented further challenges. Although customers execute stock transactions through SQL Server, some customer accounts reside on TEB Bank's Oracle database. In addition, clients can use more than one distribution channel simultaneously, so the system must check every order against business rules that vary depending on whether the client account is on SQL Server, Oracle, or both systems. This complication makes offline stock transactions impossible. Finally, because the SQL Server application we created directs all of TEB Investment's operations, the system must be stable: if the system fails, we must be able to replace the database with a backup quickly and without any data loss. Because of these requirements, we made the following database-design assumptions:

  • The database must be able to serve at least 1000 concurrent clients. (Our test procedures simulate as many as 1200 concurrent users.)
  • The system must complete all order transactions within 50ms.
  • The system must deliver order modifications to the appropriate clients within 200ms. (Because 92 percent of our clients connect to the system through a WAN, approximately 100ms of that 200ms is lost to network delays.)

To address the need for rapid customer notification, we developed a data-distribution layer. If each client polled the database directly, the 200ms limit would translate into 5 queries per second per client, so the server would have to handle 6000 queries per second from client applications (1200 users x 5 queries per second). Typical hardware and database-management software can't handle this query volume. To solve the problem, our data-distribution layer queries SQL Server every 50ms for any changes, then sends the query results through an IP socket in XML format to the clients that those changes affect, as Figure 1 shows. The data-distribution layer can determine which clients need the query result because each client sends identifying information about itself to the data-distribution layer when it logs in to the system. The layer uses that information to send modification data only to the clients that need it. This notification technique reduces database query volume and network traffic because each change generates only the notifications that the affected clients actually need.
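The polling-and-routing idea can be sketched in a few lines of Python. This is an illustration only, not our production code: the fetch_changes stub, the subscribers dictionary, and the orderChange XML element are hypothetical stand-ins for the real query and message format. The loop queries for changes every 50ms and pushes each change over the socket of only the client it affects.

    import time
    import xml.etree.ElementTree as ET

    POLL_INTERVAL = 0.05   # query SQL Server for changes every 50ms

    # Client sockets keyed by client ID; an entry is added when a client logs
    # in and sends its identifying information to the distribution layer.
    subscribers = {}

    def fetch_changes():
        # Hypothetical query against SQL Server for order-status changes since
        # the last poll; a real implementation would use an ODBC/OLE DB
        # connection and a watermark column. Returns (client_id, order_id,
        # status) tuples.
        return []

    def to_xml(order_id, status):
        # Encode one change as a small XML message for the client.
        element = ET.Element("orderChange", orderId=order_id, status=status)
        return ET.tostring(element)

    def distribution_loop():
        while True:
            started = time.monotonic()
            for client_id, order_id, status in fetch_changes():
                sock = subscribers.get(client_id)
                if sock is None:
                    continue                           # client is not logged in
                try:
                    sock.sendall(to_xml(order_id, status))
                except OSError:
                    subscribers.pop(client_id, None)   # drop a dead connection
            # Sleep for whatever remains of the 50ms polling budget.
            elapsed = time.monotonic() - started
            time.sleep(max(0.0, POLL_INTERVAL - elapsed))

Because only one process polls the database, the 6000 client queries per second collapse into 20 polls per second, and each change travels across the network only to the clients that subscribed for it.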

Our second challenge was that SQL Server stores the stock transactions while Oracle stores the customer accounts. So we developed a business layer that coordinates the transaction workflow between the two databases, as Figure 2 shows. A transaction that starts on SQL Server can continue and commit on Oracle, and an Oracle transaction can commit on SQL Server.

When a client initiates a job that both SQL Server and Oracle need to process, the business layer opens a transaction on the primary server. When the transaction completes on the primary server, the business layer keeps the transaction open and opens a second transaction on the secondary server. When the transaction on the secondary server completes, the business layer commits both transactions. This technique lets us coordinate a transaction on both servers.
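Expressed as a sketch (in Python, for illustration only; the connection objects, table names, and parameters are hypothetical, not our production code), the workflow looks like this: the primary transaction stays open while the secondary transaction runs, and only when both have completed does the business layer commit them.

    def coordinated_transfer(primary_conn, secondary_conn, order, account_update):
        # Sketch of the cross-database workflow: the primary transaction stays
        # open until the secondary transaction also completes, then both commit.
        # The connections are assumed to be DB-API style (for example, SQL Server
        # through ODBC and Oracle through an Oracle driver); the statements and
        # parameter names are illustrative.
        primary = primary_conn.cursor()
        secondary = secondary_conn.cursor()
        try:
            # Step 1: start the work on the primary server (here, SQL Server)
            # but do not commit yet.
            primary.execute(
                "INSERT INTO Orders (ClientId, Symbol, Qty, Price) "
                "VALUES (?, ?, ?, ?)",
                order,
            )

            # Step 2: with the primary transaction still open, run the dependent
            # work on the secondary server (here, Oracle).
            secondary.execute(
                "UPDATE Accounts SET Balance = Balance - :amount "
                "WHERE AccountId = :account_id",
                account_update,
            )

            # Step 3: both pieces of work succeeded, so commit both transactions.
            secondary_conn.commit()
            primary_conn.commit()
        except Exception:
            # Any failure rolls back both sides so neither database keeps a
            # half-finished job.
            secondary_conn.rollback()
            primary_conn.rollback()
            raise

Committing the two servers one after the other leaves a brief window in which a crash between the commits would require reconciliation; a distributed transaction coordinator could close that window, but the sequential approach shown here follows the workflow described above.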

Third, to address the need for high availability, we developed an application that backs up the transaction log every 60 seconds and applies the transaction-log backups on another server. If a failure occurs, we can switch to the standby server and resume the workflow in less than 120 seconds. To ship the logs, the application backs up the transaction log on the main server every 60 seconds, compresses the backups to minimize network bandwidth, and transfers them to the standby server over FTP, as Figure 3 shows. Finally, another application on the standby server applies the transaction logs there.
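A condensed sketch of the shipping loop follows (again in Python, for illustration only; the directory, database name, standby host, credentials, and the use of osql to run the BACKUP LOG command are placeholder assumptions, and the restore on the standby server is handled by a separate job).

    import ftplib
    import gzip
    import shutil
    import subprocess
    import time
    from itertools import count
    from pathlib import Path

    BACKUP_DIR = Path(r"D:\LogShip")         # placeholder backup directory
    STANDBY_HOST = "standby.example.local"   # placeholder standby server
    INTERVAL = 60                            # back up the log every 60 seconds

    def backup_transaction_log(sequence):
        # Back up the transaction log (here via osql with a trusted connection,
        # as one possible way to issue the command) and return the backup path.
        backup_file = BACKUP_DIR / "trading_{:08d}.trn".format(sequence)
        subprocess.run(
            ["osql", "-E", "-Q",
             "BACKUP LOG Trading TO DISK = '{}'".format(backup_file)],
            check=True,
        )
        return backup_file

    def compress(path):
        # Compress the backup so it consumes less network bandwidth in transit.
        gz_path = Path(str(path) + ".gz")
        with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
            shutil.copyfileobj(src, dst)
        return gz_path

    def ship(path):
        # Transfer the compressed backup to the standby server over FTP; a
        # companion job there decompresses and restores it WITH NORECOVERY.
        with ftplib.FTP(STANDBY_HOST, "logship", "password") as ftp:
            with open(path, "rb") as f:
                ftp.storbinary("STOR " + path.name, f)

    def main():
        for sequence in count():
            ship(compress(backup_transaction_log(sequence)))
            time.sleep(INTERVAL)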
