DB2 & IMS

This is one of the biggest trade-offs when it comes to database performance. If data access objects are built on top of individual database objects, then your applications have no way of capitalizing on the specific performance features of the database you’re accessing. In tests:

• Coding a two-table SQL join, rather than accessing the two tables in separate SQL statements, has shown up to a 30 percent CPU savings.

• Reducing the number of columns an SQL statement returns has demonstrated up to a 50 percent CPU savings. (Both techniques are illustrated in the sketch following this list.)
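
A minimal JDBC sketch of both techniques follows. The CUSTOMER and CUST_ORDER tables, their columns, and the CUST_ID key are hypothetical names used only to illustrate the pattern; they are not the schemas from the tests above.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderLookup {

    // Generic approach: two round trips, one per table, returning every column.
    static void separateStatements(Connection conn, int custId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM CUSTOMER WHERE CUST_ID = ?")) {
            ps.setInt(1, custId);
            try (ResultSet rs = ps.executeQuery()) { /* read the customer row */ }
        }
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM CUST_ORDER WHERE CUST_ID = ?")) {
            ps.setInt(1, custId);
            try (ResultSet rs = ps.executeQuery()) { /* read the order rows */ }
        }
    }

    // Tuned approach: one two-table join that returns only the columns needed.
    static void joinedStatement(Connection conn, int custId) throws SQLException {
        String sql = "SELECT C.NAME, O.ORDER_DATE, O.TOTAL_AMT "
                   + "FROM CUSTOMER C JOIN CUST_ORDER O ON C.CUST_ID = O.CUST_ID "
                   + "WHERE C.CUST_ID = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, custId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String name = rs.getString("NAME"); // touch only the columns the application needs
                }
            }
        }
    }
}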

Some organizations have considered using the database software as an access method rather than a database. This has led to highly denormalized database designs and, in some cases, a Binary Large Object (BLOB)-type storage of all data into a single database column. While these designs show promise on paper in lowering database access costs, they can have extreme database performance consequences, including:

• Increased backup and recovery costs because there’s no way to separate only the part of the database that changes. This means having to continuously back up all data, leading to increased backup footprint size, huge backup windows, and increased machine costs to support the strategy.

• Extreme costs in handling database updates. While access costs go down, update costs go up because more data has to be updated in the generic design. If the 80/20 rule applies (80 percent read and 20 percent update), the savings may outweigh the costs, but if your activity mix varies, the costs can become excessive. Also, with a generic design you could end up updating far more often than necessary; the 80/20 rule may not hold, because there are fewer controls minimizing the issuing of update statements. (A break-even sketch follows this list.)

• The potential for update anomalies in a generic or denormalized design can lead to data integrity issues. This can increase the costs associated with testing and troubleshooting.
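
The update-cost trade-off in the second bullet above reduces to simple weighted-cost arithmetic. The sketch below uses purely hypothetical relative cost figures (assumptions, not measurements from the tests cited earlier) to show how the read/update mix determines whether a generic design breaks even:

public class AccessCostModel {

    /** Weighted CPU cost per transaction for a given read share of the workload. */
    static double weightedCost(double readShare, double readCost, double updateCost) {
        return readShare * readCost + (1.0 - readShare) * updateCost;
    }

    public static void main(String[] args) {
        // Hypothetical relative costs (tuned design = 1.0 per statement):
        // the generic design reads a little cheaper but updates far dearer.
        double genericRead = 0.75, genericUpdate = 2.0;
        double tunedRead = 1.0, tunedUpdate = 1.0;

        for (double readShare : new double[] {0.80, 0.50}) {
            double generic = weightedCost(readShare, genericRead, genericUpdate);
            double tuned = weightedCost(readShare, tunedRead, tunedUpdate);
            System.out.printf("read share %.0f%%: generic=%.2f tuned=%.2f%n",
                    readShare * 100, generic, tuned);
        }
        // With these assumed numbers the generic design breaks even at an 80/20 mix
        // (1.00 vs. 1.00) but costs about 38 percent more at 50/50 (1.38 vs. 1.00).
    }
}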

You should seriously consider the costs associated with a generic design and denormalization. Consistent use of denormalization as a solution to the data access costs of a generic design can slow or even halt a company's update operations. One company that experienced this problem eventually had to abandon its entire development effort and restart with a more specifically tuned design, after investing years of development time and money. Even if it had achieved acceptable performance, its other goal of a flexible, accessible database (via SQL and Web applications) could never have been realized.

Getting the Best of Both Worlds

A canonical database architecture can work; it can give the enterprise a common methodology for accessing disparate data stores and processes using messaging and a common communication pattern. It gives the enterprise the ability to quickly build interfaces and react to changes in the business.

Is this easy to do? Maybe. Is it the best for performance? No way! You can, however, achieve both commonality and performance if exceptions to the enterprise rules can be made for high-performance situations. Most applications can be written or retrofitted to the enterprise architecture without significant performance cost if they're low-volume applications that don't move much data. Even most mission-critical, high-volume applications can use the enterprise layered design; for the high-volume portions of those applications, however, a custom design can deliver huge performance gains. Applying these exceptions to the rule achieves the desired performance, and the approach typically involves only a small amount of code that's accessed most of the time. It gives you the benefits of enterprise architecture without the major performance implications. A sketch of this approach follows.
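
A minimal sketch of this exception-to-the-rule approach, assuming a hypothetical CustomerOrderDao contract; the interface, class, and factory names are illustrative, not a prescribed API. Callers code to one interface, and only the hot path is wired to a hand-tuned implementation:

import java.util.List;

interface CustomerOrderDao {
    List<String> recentOrderSummaries(int custId);
}

// Default implementation: goes through the enterprise layers (messaging,
// generic data access). Fine for low-volume callers.
class LayeredCustomerOrderDao implements CustomerOrderDao {
    @Override
    public List<String> recentOrderSummaries(int custId) {
        // delegate to the canonical/messaging layer (omitted)
        return List.of();
    }
}

// High-volume exception: bypasses the generic layers with tuned SQL
// (for example, the single-join, column-limited statement shown earlier).
class TunedCustomerOrderDao implements CustomerOrderDao {
    @Override
    public List<String> recentOrderSummaries(int custId) {
        // issue the tuned join directly against DB2 (omitted)
        return List.of();
    }
}

class DaoFactory {
    // Only the small, frequently executed portion of the application is wired
    // to the tuned implementation; everything else stays on the generic path.
    static CustomerOrderDao forHotPath() { return new TunedCustomerOrderDao(); }
    static CustomerOrderDao forEverythingElse() { return new LayeredCustomerOrderDao(); }
}

Because both implementations satisfy the same contract, the hand-tuned code stays confined to the small, frequently executed portion of the application, while everything else continues to flow through the enterprise layers.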

As always, there are trade-offs to consider, and you must remain aware of the specific needs of your applications and database to succeed in these environments.
