DB2 & IMS

There have been many articles, white papers, presentations, and manuals dealing with DB2 for z/OS tuning specifics. From buffer pools to access paths to distributed SQL to configuration parameters, these references cover a lot of territory.

What’s a DB2 systems professional to make of all this detail? How much, if anything, changes when you migrate to a new version of DB2? How do you know what to tune and when? Where do you start?

In this article we look at fashioning a DB2 systems tuning strategy and at helping the DBA and systems programmer develop the basic instincts needed to support the IT enterprise. For simplicity, we’ll assume the DBA is responsible for the overall system tuning strategy. In some IT shops, this will be true; in others, the systems programmer will be responsible. In larger shops, a team may handle tuning efforts.

Good or Bad Tuning?

Situation 1: After analysis of accounting records, a DBA determines that the RunStats utility is responsible for almost 2 percent of the CPU used in one Logical Partition (LPAR). When 90 percent of these jobs are removed, weekly CPU usage is reduced.

Situation 2: The DBA reviews statistics for a long-running application. It contains an SQL statement that joins two partitioned tables; however, the access paths aren’t taking advantage of parallelism. By implementing parallelism, the DBA cuts the job’s total execution time from two hours to two minutes.
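As a sketch of how parallelism is typically requested in DB2 for z/OS: dynamic SQL can use the CURRENT DEGREE special register, while static SQL uses the DEGREE bind option. The table, column, and package names below are hypothetical:

```sql
-- Dynamic SQL: ask DB2 to consider CPU parallelism for this
-- connection ('ANY' enables it; '1' disables it).
SET CURRENT DEGREE = 'ANY';

-- Hypothetical join of two partitioned tables that could now
-- be split across parallel tasks by the optimizer.
SELECT O.ORDER_NO, I.ITEM_NO
FROM ORDERS O
JOIN ORDER_ITEMS I ON O.ORDER_NO = I.ORDER_NO;
```

For static SQL, the equivalent is rebinding the package with `DEGREE(ANY)`, for example `REBIND PACKAGE(COLLID.PKGNAME) DEGREE(ANY)` (collection and package names hypothetical).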

Situation 3: A popular online transaction typically takes several minutes to execute. After some analysis, the DBA creates three new indexes on tables that are joined in the application. Response time for the transaction now averages less than a second.
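To make scenario 3 concrete, the new indexes would typically cover the join and filter columns of the statement. This is only an illustrative sketch; all table and column names are hypothetical:

```sql
-- Hypothetical indexes supporting the join predicates in the
-- slow transaction's SQL.
CREATE INDEX IX_ORD_CUST  ON ORDERS      (CUST_NO);
CREATE INDEX IX_ITEM_ORD  ON ORDER_ITEMS (ORDER_NO);
CREATE INDEX IX_CUST_NO   ON CUSTOMER    (CUST_NO);
```

Each new index speeds the reads in this transaction but adds maintenance cost to every insert, update, and delete on the table, which is exactly the kind of systemwide trade-off the article goes on to discuss.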

What do these situations have in common? All may be examples of DB2 systems tuning failures! On the surface they may appear to be success stories, but the deeper reality is that, in each case, the DBA made a tuning decision based on a local symptom or problem without considering the systemwide implications.

In the first scenario, the DBA eliminated many RunStats jobs. For high-volume, critical applications, timely statistics may be essential for the DB2 optimizer to choose efficient access paths for SQL statements. This is especially important if distributed applications are using dynamic SQL.
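For reference, a typical RunStats utility control statement looks like the following; the database and table space names are hypothetical:

```sql
-- Refresh catalog statistics for a table space, all its tables,
-- and all its indexes, while allowing concurrent data changes.
RUNSTATS TABLESPACE MYDB.MYTS
  TABLE(ALL) INDEX(ALL)
  SHRLEVEL CHANGE
```

After statistics are refreshed, static SQL generally needs a rebind before the optimizer can exploit the new statistics; dynamic SQL picks them up at the next prepare.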

In the second scenario, the DBA greatly reduced one application’s elapsed time, but during job execution it’s possible that overall system CPU utilization will be much higher. This happens because the CPU usage that was once spread over two hours is now compressed into a much shorter interval.
