Once you’ve found the highest-consuming packages and obtained management approval to tune them, you need additional analysis of the programs that present the best opportunity. Ask questions such as:

  • Which DB2 statements are executing and how often?
  • How much CPU time is required?
  • Is there logic in the program that’s resulting in an excessive number of unnecessary database calls?
  • Are there SQL statements with relatively poor access paths?

Figure 3: Simple Chart Developed by an In-House Naming Standard and CPU-to-Dollars Formula

Involve managers and developers in your investigation. It’s much easier to tune with a team approach, where different team members can be responsible for different parts of the analysis.

There are many ways to gather statement-level information:

  • Get program source listings and plan table information
  • Watch an online monitor
  • Run a short performance trace against the application of interest

Performance traces are expensive, sometimes adding as much as 20 percent to the overall CPU costs. However, a short-term performance trace may be an effective tool for gathering information on frequent SQL statements and their true costs.

If plan table information isn’t available for the targeted package, you can rebind that package with EXPLAIN(YES). If it’s hard to get an outage to rebind with EXPLAIN(YES), or if a plan table is available only under a different owner ID, you can instead bind a copy of the package with EXPLAIN(YES) rather than rebinding it.
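As a sketch, the two approaches look something like the following DSN subcommands; the collection and package names here are placeholders, not from the article:

```
REBIND PACKAGE(PRODCOLL.PGMNAME) EXPLAIN(YES)

-- Or, to leave the live package untouched, bind a copy into a
-- different collection with EXPLAIN(YES):
BIND PACKAGE(TESTCOLL) COPY(PRODCOLL.PGMNAME) EXPLAIN(YES)
```

The copy approach populates a plan table under the binder’s authorization ID without disturbing the production package.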

Figure 4 shows PLAN_TABLE data for two of the most expensive programs in our example. Here, our most expensive program issues a simple SQL statement with matching index access to the PERSON_ACCT table, and it orders the result, which causes a sort. The programmers advised that the query rarely returns more than a single row of data. In this case, a bubble sort in the application program replaced the DB2 sort; because the query rarely returned more than one row, the bubble sort was almost never exercised, and the CPU associated with DB2 sort initialization was avoided. Since this query was executing many thousands of times per day, the CPU savings were substantial.
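The idea above can be sketched in a few lines: drop the ORDER BY from the query and order the (rarely more than one-row) result set in the application instead. This is an illustrative sketch, not the original program; the function and row names are hypothetical:

```python
def fetch_ordered(rows):
    """Order a small result set in the application.

    `rows` stands in for an unordered cursor fetch (e.g., from
    PERSON_ACCT with the ORDER BY removed). A bubble-style sort is
    cheap when the list is tiny, and the loop body never runs for
    the common zero- or one-row case -- which is where the CPU
    saving over a DB2 sort comes from.
    """
    rows = list(rows)
    for i in range(1, len(rows)):
        j = i
        # Swap adjacent out-of-order elements toward the front.
        while j > 0 and rows[j - 1] > rows[j]:
            rows[j - 1], rows[j] = rows[j], rows[j - 1]
            j -= 1
    return rows
```

For the typical single-row result, the sort loop is skipped entirely, so the per-call cost is near zero.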

Our high-consuming reporting program is performing a hybrid join. While we don’t frown on hybrid joins, our system statistics were showing Relational Data System (RDS) subcomponent failures in the subsystem. We wondered whether this query was causing an RDS failure and, as a result, reverting to a tablespace scan. This turned out to be true. We tuned the statement to use a nested loop join instead of a hybrid join; the RDS failures and subsequent tablespace scans were avoided, and CPU and elapsed times improved dramatically.
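One common way to nudge DB2 toward a nested loop join is the OPTIMIZE FOR n ROWS clause, which tells the optimizer to favor access paths that return the first rows quickly. The statement below is a hypothetical illustration, not the article’s actual query:

```sql
SELECT A.ACCT_NO, P.LAST_NAME
  FROM PERSON_ACCT A
  JOIN PERSON P
    ON P.PERSON_ID = A.PERSON_ID
 WHERE A.ACCT_STATUS = 'A'
 OPTIMIZE FOR 1 ROW
```

Because hybrid and merge joins front-load work (sorting or building intermediate structures) before returning any rows, OPTIMIZE FOR 1 ROW tends to steer the optimizer toward a nested loop join. Verify the resulting access path in the plan table before relying on it.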

While these statements may not have caught someone’s eye just by looking at EXPLAIN results, when combined with the accounting data, they screamed for further investigation.
