The next statement explains the access path for the cached statement with statement ID 11087, and populates the normal set of explain tables under the schema of the user who executes it:
EXPLAIN STMTCACHE STMTID 11087;
The DSN_STATEMENT_CACHE_TABLE is similar to the other explain tables and must be created under your own schema. The output from the EXPLAIN STMTCACHE ALL statement is placed in this table. You can then run queries against the table to examine the performance metrics captured for the dynamic statements.
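For example, a full-cache explain followed by a quick look at what was captured might look like the following sketch (column names follow the documented DSN_STATEMENT_CACHE_TABLE layout; verify them against your Db2 version):

```sql
-- Externalize performance data for every statement currently in the
-- dynamic statement cache into DSN_STATEMENT_CACHE_TABLE under your schema.
EXPLAIN STMTCACHE ALL;

-- Quick inventory of what was captured.
SELECT STMT_ID, CACHED_TS, SUBSTR(STMT_TEXT, 1, 60) AS STMT_PREFIX
FROM   DSN_STATEMENT_CACHE_TABLE
ORDER BY CACHED_TS DESC;
```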
Figure 1 describes just some of the columns in the statement cache table. As you can see, a significant amount of performance information is gathered. Most important are the number of executions, the accumulated elapsed time, and the accumulated CPU time. These numbers continue to accumulate until the statement is flushed from the cache, and they are externalized each time the statement cache is explained, so you can derive delta values between successive explains.
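A query such as the following sketch ranks cached statements by accumulated CPU and derives per-execution averages (column names are as documented for DSN_STATEMENT_CACHE_TABLE; confirm them on your system):

```sql
-- Rank cached statements by accumulated CPU and derive
-- average CPU and elapsed time per execution.
SELECT STMT_ID,
       STAT_EXEC                          AS EXECUTIONS,
       STAT_CPU                           AS TOTAL_CPU,
       STAT_CPU  / NULLIF(STAT_EXEC, 0)   AS AVG_CPU,
       STAT_ELAP / NULLIF(STAT_EXEC, 0)   AS AVG_ELAPSED,
       SUBSTR(STMT_TEXT, 1, 60)           AS STMT_PREFIX
FROM   DSN_STATEMENT_CACHE_TABLE
WHERE  STAT_EXEC > 0
ORDER BY STAT_CPU DESC
FETCH FIRST 20 ROWS ONLY;
```

Running this after each EXPLAIN STMTCACHE ALL and subtracting the previous totals gives the delta values described above.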
Let’s take a look at a large-scale customer, an online merchandise retailer, that had an established base of legacy CICS/COBOL programs but was moving large volumes of applications into a new enterprise architecture. This new architecture involved object-based design with modern programming languages and remote application servers. The result was a large volume of remote dynamic SQL that no one could track and tune. Each application was assigned its own authid, so DB2 accounting information could be summarized at that level and reported by application. This enabled us to address first the application that used the most CPU.
The customer wanted to reduce CPU consumption before their peak season arrived so they would not become CPU-constrained, and they didn’t want to upgrade processors. With hundreds of thousands of product sales a day, their order processing applications can be quite active. All their modern enterprise applications, including order processing, are written in Java and run under the control of WebSphere application servers.
We collected DB2 accounting data (classes 1, 2, and 3) from the System Management Facility (SMF) records, ran accounting reports using a reporting software package, then loaded the important fields from the reports into a DB2 table. Queries were used to organize the data by daily CPU consumption, and we tracked the applications that used the most CPU over several months. We examined those applications first. Figure 2 shows that the application with the DMDMGMTE authid is a large daily CPU consumer.
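A query along these lines can surface the top daily CPU consumers once the report fields are loaded. The table and column names here (ACCT_SUMMARY, CLASS2_CPU_SECS, REPORT_DATE) are illustrative assumptions, not the customer's actual schema:

```sql
-- Hypothetical summary table loaded from the accounting reports.
-- Rank authids by total class 2 (in-DB2) CPU per day.
SELECT AUTHID,
       REPORT_DATE,
       SUM(CLASS2_CPU_SECS) AS DAILY_CPU_SECS
FROM   ACCT_SUMMARY
GROUP BY AUTHID, REPORT_DATE
ORDER BY DAILY_CPU_SECS DESC;
```

Tracking the top rows of this result over several months is what identified DMDMGMTE as the first tuning target.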