Dec 1 ’04

DB2 for z/OS Application-Level Performance Reporting & Tuning

by Editor in z/Journal

As much as we hate to hear it, there’s no silver bullet for improving overall large system performance. We can tune the DB2 subsystem parameters, logging and storage, but often we’re only accommodating a bad situation. Here, the “80/20” rule applies; you’ll eventually have to address application performance.

There’s a way to quickly assess application performance and identify the significant offenders and SQL statements causing problems. You can quickly identify the “low-hanging fruit,” report on it to your boss, and change the application or database to support a more efficient path to the data. Management support is a must, and an effective manner of communicating performance tuning opportunities and results is crucial. 

Setting the Appropriate Accounting Traces

DB2 accounting traces play a valuable role in reporting on application performance-tuning opportunities. Accounting trace classes 1, 2, 3, 7, and 8 must be started to monitor performance at the package level. Once they are, you can examine the most expensive programs (identified by package) and look for tuning changes. This reporting process can serve as a quick way to identify an application performance problem, or it can be incorporated into a long-term process that identifies problems and tracks changes.
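If these classes aren’t already active, you can start them dynamically with a command along the following lines. This is a minimal sketch that assumes SMF as the trace destination; many shops instead set the SMFACCT subsystem parameter so these classes start automatically at DB2 startup.

   -START TRACE(ACCTG) CLASS(1,2,3,7,8) DEST(SMF)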

There’s been some concern about the performance impact of this level of DB2 accounting. The IBM DB2 Administration Guide states that the performance impact of setting these traces is minimal and the benefits can be substantial. Tests performed at a customer site demonstrated an overall system impact of 4.3 percent for all DB2 activity when accounting classes 1, 2, 3, 7, and 8 are started. In addition, adding accounting classes 7 and 8 when 1, 2, and 3 are already started has nominal impact, as does the addition of most other performance monitor equivalent traces (i.e., your online monitor software).

Summarizing the Accounting Data

To effectively communicate application performance information to management, the accounting data must be organized and summarized up to the application level. You need a reporting tool that formats DB2 accounting traces from System Management Facilities (SMF) files to produce the type of report you’re interested in. Most reporting tools can produce DB2 accounting reports at a package summary level. Some can even produce customized reports that filter just the fields you want from the wealth of information in the trace records.

You can post-process whatever reports you produce to extract a concentrated view of DB2 application performance. Reduce the data to the elapsed time and CPU time each application consumes daily and the number of SQL statements each package issues daily. This highly specific information is your first clue as to which packages offer the best DB2 tuning opportunities. Figure 1 shows a typical package-level report and the fields of interest.

If you lack access to a reporting tool that can extract just the fields you need, you can write a simple program in any language to read the standard accounting reports and pull out the information. REXX is well-suited to this type of “report scraping,” and you can write a REXX program to do the work in a few hours. You could write a slightly more sophisticated program to read the SMF data directly and produce similar summary information if you want to avoid a dependency on the reporting software. Once the standard reports are processed and summarized, all the information for a specific interval (say, one day) can appear in a simple spreadsheet. Sort the spreadsheet by CPU descending; with the high consumers at the top of the report, the low-hanging fruit is easy to spot. Figure 2 shows a spreadsheet derived from the summarized package-level accounting report.
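A minimal sketch of such an exec follows. The report labels it parses ('PACKAGE', 'CP CPU TIME', 'ELAPSED TIME', 'SQL DML TOTAL'), the RPTIN/RPTOUT DD names, and the assumption that times appear as plain decimal seconds are all placeholders; adjust the parsing to the layout your reporting tool actually produces.

   /* REXX: scrape a package-level accounting report into CSV rows  */
   cpu. = 0; elap. = 0; sql. = 0; pkgs = ''
   "EXECIO * DISKR RPTIN (STEM in. FINIS"          /* DD RPTIN = report */
   do i = 1 to in.0
     parse var in.i label ':' value .
     label = strip(label); value = strip(value)
     select
       when label = 'PACKAGE' then do              /* new package block */
         pkg = value
         if wordpos(pkg, pkgs) = 0 then pkgs = pkgs pkg
       end
       when label = 'CP CPU TIME'   then cpu.pkg  = cpu.pkg  + value
       when label = 'ELAPSED TIME'  then elap.pkg = elap.pkg + value
       when label = 'SQL DML TOTAL' then sql.pkg  = sql.pkg  + value
       otherwise nop
     end
   end
   do j = 1 to words(pkgs)                         /* one CSV row each  */
     p = word(pkgs, j)
     out.j = p','cpu.p','elap.p','sql.p
   end
   out.0 = words(pkgs)
   "EXECIO" out.0 "DISKW RPTOUT (STEM out. FINIS"  /* DD RPTOUT = CSV   */

Import the resulting CSV into a spreadsheet and sort by the CPU column, descending.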

Looking for some simple issues will help you identify the first programs to address. For example, package ACCT001 consumes the most CPU per day and issues nearly 2 million SQL statements daily. Although the average CPU consumed per statement is low, the sheer quantity of statements indicates an opportunity to save significant resources; if even a tiny amount of CPU can be saved per statement, it quickly adds up. The same logic applies to package ACCT002. Packages RPTS001 and RPTS002 are also among the highest consumers of CPU, and they show a relatively high average CPU per SQL statement, which indicates there may be inefficient SQL statements involved. Since these programs consume significant CPU per day, tuning those statements could yield significant savings.

ACCT001, ACCT002, RPTS001, and RPTS002 represent the best opportunities for saving CPU, so examine those first. Without this type of summarized reporting, it’s difficult to do any sort of truly productive tuning. Most DBAs and systems programmers who lack these reports and look only at online monitors or plan table information are really just shooting in the dark.

Reporting to Management

To do this type of tuning, you need buy-in from management and application developers. This can sometimes be the most difficult part because, unfortunately, most application tuning involves costly changes to programs. One way to demonstrate the potential ROI for programming time is to report the cost of application performance problems in terms of dollars. This is easy and amazingly effective!

The summarized reports can be used to display information on an application level. An in-house naming standard can be used to combine all the performance information from various packages into application-level summaries. This lets you classify applications and address the ones that use the most resources.

For example, if the in-house accounting application has a program naming standard where all program names begin with “ACCT,” then the corresponding DB2 package accounting information can be grouped by that prefix. Thus, the DB2 accounting report data for programs ACCT001, ACCT002, and ACCT003 can be grouped together and summarized to represent the “ACCT” application.
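If you load the daily package summaries into a DB2 table, the roll-up by prefix is a simple GROUP BY. The table and column names below are hypothetical; only the four-character prefix comes from the naming standard:

   SELECT SUBSTR(PACKAGE_NAME, 1, 4) AS APPLICATION
        , SUM(CPU_SECONDS)           AS TOTAL_CPU_SECONDS
        , SUM(ELAPSED_SECONDS)       AS TOTAL_ELAPSED_SECONDS
        , SUM(SQL_COUNT)             AS TOTAL_SQL_STMTS
     FROM PKG_DAILY_SUMMARY
    WHERE SUMMARY_DATE = CURRENT DATE - 1 DAY
    GROUP BY SUBSTR(PACKAGE_NAME, 1, 4)
    ORDER BY 2 DESC;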

Most capacity planners have formulas for converting CPU time into dollars. If you get this formula from the capacity planner and categorize the package information by application, you can easily turn your daily package summary report into an annual CPU cost per application. Figure 3 shows a simple chart developed using an in-house naming standard and a CPU-to-dollars formula. Give this report to the big guy and watch his head spin! It’s a great tool for getting the resources allocated to get the job done.
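To make the conversion concrete with purely hypothetical numbers: if the ACCT packages consume 3,000 CPU seconds per day and the capacity planner prices CPU at $0.10 per second, the application costs roughly 3,000 x $0.10 x 365, or about $110,000 per year, before any tuning.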

Make sure you produce a “cost reduction” report, in dollars, once a phase of the tuning is complete. This makes it perfectly clear to management what the tuning efforts have accomplished and provides incentive for further tuning. Consider providing a visual representation of your data; a bar chart with before-and-after results can be highly effective in conveying the impact of performance tuning.

Finding the Statements of Interest

Once you’ve found the highest-consuming packages and obtained management approval to tune them, you need additional analysis of the programs that present the best opportunity. Ask questions such as: Which statements does the program issue most often? What does each execution cost? Is the data being returned actually used by the program?

Involve managers and developers in your investigation. It’s much easier to tune with a team approach, where different team members can be responsible for different parts of the analysis.

There are many ways to gather statement-level information; two of the most useful are performance traces and EXPLAIN output from the plan table.

Performance traces are expensive, sometimes adding as much as 20 percent to the overall CPU costs. However, a short-term performance trace may be an effective tool for gathering information on frequent SQL statements and their true costs.
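A minimal sketch of such a short-lived trace follows, assuming that performance class 3 (SQL events) is what your reporting tool needs and that filtering on a plan name keeps the overhead contained; the plan name is a placeholder, and the exact classes or IFCIDs to request depend on your monitor:

   -START TRACE(PERFM) CLASS(3) PLAN(ACCTPLAN) DEST(SMF)
   -STOP TRACE(PERFM)

Let it run only for a short, representative interval between the two commands.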

If plan table information isn’t available for the targeted package, you can rebind that package with EXPLAIN(YES). If it’s hard to get an outage to rebind with EXPLAIN(YES), or if the plan table you want to populate belongs to a different owner ID, you can instead copy the package under that owner with EXPLAIN(YES) rather than rebinding the original.
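Sketches of both approaches follow, with placeholder collection, owner, and qualifier names (the package names come from our example):

   REBIND PACKAGE(COLLID1.ACCT001) EXPLAIN(YES)

   BIND PACKAGE(TUNECOLL) COPY(COLLID1.ACCT001) OWNER(TUNEID) EXPLAIN(YES)

The copy form binds the package into a separate collection under an owner that already has a plan table, so the original package is left untouched. Once the plan table is populated, a query along these lines pulls the rows of interest:

   SELECT PROGNAME, QUERYNO, QBLOCKNO, PLANNO, METHOD,
          TNAME, ACCESSTYPE, MATCHCOLS, ACCESSNAME,
          SORTC_ORDERBY, PREFETCH
     FROM TUNEID.PLAN_TABLE
    WHERE PROGNAME IN ('ACCT001', 'RPTS001')
    ORDER BY PROGNAME, QUERYNO, QBLOCKNO, PLANNO;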

Figure 4 shows PLAN_TABLE data for two of the most expensive programs in our example. Here, our most expensive program issues a simple SQL statement with matching index access to the PERSON_ACCT table, and it orders the result, which causes a sort. The programmers advised that the query rarely returns more than a single row of data, so in this case a bubble sort in the application program replaced the DB2 sort. The bubble sort algorithm was almost never exercised because the query rarely returned more than one row, and the CPU associated with DB2 sort initialization was avoided. Since this query was executing many thousands of times per day, the CPU savings were substantial.

Our high-consuming reporting program is performing a hybrid join. While we don’t frown on hybrid joins, our system statistics showed Relational Data System (RDS) subcomponent failures in the subsystem. We wondered whether this query was hitting an RDS failure at run-time and reverting to a tablespace scan, and that turned out to be true. We tuned the statement to use a nested loop join instead of the hybrid join; the RDS failures and subsequent tablespace scans were avoided, and CPU and elapsed time improved dramatically.

While these statements may not have caught someone’s eye just by looking at EXPLAIN results, when combined with the accounting data, they screamed for further investigation.

The Tools You Need

The application performance summary report identifies applications of interest and provides the initial tool for reporting to management. Other tools and pieces of documentation, such as plan table output, performance traces, and your online performance monitor, can then help you identify and tune the SQL statements in your most expensive packages.

Don’t overlook the importance of examining the application logic, primarily with respect to the quantity of SQL statements being issued. The best-performing SQL statement is the one that’s never issued, and it’s surprising how often application programs go to the database when they don’t have to. A program may be executing the world’s best-performing SQL statements, but if the data isn’t used, then they’re really poor-performing statements.

A Word on Dynamic SQL

Many Web-based or client/server applications being developed today use DB2 on the mainframe as the database server. These applications, though capable of using static SQL, typically use dynamic SQL to access DB2. This presents a challenge for DBAs and systems programmers responsible for capturing and reporting on inefficient SQL: most of the SQL is accounted for under just a few packages (e.g., DSNJDBC2), so our package-level accounting methodology lumps all these applications and queries together.

However, there are several ways to capture and isolate accounting information for these dynamic applications. You can use the correlation ID to identify the application, although this isn’t effective for Java, since the executable is actually the Java virtual machine. You can set specific authorization IDs for each application, and then further itemize your package-level information by application using those IDs. You can also use special DB2 accounting fields (e.g., ACCTSTR) that the applications can set when they establish their connections (your reporting program must be able to read these newer fields).

Techniques for capturing dynamic SQL statements include running a performance trace or a Call Level Interface (CLI) or Java Database Connectivity (JDBC) trace. Each technique will get you the statement text and response time; the performance trace will also report CPU consumption, getpages, index access, and so on. Online performance monitors are also useful, but they depend more on timing: you have to catch the SQL statement while it’s executing, which is difficult when many statements run quickly.

Another lesser-known option is the DB2 CLI/ODBC/JDBC static profiling feature. Although this feature’s primary purpose is to capture SQL statements so they can be bound as static SQL, it will capture all SQL statements issued while it’s in capture mode. A DB2 command is then used to bind the captured statements into a package, and the EXPLAIN(YES) bind option can be used to generate access path information.
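A sketch of the client-side setup follows. The keyword and command names shown (STATICMODE, STATICCAPFILE, STATICPACKAGE, and the db2cap bind command) are assumptions to verify against your driver level’s CLI documentation, and the data source, package, and file names are placeholders:

   [WEBAPPDB]
   STATICMODE=CAPTURE
   STATICPACKAGE=TUNEID.WEBPKG
   STATICCAPFILE=/tmp/webapp.cpt

   db2cap bind /tmp/webapp.cpt -d WEBAPPDB -u TUNEID -p password

Take the application out of capture mode once a representative set of statements has been captured, then bind and EXPLAIN as described above.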

Summary

Subsystem-wide, package-level reporting with application-level summaries lets you look at application performance from an enterprise perspective, taking into account both frequency of access and statement performance. It presents that information to management in an organized, meaningful manner and provides the starting point for identifying the applications and packages that offer the best tuning possibilities. This technique works for tuning in-house or purchased software and helps management direct tuning efforts to the applications that have the most impact on the bottom line.