DB2 & IMS

An example of this is a subsystem analyzer that examines I/O activity. The DBA can see the data that's read from DB2 and brought into memory buffers, and what he wants to know is how efficiently those buffers are being used. To improve performance, he can partition a DB2 object so the data is spread across more memory buffers.
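As a rough sketch of the partitioning idea, the DDL below range-partitions a hypothetical ORDERS table by date so its rows land in four partitions rather than one large table space. The table name, columns and partition boundaries are invented for the example; the actual statements depend on the DB2 version in use and the site's table space design.

   -- Hypothetical table, partitioned by date so I/O and buffer usage
   -- are spread across four partitions instead of one large object
   CREATE TABLE ORDERS
         (ORDER_ID    INTEGER NOT NULL,
          ORDER_DATE  DATE    NOT NULL,
          CUST_ID     INTEGER NOT NULL)
     PARTITION BY RANGE (ORDER_DATE)
         (PARTITION 1 ENDING AT ('2016-12-31'),
          PARTITION 2 ENDING AT ('2017-06-30'),
          PARTITION 3 ENDING AT ('2017-12-31'),
          PARTITION 4 ENDING AT (MAXVALUE));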

Performance analyzers can also be used to fine-tune the SQL statements that drive DB2 performance. The analyzer can be embedded in a workflow and preset to identify the SQL statements that cost the most to run, based on their resource consumption. The tool returns metrics that explain how each statement is processed and what it costs in CPU. This enables sites to reduce the cost of their SQL statements and improve how those statements process against DB2. In a batch process, the analyzer can be employed to assess the impact of a potential change, and even to stop the batch process altogether if the return code exceeds a specific value.
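As an illustration of the kind of data such an analyzer works from, DB2 for z/OS can externalize its dynamic statement cache statistics with EXPLAIN STMTCACHE ALL and then rank statements by accumulated CPU. This is only a minimal sketch, and it assumes the DSN_STATEMENT_CACHE_TABLE explain table already exists under the current authorization ID; a vendor analyzer adds far more context around these raw numbers.

   -- Snapshot the dynamic statement cache statistics into the
   -- explain table (it must already exist under the current SQLID)
   EXPLAIN STMTCACHE ALL;

   -- Rank cached statements by accumulated CPU time
   SELECT STAT_EXEC                 AS EXECUTIONS,
          STAT_CPU                  AS CPU_SECONDS,
          STAT_ELAP                 AS ELAPSED_SECONDS,
          SUBSTR(STMT_TEXT, 1, 80)  AS STATEMENT_TEXT
     FROM DSN_STATEMENT_CACHE_TABLE
    ORDER BY STAT_CPU DESC
    FETCH FIRST 10 ROWS ONLY;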

Database Reorganization and Backup

In the past, organizations ran a weekly backup and database reorganization and rarely monitored the CPU costs incurred from running a full reorg this often. This practice persists for many, as it’s a comfortable feeling to know your backup and DB2 reorganization process is mature and foolproof, given the many other priorities on your plate.

However, for those shops intent on reducing their CPU costs and potentially the resources they need for backup and reorganization, there are now tools that can provide real-time statistics and keep track of all new updates to DB2 database tables since the last reorganization. These tools give DBAs the opportunity to automate DB2 backup and reorganization scripts based on certain thresholds they define.

For example, let’s say the typical DB2 backup and reorganization consists of 100 jobs that must run each week. In some weeks, the database tables involved in these jobs have had no changes and therefore no reason to be reorganized; in other weeks, changes have occurred and reorganization is warranted. By looking only at the DB2 tables where changes have actually occurred, an automated tool can reorganize just those tables. Instead of reorganizing hundreds of DB2 objects over many hours and at a high CPU cost, the process handles only a few objects, completes in about an hour and consumes far less CPU.
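A minimal sketch of the kind of threshold test such a tool applies is shown below, using the DB2 for z/OS real-time statistics table SYSIBM.SYSTABLESPACESTATS. The 10 percent change threshold is purely illustrative; each site defines its own thresholds.

   -- Table spaces (or partitions) where inserts, updates and deletes
   -- since the last REORG exceed 10 percent of the rows in the object
   SELECT DBNAME,
          NAME,
          PARTITION,
          REORGLASTTIME,
          REORGINSERTS + REORGUPDATES + REORGDELETES AS CHANGES_SINCE_REORG
     FROM SYSIBM.SYSTABLESPACESTATS
    WHERE TOTALROWS > 0
      AND REORGINSERTS + REORGUPDATES + REORGDELETES > TOTALROWS * 0.10
    ORDER BY CHANGES_SINCE_REORG DESC;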

Conclusion

New tools and automation offer enormous potential to DBAs to monitor and optimize the performance of their DB2 databases. In some cases, shops are unaware of the full capabilities of these tools, so it’s a question of fully understanding and exploiting the performance and cost benefits these tools can provide.

IT is also being asked to quantify the dollar value of the investments it makes. New tools assist this process because they produce reports that show the history of CPU consumed and provide visibility into the reduction in CPU usage achieved by your tuning efforts, and into the corresponding cost savings. Empirical results substantiate this. In one case, a site undertook a two- to three-month performance tuning project on the batch side of its processing and saved $350,000 annually in CPU costs charged by its outsource vendor. In another case, a site shaved one-quarter of a second off its transaction times and saved almost $1 million a year. This shows that even a small CPU reduction for a transaction with a high execution rate can save a great deal of money.

IT also has the ability to tailor how it uses these tools to best fit its infrastructure and its mode of operation. A site can use the tools’ automation while retaining “halt points” within that automation, which give IT the chance to inspect a situation and then either proceed with processing or modify it. Automation comes with best-practice rule sets, but these rules can also be modified by IT to conform to the processing parameters it wants for its own data and operations.

Just as important, these tools monitor the entirety of IT processing, regardless of which resources, applications and databases the workloads traverse. A common set of information is available to all stakeholders in the process, and each stakeholder can customize instrumentation and dashboards to his own liking and needs. This gives the DBA in charge of DB2 performance control over what he wants to monitor and tune. It also provides a common set of information that’s consistent with the performance data others in the process receive (i.e., the application developer, the network administrator, the systems programmer). In today’s collaborative IT environment, common toolsets and consistent information are as critical as ever, and they’re a sure way to keep application changes and DB2 changes in step.
