Aug 27 ’13

Streamlining Operations and Managing Complexity With DB2

by Troy Coleman in Enterprise Tech Journal

Over the last 10 years, the velocity of change in applications and databases has increased while IT staff has become leaner. This challenges Database Administrators (DBAs) to keep databases optimized and in step with the applications they support. Fortunately, there are now tools that provide greater visibility into end-to-end application performance and its impact on DB2, as well as new automation that can simplify some of the workloads DBAs face on a daily basis.

One way to look at new DB2 tools is to review some of the specific DB2 maintenance and performance issues enterprises are facing today.

Change Management

A persistent challenge many enterprises face with DB2 and other databases is maintaining an effective change management practice that ensures database structures keep pace with the changes in the applications the database supports. This isn’t just a question of ensuring DB2 and the applications that use it stay synchronized in production; the same level of synchronization should also be maintained throughout application development. If you’re a DBA, this means that databases and applications must stay in sync in the initial development process, in the QA environment where the application and database get checked out, and finally, in the production environment.

On the application side, sites will always be challenged to maintain or improve performance, especially with the growth of Web-based, customer-facing applications such as loan processing, insurance claim processing or online account review. When IT detects an access delay or other performance issue, it immediately addresses the problem in the application, but not necessarily in the database. The key with DB2 or any other database is to be proactive with database changes so that the database is always managed with an eye toward minimizing adverse impact on application performance.

To do this, DBAs should create a baseline performance model before making changes and then compare the new performance against that baseline. This lets you see, for example, whether moving from a segmented table space to a universal partition-by-growth table space has a negative impact on your application.
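
A minimal sketch of one such comparison uses the optimizer’s cost estimates as the yardstick; it assumes the EXPLAIN tables (PLAN_TABLE and DSN_STATEMNT_TABLE) exist under your SQLID, and the ORDERS table and query numbers are hypothetical:

-- Baseline: EXPLAIN a representative statement before the table space change,
-- tagging the "before" run with an arbitrary QUERYNO of 100.
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT ORDER_ID, ORDER_TOTAL
    FROM ORDERS
   WHERE CUSTOMER_ID = 12345;

-- ...convert the table space, rebind, then EXPLAIN the same statement
-- again as QUERYNO 200 and compare the optimizer's cost estimates...
SELECT QUERYNO, PROCMS AS EST_MS, PROCSU AS EST_SERVICE_UNITS
  FROM DSN_STATEMNT_TABLE
 WHERE QUERYNO IN (100, 200)
 ORDER BY QUERYNO;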

Collaborative Problem-Solving

In past IT practice, the DBA would use his own individualized tools to monitor, troubleshoot and tune DB2 for performance. Today, a common toolset is available that can show everyone involved with a given system or application the end-to-end performance—from the time a transaction enters through a Web service, its path through a variety of different servers, and finally, its entry into the mainframe. The DBA can compare application and DB2 performance between any points in time he chooses—whether it’s how the application was performing six months ago, one year ago or presently. From a database perspective, the DBA can see where slow transactions are coming from (e.g., from a server in a particular geographic region). The DBA also has the ability to see into the total topology of an application that’s being tracked. In this way, the DBA can understand application behavior so he can determine whether a particular slowdown in performance is normal or abnormal for the application.

Because this toolset is universal, everyone responsible for end-to-end application performance—including the DBA—is working with a “single version of the truth.” They no longer independently use their own tools, which can generate disparate information and delay problem resolution.

Predictive Analysis

Many enterprises don’t take full advantage of some of the predictive tools they already have for advance modeling of proposed changes to applications and databases before they implement them. In some cases, they don’t realize these tools are available. In other cases, they have over the years constructed their own regression testing and predictive models for their particular IT infrastructures and they’re hesitant to redirect or automate some of these processes.

Unfortunately, these older methodologies can also be more subject to human error and oversights. They can even adversely impact application performance in production.

In contrast, when a site uses newer tools and automation, it can perform a predictive analysis of application and DB2 performance. The tool returns results that list any potential negative impact the proposed change could have on metrics such as CPU usage and run-time, and indicate whether the application will run better or worse with the change. This predictive process can be automated so it won’t proceed once a performance problem is detected. At that point, the tool also produces a set of recommendations for fixes. The DBA, and others working on overall application performance, have the option of implementing or bypassing these recommendations. In essence, the automated, predictive tool provides another set of eyes in case something has been overlooked.
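
As a rough illustration of that kind of automated gate, reusing the hypothetical query numbers from the earlier baseline sketch and an arbitrary 10 percent regression threshold, a batch step could run a query like this and stop the promotion whenever it returns any rows:

-- Flag statements whose estimated cost after the change is more than
-- 10 percent above the baseline estimate (the threshold is an assumption).
SELECT CAND.QUERYNO,
       BASE.PROCSU AS BASELINE_SERVICE_UNITS,
       CAND.PROCSU AS CANDIDATE_SERVICE_UNITS
  FROM DSN_STATEMNT_TABLE BASE,
       DSN_STATEMNT_TABLE CAND
 WHERE BASE.QUERYNO = 100
   AND CAND.QUERYNO = 200
   AND CAND.PROCSU > BASE.PROCSU * 1.1;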

DB2 Performance Tuning

New simulation tools can also be exploited to help sites understand application and database usage, to analyze workflows and to recommend best practices.

An example of this is a subsystem analyzer that looks at I/O activity. The DBA can look at the data that’s read from DB2 and brought into memory buffers. What he wants to know is how efficiently the buffers are being used. To improve performance, he can partition a DB2 object so the data can be spread across more memory.
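
The buffer decisions themselves usually come from monitor data, but a simple catalog query along these lines (a sketch only; NACTIVE is only as current as the last RUNSTATS) can show which objects share each buffer pool and how large they are, which helps identify candidates for partitioning or for a pool of their own:

-- List table spaces by buffer pool, largest first, to spot objects
-- that dominate a shared pool.
SELECT BPOOL, DBNAME, NAME, NACTIVE AS ACTIVE_PAGES
  FROM SYSIBM.SYSTABLESPACE
 ORDER BY BPOOL, NACTIVE DESC;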

Performance analyzers can also be used to fine-tune SQL statements that can impact DB2 performance. The analyzer can be embedded in a workflow, and it can be preset to identify the SQL statements that cost the most to run, based on their resource consumption. The tool returns metrics that explain the processing and the CPU costs. This enables sites to reduce the cost of their SQL statements and improve how these statements process against DB2. In a batch process, the analyzer can be employed to assess the impact of a potential change, and even to stop the batch process altogether if the return code exceeds a specific value.
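
Such analyzers typically build on instrumentation DB2 already provides. As a minimal sketch of that kind of ranking for dynamic SQL, assuming the DSN_STATEMENT_CACHE_TABLE explain table exists and the statement cache statistics trace is active:

-- Write runtime statistics for every cached dynamic statement to
-- DSN_STATEMENT_CACHE_TABLE.
EXPLAIN STMTCACHE ALL;

-- Rank the cached statements by accumulated CPU time.
SELECT STAT_EXEC AS EXECUTIONS,
       STAT_CPU AS CPU_TIME,
       SUBSTR(STMT_TEXT, 1, 60) AS STATEMENT_TEXT
  FROM DSN_STATEMENT_CACHE_TABLE
 ORDER BY STAT_CPU DESC
 FETCH FIRST 10 ROWS ONLY;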

Database Reorganization and Backup

In the past, organizations ran a weekly backup and database reorganization and rarely monitored the CPU costs incurred from running a full reorg this often. This practice persists for many, as it’s a comfortable feeling to know your backup and DB2 reorganization process is mature and foolproof, given the many other priorities on your plate.

However, for those shops intent on reducing their CPU costs and potentially the resources they need for backup and reorganization, there are now tools that can provide real-time statistics and keep track of all new updates to DB2 database tables since the last reorganization. These tools give DBAs the opportunity to automate DB2 backup and reorganization scripts based on certain thresholds they define.

For example, let’s say the typical DB2 backup and reorganization consists of 100 jobs that must be run each week. Sometimes, the database tables involved in these jobs have no changes and, therefore, no reason to be reorganized. In other cases, there have been changes, so reorganization must be done. By looking only at DB2 tables where changes have actually occurred, an automated tool can reorganize just those tables, potentially turning a job stream that reorganizes hundreds of DB2 objects over many hours and a great deal of CPU into one that reorganizes only a few objects, completes in an hour and consumes far less CPU.
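
A sketch of such a threshold test against DB2’s real-time statistics, using an arbitrary rule of “reorganize when more than 10 percent of the rows have changed since the last REORG” (each site would set its own percentage):

-- Table spaces whose inserts, updates and deletes since the last REORG
-- exceed 10 percent of total rows; counters can be NULL until DB2
-- externalizes its in-memory statistics.
SELECT DBNAME, NAME, PARTITION,
       REORGINSERTS + REORGUPDATES + REORGDELETES AS CHANGES_SINCE_REORG,
       TOTALROWS
  FROM SYSIBM.SYSTABLESPACESTATS
 WHERE TOTALROWS > 0
   AND REORGINSERTS + REORGUPDATES + REORGDELETES > TOTALROWS * 0.10
 ORDER BY DBNAME, NAME, PARTITION;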

Conclusion

New tools and automation offer enormous potential to DBAs to monitor and optimize the performance of their DB2 databases. In some cases, shops are unaware of the full capabilities of these tools, so it’s a question of fully understanding and exploiting the performance and cost benefits these tools can provide.

IT is also being asked to quantify the dollar value of the investments it makes. New tools assist this process because they produce reports that show the history of CPU consumed and provide visibility into the reduction in CPU usage achieved through tuning efforts, along with the corresponding cost savings. Empirical results substantiate this. In one case, a site undertook a two- to three-month performance tuning project on the batch side of its processing and saved $350,000 annually in CPU costs charged by its outsourcing vendor. In another case, a site shaved one-quarter of a second off its transaction times and saved almost $1 million a year. This shows that even a small reduction in CPU for a transaction with a high execution rate can save a great deal of money.
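
A hypothetical back-of-envelope calculation makes the arithmetic concrete; the execution rate, CPU saved per execution and chargeback rate below are assumptions for illustration, not figures from these sites:

-- 5,000,000 executions a day, 2 milliseconds of CPU saved per execution
-- and a chargeback rate of $0.10 per CPU second (all assumed values)
-- come to roughly $365,000 a year.
SELECT 5000000 * 0.002 * 0.10 * 365 AS ANNUAL_SAVINGS_USD
  FROM SYSIBM.SYSDUMMY1;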

IT also has the ability to tailor how it uses these tools to best fit its infrastructure and its mode of operation. A site can use these tools’ automation while retaining “halt points” within the automation that let IT inspect a situation and then either proceed with processing or modify it. Automation comes with best-practice rule sets, but IT can modify these rules to conform to the processing parameters it wants for its own data and operations.

Just as important, these tools monitor the entirety of IT processing, regardless of which resources, applications and databases the workloads traverse. A common set of information is available to all stakeholders in the process, and each stakeholder can customize instrumentation and dashboards to his liking and needs. This gives the DBA in charge of DB2 performance control over what he wants to monitor and tune. It also provides a common set of information that’s consistent with the performance data others in the process receive (e.g., the application developer, the network administrator, the systems programmer). In today’s collaborative IT environment, common toolsets and consistent information are as critical as ever, and they are a sure way to keep application and DB2 changes in sync.