Over the last 10 years, the velocity of change in applications and databases has increased while IT staffs have become leaner. This challenges Database Administrators (DBAs) to keep databases optimized and in step with the applications they support. Fortunately, new tools provide greater visibility into end-to-end application performance and its impact on DB2, and new automation can simplify some of the workloads DBAs face on a daily basis.
One way to look at new DB2 tools is to review some of the specific DB2 maintenance and performance issues enterprises are facing today.
A persistent challenge many enterprises face with DB2 and other databases is maintaining an effective change management practice that ensures database structures keep pace with the changes in the applications the database supports. This isn’t just a question of ensuring DB2 and the applications that use it stay synchronized in production; the same level of synchronization should also be maintained throughout application development. If you’re a DBA, this means that databases and applications must stay in sync in the initial development process, in the QA environment where the application and database get checked out, and finally, in the production environment.
On the application side, sites will always be challenged to maintain or improve performance, especially with the growth of Web-based, customer-facing applications such as loan processing, insurance claims processing or online account review. When IT detects an access delay or other performance issue, it immediately addresses the problem in the application, but not necessarily in the database. The key with DB2, or any other database, is to be proactive with database changes so the database is always managed with an eye toward minimizing adverse impact on application performance.
To do this, DBAs should create a baseline performance model before making changes and then compare post-change performance against that baseline. For example, if you move from a segmented table space to a universal partition-by-growth table space, comparing against the baseline shows whether the change has a negative impact on your application.
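As a concrete illustration of the baseline comparison described above, the sketch below compares before-and-after measurements and flags any metric that regressed beyond a tolerance. This is a hypothetical example: the metric names (CPU seconds, elapsed seconds, getpages), the sample numbers and the 10 percent tolerance are assumptions for illustration, not output from any specific DB2 tool.

```python
def compare_to_baseline(baseline, current, tolerance=0.10):
    """Flag metrics that regressed more than `tolerance` (e.g., 10%) vs. baseline.

    Both arguments map metric names to measured values where lower is better.
    Returns a dict of regressed metrics and their fractional change.
    """
    regressions = {}
    for metric, base in baseline.items():
        new = current.get(metric)
        if new is None or base == 0:
            continue  # metric missing after the change, or no meaningful baseline
        change = (new - base) / base
        if change > tolerance:
            regressions[metric] = round(change, 3)
    return regressions

# Baseline captured before moving from a segmented table space to a
# partition-by-growth table space; all numbers are illustrative.
baseline = {"cpu_seconds": 4.0, "elapsed_seconds": 9.0, "getpages": 120_000}
after    = {"cpu_seconds": 4.1, "elapsed_seconds": 12.6, "getpages": 118_000}

print(compare_to_baseline(baseline, after))  # {'elapsed_seconds': 0.4}
```

Here CPU time grew only 2.5 percent (within tolerance) and getpages improved, so only the 40 percent elapsed-time regression is flagged for investigation.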
In past IT practice, the DBA used his own individualized tools to monitor, troubleshoot and tune DB2 for performance. Today, a common toolset is available that can show everyone involved with a given system or application the end-to-end performance: from the time a transaction enters through a Web service, through its path across a variety of servers, to its entry into the mainframe. The DBA can compare application and DB2 performance between any two points in time he chooses, whether it's how the application was performing six months ago, one year ago or presently. From a database perspective, the DBA can see where slow transactions originate (e.g., from a server in a particular geographic region). The DBA can also see into the total topology of an application being tracked. In this way, the DBA learns the application's normal behavior, so he can determine whether a particular slowdown in performance is normal or abnormal. Because this toolset is universal, everyone responsible for end-to-end application performance, including the DBA, works from a “single version of the truth.” They no longer independently use their own tools, which can generate disparate information and delay problem resolution.
Many enterprises don’t take full advantage of some of the predictive tools they already have for advance modeling of proposed changes to applications and databases before they implement them. In some cases, they don’t realize these tools are available. In other cases, they have over the years constructed their own regression testing and predictive models for their particular IT infrastructures and they’re hesitant to redirect or automate some of these processes.
Unfortunately, these older methodologies are more prone to human error and oversight, and they can even adversely impact application performance in production.
In contrast, when a site uses newer tools and automation, it can perform a predictive analysis of application and DB2 performance before implementing a change. The tool returns results listing any potential negative impact the proposed change could have, in metrics such as CPU usage and run-time, and indicates whether the application will run better or worse with the change. The predictive process can be automated so the change won't proceed once a performance problem is detected; at that point, the tool also produces a set of recommended fixes. The DBA, and others working on overall application performance, can implement or bypass these recommendations. In essence, the automated, predictive tool provides another set of eyes in case something has been overlooked.
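The stop-and-recommend flow described above can be sketched roughly as follows. This is a hypothetical illustration of the control logic only; the `PredictionResult` structure, the 5 percent threshold and the override flag are assumptions for the sketch, not an actual product API.

```python
from dataclasses import dataclass, field

@dataclass
class PredictionResult:
    # Predicted change per metric as a fraction: 0.12 means 12 percent worse.
    deltas: dict
    # Suggested fixes the tool produces when a problem is detected.
    recommendations: list = field(default_factory=list)

def gate_change(result, threshold=0.05, override=False):
    """Return (may_proceed, problems).

    The change is halted automatically when any predicted metric regresses
    beyond `threshold`, unless the DBA explicitly overrides the stop after
    reviewing the tool's recommendations.
    """
    problems = {m: d for m, d in result.deltas.items() if d > threshold}
    if not problems:
        return True, {}
    return override, problems

# Illustrative run: a predicted 12 percent CPU regression halts the change.
prediction = PredictionResult(
    deltas={"cpu_usage": 0.12, "run_time": 0.02},
    recommendations=["review the access path before promoting the change"],
)
proceed, problems = gate_change(prediction)
print(proceed, problems)  # False {'cpu_usage': 0.12}
```

Passing `override=True` models the DBA's option to bypass the recommendations and proceed anyway, while still recording which metrics were flagged.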
DB2 Performance Tuning
New simulation tools can also be used to help sites understand application and database usage, analyze workflows and recommend best practices.