
Mar 23 ’10

Snapping the DB2 for z/OS Dynamic Statement Cache


DB2 for z/OS does a good job of caching dynamic SQL statements to avoid recompiling (aka binding) the statements on the fly. This can save significant system resources. However, we still have the problem of figuring out just what dynamic statements are coming into DB2 and how they’re performing. DB2 for z/OS Version 8 introduced the ability to capture and report on the statements in the dynamic statement cache. Here, we’ll focus on use and management of this facility, and how to report, understand, and act on the available performance information.
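To give a taste of the facility before digging into the details, here is a minimal sketch of snapping the cache with EXPLAIN STMTCACHE ALL. It assumes you’ve created DSN_STATEMENT_CACHE_TABLE under your current SQLID (sample DDL ships with DB2) and that the monitor traces that externalize the cache statistics are started; the column names in the query are typical but worth verifying against your own explain tables.

   -- Snap every statement currently in the dynamic statement cache into
   -- DSN_STATEMENT_CACHE_TABLE for the current SQLID
   EXPLAIN STMTCACHE ALL;

   -- Report the most expensive cached statements by accumulated CPU
   SELECT STMT_ID, STAT_EXEC, STAT_ELAP, STAT_CPU,
          SUBSTR(STMT_TEXT, 1, 100) AS STMT_TEXT
     FROM DSN_STATEMENT_CACHE_TABLE
    ORDER BY STAT_CPU DESC;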

Introduction to the Dynamic Statement Cache 

The dynamic statement cache is an area of memory reserved inside DB2 for storing the text of dynamic SQL statements and the compiled run-time structures associated with each statement (also referred to as a prepared statement). It’s part of EDM storage and is enabled by the CACHEDYN installation parameter, which defaults to YES. The EDMSTMTC installation parameter controls the size of the cache, which can range from 5MB to 2GB. There’s a formula in the installation guide you can use to properly size the cache, but typically a systems programmer sizes the pool based on the amount of available memory.

The primary purpose of this cache is to avoid the overhead of re-preparing a statement for every execution. The statement prepare process, where DB2 determines the access path, can be expensive, especially for complicated statements, and can result in high CPU costs. The concept is simple: If the user and statement don’t change, and no table structures or statistics change in the database, then there’s no reason to re-prepare the statement; the previous executable version can be reused. 

A potential performance issue is that dynamic statements must be bound during the prepare, much as static statements are bound at program bind time. During the bind process, statements are validated, object definitions and statistics are read from the DB2 system catalog, candidate access paths are evaluated and the optimal one chosen, and the run-time structures are created and stored.

This process can be quite expensive, especially for more complicated statements. In some transaction environments, the prepare represents about 90 percent of the transaction CPU cost. The dynamic statement cache is used to retain the output from the bind process across executions so it can be reused, avoiding the bind overhead. If an incoming statement matches the authorization id and SQL text (byte for byte including spaces), and the object structures and catalog statistics have remained unchanged, then the run-time structures are reused and the prepare avoids the statement bind process. This can dramatically improve performance of some dynamic statements. 
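As a concrete illustration, the embedded dynamic SQL pattern below shows where the savings come from. The table, columns, and host variable names are made up for the example, and the EXEC SQL prefixes and declare sections are omitted.

   -- :STMTTXT holds: 'SELECT BALANCE FROM ACCOUNTS WHERE ACCT_ID = ?'
   PREPARE S1 FROM :STMTTXT;    -- first prepare of this text: full bind, result cached
   EXECUTE S1 USING :ACCTID;

   -- A later prepare of byte-for-byte identical text, under the same
   -- authorization id, finds the cached copy and skips the full bind
   -- (a "short" prepare), provided no DDL or statistics change invalidated it
   PREPARE S2 FROM :STMTTXT;
   EXECUTE S2 USING :ACCTID;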

Leveraging the Dynamic Statement Cache 

Only certain statements can take advantage of the dynamic statement cache: those normally associated with transactions, statements that execute often and repetitively. For an incoming statement to get a hit in the statement cache, its text must match an existing cached statement, byte for byte, under the same authorization id. If any part of the statement text differs, the statement must go through the expensive bind process.

It’s important to code SQL statements so they have a better chance of matching in the cache. The main technique is to use parameter markers for the variables in the statement. If you use embedded literals, and those literal values change across statement executions, there’ll be little or no matching. You should also keep authorization ids constant: a statement can only match in the cache under the same authorization id, so a three-tier architecture with connection pooling and fixed authorization ids increases matching in the statement cache.
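The difference is easy to see in the statement text itself. The table and column below are illustrative; the statements show the text strings that would be prepared dynamically.

   -- Embedded literals: each distinct value produces different statement text,
   -- so these two statements each pay for a full prepare
   SELECT BALANCE FROM ACCOUNTS WHERE ACCT_ID = 12345;
   SELECT BALANCE FROM ACCOUNTS WHERE ACCT_ID = 67890;

   -- Parameter marker: the text is identical on every execution, so after the
   -- first prepare the cached statement can be reused
   SELECT BALANCE FROM ACCOUNTS WHERE ACCT_ID = ?;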

Remember that matching isn’t free, but its cost is low, especially relative to the cost of the prepare/bind process; statement cache matching overhead can represent about 10 percent of the CPU consumed per statement. If most of the statements executed aren’t matching in the cache, then it’s better to turn off the statement cache.
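As a rough back-of-the-envelope way to frame that trade-off (this is a sketch, not a DB2 formula): call the cache hit ratio h, the per-statement matching cost C_match, and the full bind cost C_bind. Then, approximately,

   cost with cache on  ~ C_match + (1 - h) * C_bind
   cost with cache off =  C_bind

so the cache pays for itself whenever C_match < h * C_bind. Because a full bind typically costs many times a cache lookup, even a modest hit ratio justifies leaving CACHEDYN set to YES; it’s only when almost nothing matches that turning the cache off makes sense.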
