Top-10 Ways to Waste CPU Using DB2

Common problems include excessive LOCK TABLE statements during online processing, too many suspensions, too many timeouts, and too many deadlocks. These problems usually result from taking too many locks or from poor application design, and they hurt CPU consumption, availability, and throughput.

Are you using the default package in WebSphere that’s bound with Read Stability (RS)? This may be causing share lock escalations. Are you using row-level locking inappropriately? This costs a lot of CPU, and the row locks count toward your lock escalation thresholds. If you see drain requests during online processing when no utilities are running, check whether partition key updates are occurring. (This applies to V7 only; in V8, it’s not a problem.)

There are several ways to reduce excessive locking, and avoidance is key. In V7, you need to be bound with CURRENTDATA(NO) and ISOLATION(CS) and commit frequently. In V8, you can be bound with either CURRENTDATA(NO) or CURRENTDATA(YES) with ISOLATION(CS), and commit frequently.
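
As an illustration, enabling lock avoidance on an existing package might look like the following DSN subcommand; the collection and package names here are hypothetical:

   REBIND PACKAGE(MYCOLL.MYPKG) CURRENTDATA(NO) ISOLATION(CS)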

Lock avoidance is a process that puts a latch on the data rather than a lock; the latch is held in the buffer manager and costs only a few CPU instructions. It’s really just controlling the sequence in which users touch the data. An even cheaper option is uncommitted read, which avoids the overhead of both locking and lock avoidance.
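
Uncommitted read can be requested at bind time with ISOLATION(UR) or overridden per statement with the WITH UR clause; a minimal sketch, using a hypothetical table:

   SELECT ACCT_ID, ACCT_BALANCE
     FROM ACCOUNT
     WITH UR;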

Are you worried about seeing uncommitted data? How do you account for it now? Unless you’re using Repeatable Read (RR), once you leave the page, you release the lock, and the data could change before you read it again.

When using uncommitted read, you can take advantage of a technique called optimistic locking to avoid anomalous updates and deletes. This technique places an update timestamp column on every table, then uses that column as part of the predicate of every UPDATE and DELETE statement. If any other application process has modified the data, or rolled back uncommitted updates, the UPDATE or DELETE fails. This ensures data integrity and high availability, along with lower CPU costs. Figures 2 and 3 show a SELECT and UPDATE in an application that uses optimistic locking, a technique that will be integrated into DB2 for z/OS Version 9.
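
A minimal sketch of the pattern, with hypothetical table, column, and host-variable names:

   -- Read the row and capture its current update timestamp
   SELECT ACCT_BALANCE, UPDATE_TS
     INTO :acct-balance, :old-update-ts
     FROM ACCOUNT
     WHERE ACCT_ID = :acct-id
     WITH UR;

   -- Update only if no other process has touched the row since the read
   UPDATE ACCOUNT
     SET ACCT_BALANCE = :new-balance,
         UPDATE_TS = CURRENT TIMESTAMP
     WHERE ACCT_ID = :acct-id
       AND UPDATE_TS = :old-update-ts;

If another process modified the row (or rolled back an uncommitted update you read), the timestamp no longer matches, the UPDATE affects no rows (SQLCODE +100), and the application rereads and retries instead of overwriting the other process’s work.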

We can’t stress enough that you must be careful and selective with row-level locking. It’s the default on other DBMSes (including DB2 LUW), but it also works differently on those platforms. On DB2 for z/OS, a row lock carries the same overhead as any other lock, and the number of row locks counts toward the lock escalation threshold, so escalation can occur sooner. Row locks don’t escalate to page locks; they escalate straight to a tablespace lock. They also don’t resolve deadlocks; they can actually cause more.
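
Where row-level locking has crept in and page locking would do, the lock size can be checked in the catalog and altered back; a sketch with hypothetical database and tablespace names:

   -- LOCKRULE 'R' = row, 'P' = page, 'A' = let DB2 choose
   SELECT NAME, LOCKRULE
     FROM SYSIBM.SYSTABLESPACE
     WHERE NAME = 'MYTS';

   ALTER TABLESPACE MYDB.MYTS LOCKSIZE PAGE;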

8 Too much processing in SQL statements
