DB2 & IMS

If new and better access paths aren’t reason enough to do a rebind, here’s another: Back in V3, DB2 delivered a function called fast column processing, or SPROCs (SELECT procedures). A SPROC is a performance enhancement that internally assists DB2’s movement of columns. If a plan or package used a SPROC in V7, that SPROC was built with 31-bit code. Because DB2 V8 runs in 64-bit mode, the SPROC is disabled. The only way to re-enable it is to rebind the plan or package. Until you do that rebind, any plan or package that was using a SPROC in V7 will run with degraded performance. Following the rebind, you should see a performance improvement and a reduction in CPU. The rebind can occur in CM or in NFM; if you rebind the plan or package in CM, there’s no reason to bind it again in NFM for the sake of the SPROC.
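The rebind itself requires nothing exotic. As a minimal sketch, the batch TSO step below rebinds one plan and one package under the DSN command processor; the subsystem name (DB2P), plan name (PLANA), and collection.package (COLLA.PKGA) are placeholders you would replace with your own:

//REBIND   EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
  DSN SYSTEM(DB2P)
  REBIND PLAN(PLANA)
  REBIND PACKAGE(COLLA.PKGA)
  END
/*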

DB2 does have other procedures, IPROCs (INSERT procedures) and UPROCs (UPDATE procedures), that aren’t affected by the move to 64-bit and require no action.

There’s a fairly old APAR available that fixes a problem with the metrics that track SPROCs. Make sure you have it applied so the information gathered on SPROC use isn’t misleading. The APAR is PK31412: “Invalid SPROC Counter QISTCOLS Not Updated.”

OK, so that eased some of the CPU increase we might have experienced. What else can we do in CM that’s more or less non-disruptive? One of the easiest changes you can make is to long-term page fix DB2’s buffer pools. You don’t want to fix all of them right away; rather, take advantage of this feature gradually. A page has to be fixed in memory for the duration of the I/O process. In V7, DB2 fixed and freed the page around each I/O for you, and the 31-bit instructions that did it were very inexpensive. In V8, however, the 64-bit flavors of these instructions are more expensive. The pages must still be fixed in memory; you have no control over that. The solution is to long-term page fix an entire buffer pool, and V8 added the PGFIX keyword to the ALTER BUFFERPOOL command to accomplish this. Start with your critical buffer pools that are experiencing a lot of I/O, and slowly work your way through the rest of the pools while monitoring the effect. Leave the catalog and sort work pools for last, if you fix them at all.
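As a sketch, assuming BP5 is one of the busy pools you want to start with (a placeholder name), the command pair looks like this; keep in mind the PGFIX change doesn’t take effect until the next time the buffer pool is allocated:

-ALTER BUFFERPOOL(BP5) PGFIX(YES)
-DISPLAY BUFFERPOOL(BP5) DETAIL

The DISPLAY output lets you confirm the pool’s attributes and watch its I/O activity as you work through the remaining pools.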

While you’re still in CM, you also can “supersize” your buffer pools and increase your use of DB2’s hardware compression. Both can reduce the amount of I/O DB2 performs, which in turn reduces the CPU DB2 uses. Compression requires a DDL change, and for both you need to keep an eye on your IFCID 225 records and RMF reports. Even though people talk as if you now have unlimited space above the bar, you don’t. You still need to ensure that the amount of virtual storage DB2 uses never exceeds the amount of real storage you have available. Even the slightest amount of paging could be catastrophic to DB2’s performance.
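A sketch of both changes, using placeholder names (BP3 for the pool, DBA1.TSORDERS for the table space), might look like this; note that VPSIZE is specified in buffers, not bytes:

-ALTER BUFFERPOOL(BP3) VPSIZE(100000)

ALTER TABLESPACE DBA1.TSORDERS COMPRESS YES;
-- No rows are compressed until a dictionary exists; run REORG
-- (or LOAD REPLACE) on the table space to build it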

To save CPU while in CM, consider whether you can stop using the DSNZPARM keywords MINSTOR and CONTSTOR. Both may have been invaluable to you in V7, particularly if you were having storage problems. However, they both spend CPU to reduce storage use. When the choice was between burning a little extra CPU and keeping DB2 up and running by not running out of storage, the choice was easy. Now that you’re in V8, re-examine your need for these keywords. If you’ve verified that you no longer have a storage issue and aren’t even close to having one, consider turning them off and reclaiming the CPU they use. However, if there’s any chance your storage problem may resurface, you probably want to leave them on. There’s an article discussing CONTSTOR on IBM’s DB2 support Website; to find it, search on the DSNZPARM keyword CONTSTOR.
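Both keywords live on the DSN6SPRM macro in your DSNZPARM assembly job (DSNTIJUZ in most shops). A minimal sketch of the relevant lines, with everything else omitted:

*  Excerpt from the DSN6SPRM macro invocation in DSNTIJUZ
         CONTSTOR=NO,
         MINSTOR=NO,

After reassembling and relinking the DSNZPARM module, the change takes effect at the next DB2 restart, or sooner via -SET SYSPARM if the parameters are online changeable at your maintenance level.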

If you’re using your accounting data to compare your V7 CPU usage to CPU use in V8 and you observe an increase, it’s possible your CPU cost didn’t really go up as much as you think. In some cases, CPU that was charged to the DB2 address spaces in V7 is now charged to the application’s TCB in V8. What initially appears to be an increase is no more than movement of the CPU cost from one place to another. To account for this, examine your entire DB2 subsystem to see whether an increase in one place is being offset by a decrease someplace else. Of course, you can’t compare anything if you’re not saving the records needed to reproduce your V7 accounting and statistics reports.

New Function Mode

If you’re using data sharing, you’ll want to quiesce your entire data-sharing group at some point after going to NFM. This is necessary to enable DB2’s new locking protocol II, which changes the locking strategy used with the coupling facility and could buy back a few CPU cycles. The DB2 manuals cover locking protocol II in great detail.
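Enabling it isn’t a single command so much as a procedure: the whole group has to come down so the lock structure can be reallocated. As a rough sketch for a hypothetical two-member group (member names DB1A and DB2A are placeholders), the sequence is simply:

-DB1A STOP DB2 MODE(QUIESCE)
-DB2A STOP DB2 MODE(QUIESCE)
-DB1A START DB2
-DB2A START DB2

Wait until every member is down before the restarts; the data sharing manual walks through the details, including when the lock structure actually gets reallocated.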

If you haven’t rebound your plans and packages in CM, you’re definitely going to want to do it once you get to NFM. This will kick in the SPROCs again and reduce the “puffing” DB2 has to perform when it executes a plan or package. Until you rebind, everything in a plan or package is still 31-bit. At run-time, DB2 has to “puff” each structure up to its 64-bit equivalent so it can run, and once execution completes, DB2 throws the puffed copy away. The next time that package is needed, DB2 goes through the entire “puffing” process again. Rebinding for the sole purpose of eliminating the “puffing” will only buy back a minuscule, almost immeasurable, amount of CPU. However, it’s CPU, and rebind is something you should do anyway.
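If you want to sweep through all of your packages, a common trick is to generate the REBIND subcommands straight from the catalog and feed the output into a batch DSN job like the earlier sketch. A minimal example follows; add a WHERE clause to narrow it to the collections you care about, and account for package versions if you use them:

SELECT 'REBIND PACKAGE(' || RTRIM(COLLID) || '.' || RTRIM(NAME) || ')'
  FROM SYSIBM.SYSPACKAGE
 ORDER BY COLLID, NAME;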
