This same puffing process happens to a DBD. In CM, DB2 keeps DBDs in V7 format for fallback purposes and puffs them each time they’re used; there’s nothing you can do to change that. In NFM, though, you can no longer fall back to V7; it’s time to get the DBDs into V8 64-bit format and stop doing all the “puffing.” Again, the CPU cost is minor, but it’s a cost. Any change made to a DBD will force DB2 to store it in 64-bit format. So you can do something minor, such as change the secondary extent quantity to -1 and flip the DSNZPARM MGEXTSZ to YES to take advantage of sliding secondary extents, which is another nice NFM-enabled feature.
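As a sketch of that minor change (the database and table space names here are hypothetical), a single ALTER both causes DB2 to rewrite the changed DBD in 64-bit format and, with MGEXTSZ set to YES, enables sliding secondary extents:

```sql
-- Hypothetical object names. SECQTY -1 tells DB2 to size secondary
-- extents itself using the sliding-scale algorithm, which takes
-- effect when the DSNZPARM MGEXTSZ is set to YES.
ALTER TABLESPACE MYDB.MYTS SECQTY -1;
```

Because any DBD change triggers the 64-bit conversion, this one ALTER accomplishes both goals at once.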
Let’s go back to the idea of reducing I/O. A new DSNZPARM keyword, DSVCI, lets DB2 use 4KB, 8KB, and 16KB CIs that match the DB2 page sizes. If you have an 8KB page, you get an 8KB control interval. When you were still running V7, DB2 had to chain its I/Os whenever the page size was greater than 4KB. For example, if a page was defined as 8KB, DB2 did two chained 4KB reads to get the page into the buffers; if a page was defined as 32KB, DB2 did eight chained 4KB reads.
With DSVCI set to YES, the VSAM CI size will be the same as the page size, with the exception of a 32KB page. If you have a page size of 8KB, you’ll have a CI size of 8KB and a physical block size of 8KB; a page size of 16KB will have a CI size of 16KB and a physical block size of 16KB. However, if you have a page size of 32KB with a 32KB CI, the CI will span two physical 16KB blocks. The 32KB exception exists only for space utilization: with a 48KB track size, only one 32KB CI could fit per track if a 32KB block were used. Using 16KB blocks allows VSAM to span tracks for a 32KB CI, and it lets VSAM handle all the I/O.
Another method to reduce your CPU takes advantage of some of the new SQL being delivered in V8, but is probably the most difficult method to implement. This method isn’t available until you’re in NFM, although you can start planning what you want to use and how you want to use it long before moving to NFM.
The most significant coding change you should consider is the use of multi-row FETCH and INSERT. The CPU savings when using these new SQL statements are significant, and the same improvement and CPU savings apply to multi-row cursor UPDATE and DELETE. Multi-row processing is also included in the new DSNTEP4 (which replaces DSNTEP2 when you get to NFM) and DSNTIAUL.
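A sketch of what rowset processing looks like (the table, cursor, and host-variable array names are illustrative): the cursor is declared with rowset positioning, and each FETCH returns up to 100 rows into host-variable arrays, so the application crosses into DB2 once per rowset instead of once per row:

```sql
-- Hypothetical table and host-variable arrays (e.g., arrays of
-- 100 elements in the host program). One FETCH returns a rowset.
DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
  SELECT ACCT_ID, BALANCE FROM ACCOUNTS;

OPEN C1;
FETCH NEXT ROWSET FROM C1 FOR 100 ROWS
  INTO :ACCT-ARRAY, :BAL-ARRAY;

-- Multi-row INSERT works similarly: one statement, many rows.
INSERT INTO ACCOUNTS (ACCT_ID, BALANCE)
  VALUES (:ACCT-ARRAY, :BAL-ARRAY)
  FOR 100 ROWS;
```

The savings come from reducing the number of application-to-DB2 round trips, which is why the benefit grows with the rowset size.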
Star joins using in-memory work files and Materialized Query Tables (MQTs) can also help reduce your CPU time. AFTER trigger processing and its CPU utilization have been improved by eliminating the need for work files in certain situations. There’s even a slight CPU improvement when using REXX. Even changes such as allowing multiple DISTINCTs in a single SQL statement, support for multi-column predicates in a sort merge join, and backward index scan can help reduce your CPU usage.
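For example, the multiple-DISTINCT support means a statement like the following (the ORDERS table and its columns are hypothetical) no longer has to be split into separate queries:

```sql
-- Before V8 NFM, only one DISTINCT was allowed per statement;
-- now several can appear at once.
SELECT COUNT(DISTINCT CUST_ID) AS CUSTOMERS,
       COUNT(DISTINCT PROD_ID) AS PRODUCTS
FROM ORDERS;
```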
Something else available only in NFM is the ability to combine SELECT INTO, ORDER BY, and FETCH FIRST to reduce the CPU cost of finding the first qualifying row.
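A sketch of that combination (table and host-variable names are illustrative), which replaces the old open-cursor/fetch/close sequence for retrieving a single top row:

```sql
-- One statement finds the highest-balance row; previously this
-- required declaring, opening, fetching from, and closing a cursor.
SELECT ACCT_ID, BALANCE
  INTO :HV-ACCT-ID, :HV-BALANCE
  FROM ACCOUNTS
 ORDER BY BALANCE DESC
 FETCH FIRST 1 ROW ONLY;
```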
The Nice Thing About Maintenance
Maintenance is critical to your success running DB2 V8 and, in some cases, to keeping those CPU numbers low. You should have a solid maintenance strategy, preferably using Recommended Service Upgrade (RSU) and Consolidated Service Test (CST), and attempt to stay as current on maintenance as possible. If you have, the following PTF may already be applied to your DB2 subsystem: APAR PK28561, “Accounting Package Detail,” was closed back in September 2006, and its PTF, UK18090, became available in October 2006.
This APAR can reduce the amount of CPU DB2 uses collecting package accounting information. Many customers run Accounting Class 1, 2, 3, 7, and 8 traces as a standard practice. Prior to PK28561, V8 added significant new detail to the Class 7 trace record, more specifically to IFCID 239: lock manager, buffer manager, and SQL statistics accumulated at the package level. The idea, excellent in theory, was to make IFCID 239 more useful. Unfortunately, it added an unacceptable amount of CPU to the process. IBM still wanted to give you the ability to gather this invaluable information, so this APAR moves the new detail to a different trace class, letting you choose whether to incur the additional CPU. After this APAR, an Accounting Class 10 trace must be active before the new information is recorded in IFCID 239; Monitor Class 10 accomplishes the same thing.
You still have to turn on Accounting Class 7 or Class 8 before anything will be recorded by Class 10; turning on only Class 10 (no 7 or 8) will not cause IFCID 239 to be written. You should turn on Classes 7 and 8 so you have package-level accounting information. If you need more detail for problem determination, then also turn on Class 10 (in addition to 7 and 8) while you’re trying to solve the problem. Leaving Class 10 inactive reduces the amount of CPU DB2 uses collecting package accounting information, so if you think you’re using too much CPU for the accounting traces, make sure this APAR is on. Be aware, however, that any homegrown tasks that depend on the pre-APAR version of IFCID 239 in the Class 7 or 8 records may no longer give you the results you expect after applying this APAR. Also, if you’re an OMEGAMON fan, there are a couple of related APARs you may want to check out: PK31295 and PK33459.
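Assuming the APAR is applied, the trace setup described above looks something like this from the console (the hyphen is the default subsystem command prefix; yours may differ):

```
-START TRACE(ACCTG) CLASS(1,2,3,7,8)
-START TRACE(ACCTG) CLASS(10)
-STOP TRACE(ACCTG) CLASS(10)
```

The first command is the everyday package-level accounting setup; the second adds the detailed IFCID 239 data only while you’re chasing a problem, and the third drops the extra CPU cost when you’re done.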
So, it’s possible, more likely probable, that you will experience some CPU regression with V8 CM, but there are ways to control it and even reduce your CPU usage. However, once you move to NFM, you should start to see a decrease in CPU used. You also should experience a CPU improvement again when you upgrade to DB2 9 CM, and even more significant CPU improvements in DB2 9 NFM. In both DB2 V8 and DB2 9 NFM, the more new functions used, the greater the CPU savings. You’re going to be quite pleased by the improvements in DB2 9. Z