DB2 & IMS

Top-10 Ways to Waste CPU Using DB2

During the past several years, we’ve worked with many companies to design high-performance databases and applications, and have done extensive testing and research on ways to minimize I/O and CPU. Many of those applications and databases are faster than we could have imagined; however, they’re the exception. Too often, shops have critically CPU-constrained environments, and most have one thing in common: They’re wasting CPU. There are some common ways to do this, and we’ll examine how you can safely and effectively reduce CPU usage in a DB2 z/OS environment.

Of course, every environment is different and processing needs will vary. However, there are ways to reduce CPU usage through some practical performance tuning efforts and better design efforts early in development. Here are the top-10 ways, ordered in terms of CPU consumption impact, that customers waste enormous amounts of CPU unnecessarily (and unknowingly):

  • Overuse of DB2 compression
  • Excessive locking
  • Processing of data in an SQL statement
  • Overuse of DISTINCT
  • Inefficient predicates
  • Inappropriate BIND parameters
  • Blind updates
  • Random distribution/access
  • Too many indexes
  • Blackbox I/O modules.

We’ll look at each issue, discuss in detail how it wastes CPU, and present ways you can fix the problems.

10 Overuse of DB2 compression

DB2 compression is often used everywhere without any verification that it’s providing a benefit rather than a detriment; many shops simply have a standard to compress everything. Benefits of DB2 compression include:

  • More rows per page
  • More data in memory, less I/O
  • Less time needed for image copies
  • Less DASD for storage
  • More benefit from sequential and dynamic prefetch.

However, we also must consider the negatives. With more rows per page, we may create more contention because a single page lock now locks more rows at once. Also, some data doesn’t compress well: short rows, binary data, encrypted data, and data with few repeating strings. Long rows may not be the best candidates, either. If you have a 4K page size and a row of around 2,500 bytes, you’re getting only one row per page to begin with. If you compress, you still may have only one row on the page. Consider moving to a larger page size and then compressing.
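
For example, that 2,500-byte row could be moved to a tablespace with an 8K page size before compressing, so the compressed rows actually share a page. Here’s a minimal DDL sketch with hypothetical object names (TSORDER, DBSALES, SGSALES); BP8K0 is one of the standard 8K buffer pools and must be active on the subsystem:

  -- Recreate the tablespace with 8K pages (page size in DB2 for z/OS
  -- is determined by the buffer pool) and turn on compression.
  CREATE TABLESPACE TSORDER
    IN DBSALES
    USING STOGROUP SGSALES
    SEGSIZE 32
    BUFFERPOOL BP8K0     -- 8K pages instead of the default 4K
    COMPRESS YES;        -- dictionary is built at LOAD or REORG time

With 8K pages, even if compression only cuts the row roughly in half, several rows now fit on each page instead of one.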

When a row is compressed, it’s considered varying length. This can lead to row relocations and will cause increases in the NEARINDREF and FARINDREF numbers. This means rows are being relocated to a different page during the updates, resulting in more I/O to get the same data, and more frequent REORGs. The varying length row also may result in more logging for updates, as DB2 will log from the first byte changed to the end of the row for a varying length row that changes in length as a result of the update.
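
One way to keep an eye on relocated rows, assuming RUNSTATS is reasonably current, is a quick query against SYSIBM.SYSTABLEPART; any thresholds you act on are shop-specific:

  -- List partitions with relocated rows as a REORG indicator.
  SELECT DBNAME, TSNAME, PARTITION, NEARINDREF, FARINDREF, CARDF
    FROM SYSIBM.SYSTABLEPART
   WHERE NEARINDREF > 0
      OR FARINDREF > 0
   ORDER BY FARINDREF DESC;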

The pros definitely make DB2 compression worth the effort, and it can provide significant performance benefits and space savings. We must ensure we’re getting those benefits by monitoring the PAGESAVE and PCTROWCOMP columns in the catalog (a sample query appears at the end of this section) and by not introducing contention or relocated rows. Here are some additional considerations and suggestions for DB2 compression:

  • Version 8 (V8) indexes aren’t compressed, and that must be considered in capacity planning efforts.
  • Avoid multi-table tablespaces because the compression dictionary patterns are different and the overall compression may not be optimal.
  • A REORG will build a better dictionary than LOAD.
  • Data compression supported on older non-IBM hardware may not provide optimal benefits and can be expensive in terms of CPU.

The most important consideration is to not compress small rows. This doesn’t provide any benefit; it costs CPU and introduces additional contention problems.
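
Here’s the sample catalog query mentioned earlier, assuming RUNSTATS has been run recently enough that PCTROWCOMP and PAGESAVE are meaningful; low values flag objects where compression isn’t paying for itself:

  -- PCTROWCOMP: percentage of rows compressed (SYSTABLES)
  -- PAGESAVE:   percentage of pages saved     (SYSTABLEPART)
  SELECT T.DBNAME, T.TSNAME, T.NAME, T.PCTROWCOMP, P.PAGESAVE
    FROM SYSIBM.SYSTABLES T, SYSIBM.SYSTABLEPART P
   WHERE P.DBNAME = T.DBNAME
     AND P.TSNAME = T.TSNAME
     AND P.COMPRESS = 'Y'   -- compressed partitions only
     AND T.TYPE = 'T'       -- tables, not views or aliases
   ORDER BY P.PAGESAVE;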

9 Excessive locking

Locking isn’t free. The CPU cost can add up, not to mention hurt availability. As of V8, a lock is approximately 400 CPU instructions and 540 bytes of memory. Do your daily reports contain statistics such as those in Figure 1?
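
One common and easy win, where the application can tolerate it, is to take fewer locks on read-only work. Here’s a sketch with hypothetical table, column, and host-variable names:

  -- Read-only reporting cursor. FOR FETCH ONLY tells DB2 no updates
  -- will follow. WITH UR reads uncommitted data and takes no page or
  -- row locks at all; use it only where dirty reads are acceptable.
  -- Otherwise, binding with ISOLATION(CS) CURRENTDATA(NO) lets DB2
  -- use lock avoidance for qualifying read-only access.
  DECLARE RPT_CSR CURSOR FOR
    SELECT ACCT_ID, BAL_AMT
      FROM ACCT_HISTORY
     WHERE STMT_DATE = :HV-STMT-DATE
    FOR FETCH ONLY
    WITH UR;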
