Modern DASD subsystems can be configured with huge front-end controller caches, and we’ve seen sites exploit this to the max with great results. We’ve seen average synchronous I/O response times of about 2.7ms, and in some situations as low as 1ms; many believe that should be the standard for high-volume databases. At many sites, however, response times average between 4ms and 7ms, with peaks of 12ms to 14ms. These can be situations where there isn’t enough DASD cache, or where an operating system bottleneck, such as high I/O queue time, is the culprit.
Modern, high-performance DASD subsystems don’t necessarily use faster disk drives than traditional subsystems. The fast response times come from a combination of distributing data across multiple devices in a Redundant Array of Inexpensive Disks (RAID) configuration and large stores of cache at the DASD subsystem level. If your applications drive high volumes of random I/O, plenty of DASD cache may be a good idea.
While some cache at the DASD subsystem level can be extremely effective in improving I/O response times, it’s probably the least beneficial of the caching techniques overall. You should have a lot of DASD cache, but caching at the DASD level doesn’t eliminate the I/O at the operating system level, while all the other caching techniques highlighted in this article do. Also, if you’re going to use a lot of data server cache (i.e., DB2 buffers), the DASD cache and the data server buffers can often be redundant, holding the same pages twice.
Caching in DB2
When it comes to DB2 performance, using the available cache can prove quite important. DB2 buffer pools are areas of virtual storage that temporarily hold pages of table spaces and indexes. When a program accesses a row of a table, DB2 places the page containing that row in a buffer. When a program changes a row, DB2 must (eventually) write the changed page in the buffer back to disk, normally at a DB2 system checkpoint or when a write threshold is reached.
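The deferred-write behavior described above can be sketched as a toy model: changed pages accumulate in the buffer and are only externalized when a checkpoint or a write threshold drives them out. This is an illustrative Python sketch, not DB2 internals; the class and the dirty-page-count threshold are our own simplifications.

```python
# Illustrative deferred-write model: changed pages stay in the pool and
# are written to disk only at a checkpoint or when a write threshold
# (here, a simple count of dirty pages) is crossed. Names and the
# threshold mechanism are our simplification, not DB2's.
class DeferredWriter:
    def __init__(self, write_threshold=3):
        self.dirty = set()                  # dirty pages not yet on disk
        self.write_threshold = write_threshold
        self.writes = 0                     # pages externalized so far

    def change_page(self, page_id):
        self.dirty.add(page_id)             # update happens in the buffer, no I/O yet
        if len(self.dirty) >= self.write_threshold:
            self.flush()                    # threshold reached: write dirty pages

    def flush(self):                        # also driven by a system checkpoint
        self.writes += len(self.dirty)
        self.dirty.clear()

w = DeferredWriter(write_threshold=3)
for p in (1, 2):
    w.change_page(p)
print(w.writes)   # 0 -- updates buffered, nothing externalized yet
w.change_page(3)
print(w.writes)   # 3 -- threshold crossed, pages written together
```

The point of deferring writes this way is that many changes to the same page cost only one eventual disk write.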
The way DB2 buffer pools work is fairly simple by design, but tuning these operations can greatly impact application performance. The data manager issues GETPAGE requests to the buffer manager, which, ideally, can satisfy the request from the buffer pool instead of having to retrieve the page from disk. We often trade CPU for I/O to efficiently manage our buffer pools. Buffer pools are maintained per subsystem, but individual buffer pool design and use should be decided at the object level and, in some cases, also by application.
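The GETPAGE flow above amounts to a cache lookup, and the fraction of GETPAGEs satisfied from the pool (the hit ratio) is what tuning tries to maximize. This is a minimal, assumed Python model; `BufferPool` and `getpage` here are our illustrative names, and the page-steal policy is deliberately naive.

```python
# Illustrative model of GETPAGE processing: a hit is satisfied from the
# pool with no I/O; a miss stands in for a synchronous read from disk.
# Not DB2 internals -- just the hit/miss accounting.
class BufferPool:
    def __init__(self, size):
        self.size = size          # maximum number of pages held
        self.pages = {}           # page_id -> page contents
        self.getpages = 0
        self.hits = 0

    def getpage(self, page_id):
        self.getpages += 1
        if page_id in self.pages:
            self.hits += 1        # satisfied from the pool: no I/O
        else:
            if len(self.pages) >= self.size:
                self.pages.pop(next(iter(self.pages)))   # steal a page (naive policy)
            self.pages[page_id] = f"data for page {page_id}"  # simulated disk read
        return self.pages[page_id]

    def hit_ratio(self):
        return self.hits / self.getpages if self.getpages else 0.0

pool = BufferPool(size=100)
for page in [1, 2, 3, 1, 2, 3, 4]:   # re-referenced pages hit the pool
    pool.getpage(page)
print(f"hit ratio: {pool.hit_ratio():.2f}")   # 3 hits out of 7 GETPAGEs: 0.43
```

The same arithmetic, GETPAGEs versus pages read from disk, is what you'd compute from DB2 statistics when evaluating a pool.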
The virtual buffer pools contain three types of pages. Available pages sit on an available queue and can be stolen. In-use pages are currently being used by a process and are unavailable for stealing. Updated pages aren’t in use, but they’re also unavailable for stealing; these are dirty pages waiting to be externalized to disk.
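The three page states and their steal rules can be captured in a few lines. This is a simplified sketch under our own naming; the state transitions shown (externalizing a dirty page returns it to the available queue) follow the description above, not DB2's actual latching and queue management.

```python
# Illustrative sketch of the three buffer page states described above.
# State names and transitions are a simplification, not DB2 internals.
AVAILABLE, IN_USE, UPDATED = "available", "in-use", "updated"

class Page:
    def __init__(self, page_id):
        self.page_id = page_id
        self.state = AVAILABLE    # new pages start on the available queue

    def stealable(self):
        # Only available pages may be stolen; in-use pages and updated
        # (dirty) pages must not be reused yet.
        return self.state == AVAILABLE

def externalize(page):
    # Writing a dirty page to disk returns it to the available queue.
    if page.state == UPDATED:
        page.state = AVAILABLE

p = Page(42)
p.state = UPDATED
assert not p.stealable()      # dirty: waiting to be externalized
externalize(p)
assert p.stealable()          # clean again: back on the available queue
```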
You can have up to 80 virtual buffer pools: up to 50 4K-page buffer pools (BP0–BP49), up to 10 32K-page buffer pools (BP32K, BP32K1–BP32K9), up to 10 8K-page buffer pools (BP8K0–BP8K9), and up to 10 16K-page buffer pools (BP16K0–BP16K9). The physical memory available on your system limits buffer pool size, with a maximum total size for all buffer pools of 1TB. It doesn’t take additional resources or CPU to search a large pool vs. a small pool, and DB2 can manage large buffer pools well. However, if the pools exceed the system’s available real memory, the system will start paging buffer pool storage to disk, which can severely impact performance. Also, larger buffer pools don’t necessarily mean better performance.
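The 80-pool limit is just the sum of the four name ranges (50 + 10 + 10 + 10). A quick sketch enumerating the names, following the scheme in the text:

```python
# Enumerate the buffer pool names described above: 50 + 10 + 10 + 10 = 80.
pools = (
    [f"BP{i}" for i in range(50)]                       # 4K pools: BP0-BP49
    + ["BP32K"] + [f"BP32K{i}" for i in range(1, 10)]   # 32K pools: BP32K, BP32K1-BP32K9
    + [f"BP8K{i}" for i in range(10)]                   # 8K pools: BP8K0-BP8K9
    + [f"BP16K{i}" for i in range(10)]                  # 16K pools: BP16K0-BP16K9
)
print(len(pools))   # 80
```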
Best Utilization of DB2’s Buffer Pools