
• Number of LSR pools defined

• Buffer pool monopolization

• Number of strings required

• Maximum key size.

The return on investment from tuning these areas will vary by installation. Some of the changes are minor, but the goal is a well-tuned system: if you’re going to work on the LSR pools to make them optimal, fix everything you can fix.

Buffer Fragmentation

Buffer fragmentation is common when using LSR pools in CICS; it occurs when the file CISZ is smaller than the CICS LSR pool buffer used for the I/O operation. It’s common because CICS uses only 11 buffer sizes to handle the I/O requests from VSAM files, while VSAM offers 28 different CI sizes. So, buffer fragmentation is inevitable unless you limit the VSAM cluster definitions to the sizes CICS LSR supports. The result is extra virtual and real storage consumed on every I/O operation, beyond what the actual CISZ requires.

There are two basic types of buffer fragmentation:

1. Buffer fragmentation that results from selecting a CISZ that isn’t one of the 11 CICS buffer sizes (such as selecting a CISZ of 1.5KB but having to use a 2.0KB buffer)

2. Buffer fragmentation that results from not having defined a particular buffer size, forcing a CISZ into a larger buffer than required, such as the previous example’s 1.5KB CISZ using a 4.0KB buffer because no 2.0KB buffer was defined.
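Both cases can be worked out in a few lines of Python. This is a minimal sketch, assuming the standard 11 CICS LSR buffer sizes (.5KB, 1.0KB, 2.0KB, 4.0KB, 8.0KB, then 4.0KB steps up to 32.0KB); verify the list against your CICS release:

# Minimal sketch of the two fragmentation types described above.
KB = 1024
ALL_LSR_SIZES = [KB // 2, 1 * KB, 2 * KB, 4 * KB, 8 * KB, 12 * KB,
                 16 * KB, 20 * KB, 24 * KB, 28 * KB, 32 * KB]

def lsr_buffer_for(cisz, defined_sizes):
    """Smallest defined LSR buffer size that can hold a CI of cisz bytes."""
    return min(s for s in defined_sizes if s >= cisz)

cisz = int(1.5 * KB)

# Type 1: 1.5KB isn't one of the 11 buffer sizes, so even with every
# size defined the CI lands in a 2.0KB buffer, fragmenting 0.5KB.
print(lsr_buffer_for(cisz, ALL_LSR_SIZES) // KB)      # 2 (KB)

# Type 2: with no 2.0KB buffers defined in the pool, the same CI is
# forced into a 4.0KB buffer, fragmenting 2.5KB.
pool_sizes = [s for s in ALL_LSR_SIZES if s != 2 * KB]
print(lsr_buffer_for(cisz, pool_sizes) // KB)         # 4 (KB)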

The second type is definitely an area that requires attention. An interesting CISZ for a non-VSAM/E (non-Extended VSAM) file is 18KB. This CISZ provides the best track usage (three 18KB CIs per track, or 810KB per cylinder) and usually the most records per cylinder. However, it causes buffer fragmentation because the appropriate CICS LSR buffer is 20.0KB, a fragmentation of 2.0KB. So there’s a choice to be made between disk space and buffer fragmentation; here, the fragmentation is justified by the disk space savings this CISZ provides. The 18.0KB size isn’t as good a selection for Extended VSAM (VSAM/E) files because the space allocations aren’t as favorable. VSAM/E adds a 32-byte trailer to each physical record, so the three 6.0KB physical records that hold the 18.0KB CISZ no longer pack as tightly on the track. This lowers utilization to 720KB per cylinder, an 11 percent reduction in disk space use. This is an important point to remember when converting from VSAM to VSAM/E.
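The disk space tradeoff can be verified directly. A small sketch, assuming nine 6.0KB physical records per 3390 track for the plain file and eight once the 32-byte suffix is added (the per-track figures implied by the cylinder numbers above):

KB = 1024
TRACKS_PER_CYL = 15      # 3390 geometry
BLOCKS_PER_CI = 3        # one 18KB CI = three 6KB physical records

# Blocks per track inferred from the article's figures: the 32-byte
# VSAM/E suffix pushes each physical record past 6KB, dropping the
# 3390 from nine to eight blocks per track.
for label, blocks_per_track in (("VSAM  ", 9), ("VSAM/E", 8)):
    cis_per_cyl = blocks_per_track * TRACKS_PER_CYL // BLOCKS_PER_CI
    print(f"{label}: {cis_per_cyl * 18}KB per cylinder")

# VSAM  : 810KB per cylinder
# VSAM/E: 720KB per cylinder (an 11 percent reduction)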

Overcoming Problems

One of the major problems in locating buffer fragmentation is the number of VSAM files you can assign to a particular pool. An option some installations use to eliminate buffer fragmentation is simply to assign buffers to all 11 sizes. The problem with this alternative is that you may wind up allocating virtual storage to unused buffers, storage that could have been used to improve other buffers in the pool. So, how many buffers do you allocate to otherwise unused sizes without wasting too much storage? Some organizations are willing to allocate anywhere from three to 25 or more buffers to cover this condition.

Consider a hypothetical situation. Suppose you have no files that use a 16.0KB buffer and you have 500 20.0KB buffers defined. Assigning 20 16.0KB buffers results in 320KB of storage sitting unused day after day. If you never have a 16.0KB CISZ, this is simply wasted storage.

Wouldn’t it be better to take that 320KB of wasted storage and convert it to 16 20.0KB buffers, increasing the total to 516 buffers? The initial advantage is that the storage is used, immediately benefiting the 20KB buffers. If a 16.0KB file is later opened, part of its allocation is already in the 20.0KB pool. That isn’t the originally planned allocation, but so what? It was an estimate anyway. The important thing is to identify the occurrence and take action. There’s one exception: always define at least three 32.0KB buffers (the minimum allocation) as a safety valve.
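The arithmetic behind the example, as a quick sketch:

KB = 1024
unused = 20 * 16 * KB               # 20 16.0KB buffers that never get used
extra_20k = unused // (20 * KB)     # same storage expressed as 20.0KB buffers
print(f"{unused // KB}KB idle -> {extra_20k} more 20.0KB buffers "
      f"(500 -> {500 + extra_20k})")
# 320KB idle -> 16 more 20.0KB buffers (500 -> 516)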

Another similar consideration involves the many VSAM CI size options and buffer allocations. It’s somewhat typical to see a 1.5KB CISZ allocated for an index CI that has a 4.0KB data size on a 3390 geometry disk drive. The LSR buffer required to process this CISZ is 2.0KB. Is there any benefit to increasing the index CISZ so the entire buffer is used? In the case of the index component, you may benefit from raising the CISZ from 1.5KB to 2.0KB. The benefit may not be seen at the sequence set level (lowest index level for a KSDS file), but at the index set level (the second and higher levels in a KSDS file). There are several benefits that can occur from increasing the index CISZ to match the buffer size and eliminate buffer fragmentation, such as:

• The larger index record may result in fewer index set records in the file. This means fewer buffers are required to hold the entire high-level index in virtual storage.

• In some cases, a larger index set record may reduce the number of index levels a file has. The fewer index levels, the fewer index reads required to locate the data—reducing the CPU overhead to locate a record in LSR.

• In some cases, having a larger index CI can reduce or eliminate potential key compression problems that result in lost disk space and premature CA splits. Key compression problems aren’t easy to identify.

The major problem with small data CI sizes is that they tend to yield lower track utilization and usually result in larger index CI sizes. So, for the data component, if you have a CISZ of 2.5KB that requires a 4.0KB LSR buffer to process, you’ll get better track utilization by raising the data CISZ to 4.0KB and will probably also achieve a lower index CI size. There are other factors to consider in this decision, such as the record size and the need to have only one record per CI to avoid exclusive control conflicts.
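Using the same kind of helper sketched earlier (again assuming the standard 11 LSR buffer sizes), the data-component example works out like this:

KB = 1024
pool_sizes = [KB // 2, 1 * KB, 2 * KB, 4 * KB, 8 * KB, 12 * KB,
              16 * KB, 20 * KB, 24 * KB, 28 * KB, 32 * KB]

for cisz in (int(2.5 * KB), 4 * KB):
    buf = min(s for s in pool_sizes if s >= cisz)   # smallest buffer that fits
    print(f"data CISZ {cisz / KB:.1f}KB -> {buf / KB:.1f}KB buffer, "
          f"{(buf - cisz) / KB:.1f}KB fragmented")

# data CISZ 2.5KB -> 4.0KB buffer, 1.5KB fragmented
# data CISZ 4.0KB -> 4.0KB buffer, 0.0KB fragmented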

A small additional saving is available in the LSR pool definition: the number of buffers allocated for the .5KB, 1.0KB, and 2.0KB sizes. Buffers are allocated in 4.0KB increments on a 4.0KB boundary, so you could inadvertently request a multiple of buffers that leaves part of the buffer storage itself fragmented. When requesting storage for these particular sizes, ensure the total number of buffers is a multiple of:

• 0.5KB – multiple of eight

• 1.0KB – multiple of four

• 2.0KB – multiple of two.

For example, imagine that you requested 25 1.0KB buffers. This would require 25KB to accommodate the buffers. However, the buffers are allocated in page increments (4.0KB). The actual allocation would be 28KB, or space for an additional three buffers without increasing the total allocated storage.
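A sketch of the rounding, assuming buffers of one size are packed contiguously into 4.0KB pages as described above:

import math

KB = 1024
PAGE = 4 * KB

def buffers_for_free(requested, size):
    """Buffers that fit in the pages needed to hold `requested` buffers."""
    pages = math.ceil(requested * size / PAGE)
    return pages * PAGE // size

print(buffers_for_free(25, 1 * KB))   # 28: the 7 pages hold 3 extra buffers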

Hashing’s Impact

CICS allows up to eight separate LSR pools in z/OS and 15 in z/VSE, identified by a sequential number from one to eight (or one to 15 under z/VSE). When LSR support was first added to CICS, the search for a particular CI in the buffers was sequential. As you added more buffers to a pool, the search grew longer and could use more CPU than performing an actual physical I/O operation. So, multiple pools were justified to distribute the search cost across several pools, reducing the average search cost. However, IBM improved the LSR search algorithm in the early ’90s (later for z/VSE), changing it from a sequential search to a hashing technique. Hashing provides a relatively even search cost regardless of pool size. There’s still some additional overhead possible when handling synonyms (CIs that hash to the same place), but this is usually a minor cost. We can now allocate large buffer pools without the sequential search penalty. So, the question is, do we need multiple pools, and what are the benefits?
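The algorithmic difference can be sketched conceptually. This is an illustration of the technique, not CICS’s actual internals:

# Conceptual contrast only; not CICS's actual buffer management code.
def sequential_find(buffers, ci_id):
    # Cost grows with pool size: every buffer may be examined.
    for buf in buffers:
        if buf == ci_id:
            return buf
    return None

def hashed_find(buckets, ci_id):
    # Cost stays near-constant: only the CI's hash bucket is searched,
    # plus any synonyms (CIs that hash to the same bucket).
    for buf in buckets[hash(ci_id) % len(buckets)]:
        if buf == ci_id:
            return buf
    return None

buckets = [[] for _ in range(8)]
for ci in ("CI-1", "CI-2", "CI-3"):
    buckets[hash(ci) % len(buckets)].append(ci)
print(hashed_find(buckets, "CI-2"))   # found by probing one bucket only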

As a result of the new search algorithm and the capacity to allocate up to 32K buffers per buffer size, the need for more than one pool has been reduced. Consolidating multiple buffer pools into one can be advantageous because a larger number of buffers becomes available to all pool participants. Access patterns to disk files may not be consistent: some files are heavily accessed while others are lightly used, and the patterns can vary during the day. Files heavily accessed in the morning may not be as heavily used in the afternoon. The same concept can be seen in the larger capacity disk drives available today. There was a fear many years ago that, as you increased the disk storage capacity under one access mechanism, there would be increased contention, but modern disk technology includes cache storage that has been instrumental in reducing it. So, if the disk volumes can tolerate varied access patterns, wouldn’t the same be true for the buffers assigned?

There’s a tendency to add “a few more buffers” when defining a pool as a kind of safety valve. For example, you may determine that, in a particular pool, you need 400 4.0KB buffers. However, you may allocate 425 or 450 buffers to allow for growth or to give you a small fudge factor. If you do this across the different buffer pools, you have overallocated the pools. Consolidating these pools gathers all the buffers and fudge factors into one large pool and lets VSAM allocate the buffers to the files that need it.
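A small sketch of how the per-pool fudge factors add up; the pool counts here are hypothetical:

# Hypothetical per-pool allocations: (buffers needed, buffers defined).
pools = [(400, 450), (250, 275), (150, 175)]

needed = sum(n for n, _ in pools)
defined = sum(d for _, d in pools)
print(f"defined {defined}, needed {needed}: {defined - needed} "
      f"fudge-factor buffers scattered across {len(pools)} pools "
      f"could serve any file in one consolidated pool")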

Buffer Monopolization

Another reason for having additional LSR pools is to segregate files that monopolize an LSR pool buffer. The emphasis on the word buffer is intentional: a file monopolizes a buffer in a pool, not the pool itself. However, what CICS statistic do you use to determine that a file is monopolizing a particular LSR buffer? CICS file statistics concern the number of I/O requests, not the number of buffers used. One could conclude that I/O requests equal the number of buffers, but that could be wrong. Consider a highly accessed key-point file that consists of 100 CIs. This file could reflect millions of requests a day but would use only 100 buffers, maximum. Another example is a heavily browsed file: the number of browse requests doesn’t reflect the look-aside hits you had in the same CI before needing another buffer for the next CI.
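The point that request counts overstate buffer use can be shown with a toy simulation (hypothetical numbers):

import random

# Toy model: a 100-CI file receiving a million requests still touches
# at most 100 distinct buffers, no matter how busy it looks.
CI_COUNT = 100
requests = (random.randrange(CI_COUNT) for _ in range(1_000_000))
print(f"distinct buffers used: {len(set(requests))} (max {CI_COUNT})")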

Most performance monitors don’t reflect the number of buffers a file actually uses; this is the information needed to determine which file, if any, is monopolizing a buffer in a pool. However, before concluding that a file is monopolizing a particular buffer, there’s another step: define the word “monopolizing.” What do you consider a monopolizing percentage? 90 percent? 70? 50?

We need to review certain concepts before we can address LSR buffer monopolization.

First, not all files in a CICS system are accessed simultaneously. In fact, studies have shown that about 20 percent of the defined files in a CICS region are in continuous use. This resembles the “old” inventory 80/20 rule, where 20 percent of the inventory represents 80 percent of the activity. That 20 percent may represent your installation’s most important files; they’re your “bread and butter” files.

Second, LSR is a means of sharing resources. The algorithm CICS uses to allocate buffers is an LRU formula. Buffers that contain data are holding the most recently referenced CIs. Your bread and butter files probably populate most of these buffers. Don’t you need fast access to provide good response times for these files? So, what’s wrong with having them monopolize the buffer pools?

Robin Hood in Reverse

LSR tuning is really a Robin Hood story in reverse. In LSR, you use the resources of lightly or intermediately used files to better support higher activity files. In other words, “you rob from the poor to give to the rich.” You take resources from low-activity files to give to high-activity files. If your response times are good, why should you worry because you had to do an I/O for one of the 80 percent of your files that has less activity? Their participation in LSR is to contribute resources. It sounds sort of cruel, but that’s the truth. Or, you can think of it another way. If you had placed this low-activity file into NSR to “protect” its resources, you’d have virtual storage allocated to the file at the expense of being able to provide this storage to your bread and butter files.

So, when is buffer monopolization a problem? The best answer: when you aren’t receiving good response times from your important files even while achieving a high look-aside hit ratio. This condition usually manifests itself as one or two files controlling more than 80 percent of the buffers, affecting other high-activity or important files. A possible solution is to add more buffers even though you may already be achieving the look-aside hit ratio. The idea is that, eventually, the high-activity files in the buffer pool will meet their buffer requirements and allow other, lower activity files to acquire and hold buffers. The other possibility is to move one or both of these files to their own pool.

There may be times when you want to place a file in its own separate pool to ensure the entire file is in virtual storage (this is the equivalent of a data table). This concept is used when you have a data table candidate, but due to the amount of output I/O, the data table may not be a good performer. The quoted I/O objective is that a data table should be used for a file that has 90 percent or more read operations. So, if you have a file that has 60 percent read vs. write operations, the file may not perform as well in a data table as in LSR. This type of file is usually highly accessed and small enough to be a data table candidate. However, due to the high write-to-read ratio, it may not be a good candidate for a data table. So assigning this particular file to its own LSR pool may be justified.
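A sketch of the screening arithmetic, using the 90 percent figure quoted above:

def data_table_candidate(reads, writes, threshold=0.90):
    """True if the read ratio meets the quoted data-table objective."""
    return reads / (reads + writes) >= threshold

print(data_table_candidate(900_000, 100_000))  # True:  90% reads
print(data_table_candidate(600_000, 400_000))  # False: 60% reads; a
                                               # private LSR pool may fit better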

How Many Strings?

One final situation that may require defining more pools is a pool that needs more than the maximum 255 strings that can be assigned to it. This can happen, but be sure you’re getting good look-aside hit ratios before exercising this option. The better the look-aside, the faster each string is released.

Another area that requires attention is the number of strings assigned to an LSR pool. The number of strings required depends on the pool activity and the type of operations being performed. The first consideration is the activity against the files. Some activity requires only one string, but activity that involves AIX files will require additional strings, especially if it’s an upgrade AIX. Certain I/O requests, such as a direct read, release the string immediately. However, browse or read-for-update operations hold the string for a longer period of time.

The number of strings required is also affected by how well the look-aside hit ratio is working: the better the ratio, the faster a string is released for use by another file. The number of strings assigned is generally overallocated. So, before increasing the number of strings in a pool, ensure you’re attaining the look-aside hit ratio for both the index and data components. The higher the look-aside hit ratio, the fewer physical I/Os an operation requires. CICS provides information about how many strings are allocated, how many are currently in use, the peak number of strings used, the number of times a task had to wait for a string, and how many tasks are currently waiting for one. You want zero waits on strings. The objective is a peak string usage of around 40 to 60 percent of the total allocated.
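A sketch that applies the two objectives above (zero string waits, peak usage around 40 to 60 percent of allocated) to statistics pulled from CICS; the field names here are illustrative, not actual CICS statistics names:

# Field names are illustrative, not CICS's actual statistics names.
stats = {"allocated": 100, "peak_in_use": 85, "string_waits": 12}

peak_pct = 100 * stats["peak_in_use"] / stats["allocated"]
if stats["string_waits"] > 0:
    print("string waits occurred: check look-aside ratios, then add strings")
elif not 40 <= peak_pct <= 60:
    print(f"peak usage {peak_pct:.0f}%: outside the 40-60% objective")
else:
    print("string allocation looks healthy")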

There are two types of string waits associated with files assigned to LSR. The first is caused by having insufficient strings assigned to the entire pool to handle the concurrent requests that can occur at any given point; if the look-aside objectives are being met, add more strings. The second is associated with the number of strings that can be assigned to a file at any moment. This type of short-on-strings condition is reflected at the file level, not at the LSR pool level. The number of strings specified for a file represents the maximum number of concurrent requests allowed for that file at any given moment. As with all short-on-strings conditions, ensure the proper look-aside hit ratios are being achieved for the associated buffer sizes. If not, adjust the buffers first to improve the look-aside hit ratio.

Key Length

Associated with the number of strings specified is the key length assigned to the pool. Fixing the specified pool key length is a minor point because not much storage is involved. The key length must be at least as large as the longest key of any file opened in the pool. Many installations simply specify a key length of 255 bytes. In most installations this wastes virtual storage, because keys longer than 128 bytes are rare. The key length is used during each I/O operation in which a string is involved, so if both strings and key length are over-allocated, some of the resulting storage is wasted. Reduce the key length to 128 or lower and use the saved storage for additional buffers.
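The saving is modest, as the text says. A sketch, assuming one key area of KEYLENGTH bytes per string (a simplification of the actual control blocks):

# Simplified model: one key area of `keylength` bytes per string.
def key_storage(strings, keylength):
    return strings * keylength

saved = key_storage(255, 255) - key_storage(255, 128)
print(f"{saved} bytes freed by dropping KEYLENGTH from 255 to 128")
# 32385 bytes freed; modest, but it can go toward buffers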

Conclusion

In closing, it’s probably much easier to identify which files aren’t candidates for LSR. As previously mentioned, Share Options 4 files should never be assigned to LSR because there’s little look-aside that can occur on these files. Keeping a Share Options 4 file in an LSR pool tends to flush the buffers because the assigned buffer becomes the most recently used and won’t be reused. There are some highly active third-party package control files that are specified Share Options 4 and can have devastating effects on your LSR pool statistics.

Another type of file is one that doesn’t follow command-level programming guidelines. An example is a non-RLS file that receives a read for update request in the middle of a browse operation (READNEXT). Although this function works while the file is assigned to NSR, in LSR (non-RLS) the task will hang and can cause problems that affect the entire system. In NSR, a second string is assigned to handle the read for update request and the CI is reread into a new buffer assigned to the second string. The CI would appear twice in storage. Another file type that could cause problems for an LSR pool is one that has many CA splits because the buffers aren’t reused. These types of files should be tuned separately to reduce the number of CA splits if you want to keep them in LSR.

Note that CA splits under LSR don’t tie up the CICS Task Control Blocks (TCBs) while performing the CA split. LSR uses synchronous file requests and the UPAD exit to handle CI and CA splits. This method doesn’t tie up either the subtask (CO) or main task (QR) TCBs. VSAM takes the UPAD exit while waiting for a physical I/O to complete, allowing for processing to continue, and lets CICS dispatch other work. NSR file control requests are done asynchronously and cause the CICS main task (QR TCB) or subtask (CO TCB) to wait during the split. It’s best to activate VSAM subtasks if you have NSR files prone to CI/CA splits. Effective use of the VSAM subtask (CO TCB) requires that CICS be running on a multi-processor. CICS supports transaction isolation for files defined in LSR but not in NSR.

This article reviewed many concepts on how to improve the performance of your LSR pool that can result in better response times for your transactions in an online environment. The better the look-aside hit ratio, the less I/O overhead the system is going to have, and the better use of CPU, storage, and I/O resources for applications running under CICS. Tuning LSR pools is easy, but it requires occasional review once you do your initial tuning.
