Jul 1 ’08
Tuning CICS TS LSR Pools: The Robin Hood Theory in Reverse
This article examines several areas that should be considered when tuning Local Shared Resource (LSR) pools. The main performance objective when tuning CICS LSR pools is to improve the look-aside hit ratios, but there are other tuning areas that are generally overlooked. In this article, we will attempt to answer the questions asked in many CICS tuning classes, such as:
• How many pools should I define?
• Are there any advantages to defining multiple pools?
• How many strings should I define?
• How do I know when a file is monopolizing a pool and should be segregated into another pool?
We’ll cover how to improve the look-aside hit ratios and additional areas you can tune to improve performance of CICS LSR pools and response times. Applying these recommendations has helped some organizations improve online response time and reduce overall CPU use attributed to CICS. Given the better look-aside algorithm available in LSR, it’s best to assign most VSAM files to LSR buffering in CICS. We’ll review areas you should watch when tuning LSR files (but not general VSAM tuning or programming techniques that also could be quite helpful in improving file performance in an online environment).
CICS uses three basic buffering techniques:
• Non-Shared Resources (NSR)
• Record Level Sharing (RLS)
• Local Shared Resources (LSR).
NSR and RLS aren’t covered in this article. In the case of LSR, data sets share resources; that is, they share common buffers and control blocks. This tends to reduce the amount of resources needed to support files. In addition, look-aside occurs at all levels and isn’t limited to the index set records as in NSR. With proper buffering, you can avoid physical I/O operations entirely and obtain a 100 percent look-aside hit ratio once all the Control Intervals (CIs) have been initially loaded into the LSR buffers. Major VSAM enhancements in CICS have historically required that the data be assigned to LSR. However, some files will work when defined as NSR but not when defined as LSR. An example is a file that, in the middle of a browse operation (READNEXT), issues a read for update command without having issued the ENDBR command. In LSR, this will deadlock the request and could have negative consequences for the CICS system.
You assign a file to LSR by specifying a value of one to eight (or one to 15 in the case of z/VSE) for the parameter “LSRPOOLID” when defining the VSAM data set using CEDA in CICS. The number indicates the resource share pool to which the file belongs. Entering “NONE” places the file into NSR buffering. You can define multiple separate LSR pools in one CICS region. You can’t share resources assigned to one LSR pool with another pool. However, all files you assign to one pool share the resources defined to that pool.
In assigning resources, VSAM uses a Least Recently Used (LRU) algorithm. When a buffer is needed, VSAM assigns the oldest unreferenced buffer to the file. Contents of the buffer are overlaid. If the overlaid CI is needed again, a new buffer must be assigned and the CI physically read from the data set. So, the trick to good look-aside hit ratios is to reference the buffer often to avoid being flushed out of the pool.
Hiperspace buffers are no longer supported by z/OS; any Hiperspace definitions use real storage, causing the buffers to be moved unnecessarily in real storage to simulate an expanded-storage-to-real-storage move. These Hiperspace buffers should be redefined as part of the main LSR buffer pool. There’s one case where you may still be justified in maintaining or defining Hiperspace buffers: if you want more than 32K (32,767) buffers of a particular size in the same pool, the only way to get them is to define Hiperspace buffers for that buffer size.
LSR pools are built when the first file assigned to a particular pool is opened, whether that happens during CICS start-up or later. If the pool is built when a user transaction opens the file, creating the pool delays the response time for that particular transaction. The resources assigned to the LSR pool can be determined dynamically or statically: you can let CICS determine the amount of resources to be assigned to the pool (dynamic allocation) or prespecify the amount of resources to be assigned to the pool (static allocation).
Dynamic Pool Allocation
Dynamic LSR pool allocation is the easiest method but not necessarily the optimal one. When the first file assigned to the pool is opened, CICS will query the catalog for all files assigned to the pool and allocate the necessary resources. The amount of resources allocated depends on an installation variable called the SHARELIMIT, which determines how many resources each file will contribute to the pool. The SHARELIMIT is a percentage that defaults to 50 percent, but you can increase or decrease it in the LSRPOOL definition.
The CICS CEDA definition uses the number of strings assigned to the data set to determine each file’s resource contribution to the pool. The actual process of how the buffers and strings are dynamically allocated can be found in the CICS Performance Guide. Another advantage of having the dynamic capability to create LSR pools is that it provides a fallback position for when files are added to the system with a different LSR pool id from the ones being used. In this case, CICS will allow the files to open and be accessed without having to recycle the system.
The first shared resource LSR uses is buffer space. CICS supports only 11 different buffer sizes (in KB): 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0 and 32.0. If a file has a CISZ of 2.5KB, the data set must use a 4.0KB buffer to accommodate that CISZ. We use the term “CISZ” to describe the VSAM file “block size” definition, and “buffer size” to describe the LSR pool area where the CI is processed. Whenever the data set CISZ isn’t one of the 11 selected sizes, buffer fragmentation occurs that may or may not be acceptable. CICS allocates buffers to one of the 11 possible sizes depending on the file’s CISZ. You can define up to 32K (32,767) buffers of each of the 11 sizes CICS supports.
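The CISZ-to-buffer rounding rule above can be sketched as follows. This is a minimal Python illustration; `lsr_buffer_for` is a hypothetical helper written for this article, not any CICS interface.

```python
# Map a VSAM CISZ to the smallest of the 11 CICS LSR buffer sizes that can
# hold it, and report the fragmentation (wasted bytes) per buffer.
# Illustrative sketch only; the size list mirrors the article's 11 sizes.

LSR_BUFFER_SIZES = [512, 1024, 2048, 4096, 8192, 12288,
                    16384, 20480, 24576, 28672, 32768]  # bytes

def lsr_buffer_for(cisz_bytes):
    """Return (buffer_size, wasted_bytes) for a given CISZ."""
    for size in LSR_BUFFER_SIZES:
        if size >= cisz_bytes:
            return size, size - cisz_bytes
    raise ValueError("CISZ exceeds the 32KB LSR maximum")

# A 2.5KB (2,560-byte) CISZ must round up to a 4.0KB buffer,
# wasting 1.5KB in every buffer that holds one of its CIs.
print(lsr_buffer_for(2560))  # (4096, 1536)
```

The same helper confirms the 18KB CISZ example discussed later: an 18,432-byte CI lands in a 20.0KB buffer with 2.0KB of fragmentation.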
A second shared resource is the number of strings allocated to the pool. Strings are allocated using the SHARELIMIT percentage up to a maximum of 255 strings. The final shared resource is the key length. The key length specified must be sufficiently large to accommodate the largest key belonging to the files in the shared LSR pool. If a file has a longer key length than the one specified to the pool, the file won’t open.
Dynamic pool allocation is easy to implement and reduces the systems programmer’s intervention because there’s no need to inventory the files to determine the largest key size, the number and size of buffers required, or the number of strings needed. However, there are some major disadvantages to dynamically creating an LSR pool. The first disadvantage is that the dynamic allocation algorithm creates only one buffer pool that’s shared between data and index buffers. Having only one buffer pool for both can create a contention between similar CI sizes in the data and index. In other words, mixing indices and data CIs in the same buffer pool tends to flush indices out of the pool; these tend to have a more concentrated access pattern because there are fewer index CIs than data CIs in files. This is especially true in a pool that contains heavily browsed files.
Another disadvantage is that the number of buffers selected is based mainly on the number of strings and not by file activity. You would need to increase the number of strings allocated to a file in order to increase the number of buffers allocated to one particular CISZ. This technique also would increase the number of strings allocated up to a pool maximum of 255, wasting virtual storage that could have been allocated to other purposes such as additional buffer space. Finally, dynamically creating the pool takes a lot of time because the catalog is accessed for every file in the pool to determine the key length and CI sizes of the data and the index. Since the pool is created when the first file is opened, all the other data sets are closed and require CICS to access the catalog to get the required information. The real clunker is that you will have to pay the price to access the catalog for the files again when you reference them for the first time.
The best practice is to statically predefine the LSR pools through CEDA. To obtain a static LSR pool definition, you must provide the buffer definitions, string, and key length information. Buffer definition includes the number of buffers desired by size. If any of these elements is missing, a dynamic allocation for the missing piece occurs and the advantages of predefining the pools may not be fully attained. Static definition allows for faster initialization because the catalog need not be queried, avoiding the overhead associated with having to “open” the files to determine the key length and CI sizes involved. In addition, you can individually tune the buffer pools on an activity basis and separate the data and index buffers to avoid contention for the same CI sizes. You also can optimize the number of strings assigned to the pool.
Static definition requires systems programming intervention to determine the buffer sizes and quantity to be allocated, the number of strings required, and the maximum key length. The process requires planning and exposes a systems programmer to errors. For example, if you forget to allocate a particular buffer size a file requires, the file will use the next larger buffer size, if one is available. However, if there’s no larger size, then the file won’t open. So all pools should have at least three 32KB buffers defined to ensure that files will open even though performance isn’t optimum. The process to tune LSR pools is repetitive. Changes occur, the pool is re-initiated, and new results measured.
LSR pool effectiveness is measured by the look-aside hit ratio. Generally accepted ratios are:
• Data: 80 percent or better
• Index: 95 percent or better
• Combined: 93 percent or better.
These objectives can vary but the important thing is to have objectives that can be used to measure the effectiveness of the LSR pools and the changes made. The index objective is higher because there usually are more I/O operations to the index component than to the data component. Most KSDS files have two or more index levels. So you would require two or more reads to the index component and only one read to the data component to locate a record.
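The hit ratio itself is a simple calculation from the CICS statistics. The sketch below shows the arithmetic; the function name and parameters are illustrative, not fields from any CICS report.

```python
# Look-aside hit ratio: the percentage of read requests satisfied from an
# LSR buffer without a physical read. Illustrative sketch of the arithmetic.

def lookaside_hit_ratio(read_requests, disk_reads):
    """Percent of requests satisfied without physical I/O."""
    if read_requests == 0:
        return 0.0
    return 100.0 * (read_requests - disk_reads) / read_requests

# 1,000,000 index requests with 50,000 physical reads meets the
# 95 percent index objective exactly.
print(lookaside_hit_ratio(1_000_000, 50_000))  # 95.0
```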
Improving the hit ratio is usually a function of adding buffers of the particular size in question. The number of additional buffers depends on the amount of virtual and real storage available. Adding buffers to improve the look-aside hit ratio should concentrate on the buffers with the highest request activity. For example, imagine that the 4KB and 20KB buffers reflect hit ratios of around 77 percent in the data pool. Suppose the 4KB buffer pool has 1.4 million requests, while the 20KB buffer pool has 529,000 requests. Both buffers should be improved. However, if there’s a shortage of virtual or real storage, then fixing the 4KB buffer would probably have a better effect on the overall LSR pool hit ratio. If there are sufficient resources available, then fix both. Determining the number of buffers to add to the pool (data or index) usually occurs by trial and error. You look at the different attainments and add a certain number to each of the buffer sizes that need adjustment.
LSR look-aside buffer hit ratios and percentages can be misleading. The look-aside percentage attained is for the buffer in the pool and not necessarily for the files that access the buffer. For example, a 4KB buffer may reflect a look-aside hit ratio of 85 percent. This particular buffer size may be used by many files in the pool. So, some of the files can have a better look-aside hit ratio percentage than the buffer attainment while other files may have a lower look-aside hit ratio percentage. Tuning the index buffer pools should take precedence over tuning the data pools because the index pool usually has more I/O activity. In addition, index CI sizes tend to be smaller than the data CI sizes. So, it’s possible that a smaller investment in virtual and real storage would be required to improve the hit ratio. Also, there are fewer index CIs than there are data CIs, so the reference patterns for index records would be more concentrated.
Tuning data buffers can vary by system because the search patterns for the data component are generally random and access is dispersed because of the size of the data component. Obtaining high hit ratios in the data component usually entails a large investment in virtual and real storage. Good data hit ratio candidates are files with a compressed access pattern that often reference the data, files with sequential read activity (browse), and files with read for update/rewrite and delete activity.
Extremely large files with random activity can have a negative effect on the data pool reference pattern. However, even though a data set has dispersed access to the data, the index portion of that data set can obtain excellent look-aside hit ratios that more than justify allocating the data set in LSR. Data sets with dispersed access usually have three levels of indices, so you could receive hits on the index portion, satisfying three of the four I/O operations. VSAM files that have Share Options 4 specified should never be in the LSR pool because direct reads of the CI cause a reread from disk to ensure we have the latest copy. This negates the look-aside capability of the LSR pool and tends to flush good buffers.
Once changes have been made, measure the results immediately after adding buffers to a pool. A look-aside improvement of 3 to 5 percent is generally needed to justify the virtual and real storage investment being made for a particular buffer. If there’s little ROI, then consider reassigning these resources to another buffer or pool because resource availability has a limit. Finally, there’s a possibility of improving hit ratios by standardizing data CI sizes. Standard CI sizes will let you create large buffer pools of a limited number of CI sizes. This facilitates better resource usage.
Tasks required for performance tuning of LSR pools are:
• Maintain an inventory of files in each pool, identifying the associated CI sizes
• Reconcile the LSR buffers with the file CI sizes to ensure the buffers are properly allocated
• Review the CICS statistics and ensure the installation look-aside hit ratio percentages are achieved
• Ensure there are sufficient strings available in the LSR pool
• Ensure the LSR pools are statically defined via CEDA.
There are other areas associated with tuning the LSR pools that are overlooked in many installations because they aren’t visible or identifiable, there’s insufficient manpower to dedicate to the tuning process, or there’s a lack of understanding as to their importance. In some cases, the overlooked area is discovered as a result of new applications or changes that occur. Some of the overlooked areas are:
• Buffer fragmentation
• LSR buffer size vs. file CISZ reconciliation
• Page boundary allocation
• Number of LSR pools defined
• Buffer pool monopolization
• Number of strings required
• Maximum key size.
The ROI received from tuning these areas will vary by installation. Some of these changes are minor, but the idea is to have a well-tuned system. So, if you’re going to look at the LSR pools to make them optimum, then fix everything you can fix.
Buffer fragmentation is common when using LSR pools in CICS; it occurs when the file CISZ is smaller than the CICS LSR pool buffer being used for the I/O operation. It’s common because CICS uses only 11 buffer sizes to handle the I/O requests from VSAM files, while VSAM has 28 different CI sizes available. So, buffer fragmentation is inevitable unless you limit the VSAM cluster definitions to the sizes CICS LSR supports. Buffer fragmentation results in extra virtual and real storage being used for I/O operations beyond what the actual CISZ requires.
There are two basic types of buffer fragmentation:
1. Buffer fragmentation that results from selecting a CISZ that isn’t one of the 11 CICS buffer sizes (such as selecting a CISZ of 1.5KB but having to use a 2.0KB buffer)
2. Buffer fragmentation that results from not having defined a particular buffer size, forcing a CISZ to use a larger buffer than required, such as, in the previous example, having a CISZ of 1.5KB but using a 4.0KB buffer because we didn’t define a 2.0KB buffer.
The second type is definitely an area that requires attention. An interesting CISZ for a non-VSAM/E (non-Extended VSAM) file is 18KB. This CISZ provides the best track usage (three 18KB CIs per track, or 810KB per cylinder) and usually the most records per cylinder. However, this CISZ causes buffer fragmentation because the appropriate CICS LSR buffer is 20.0KB, a fragmentation of 2.0KB. So there’s a choice to be made involving disk space vs. buffer fragmentation. The buffer fragmentation is justified because of the positive disk space effect that results from using this CISZ. The 18.0KB size isn’t as good a selection for Extended VSAM (VSAM/E) files because the space allocations aren’t as good. This is caused by the need to add a 32-byte trailer to the physical record size, causing VSAM to use three 6.0KB physical records to handle the 18.0KB CISZ. This results in a lower utilization of 720KB per cylinder, an 11 percent reduction in disk space use. This is an important point to remember when converting from VSAM to VSAM/E.
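The disk space penalty quoted above works out as follows. This is a hedged arithmetic sketch using only the figures in the text (3390 geometry, 15 tracks per cylinder) and ignoring device-level block overhead.

```python
# Per-cylinder data capacity for an 18KB CISZ on 3390 geometry, with and
# without the VSAM/E 32-byte trailer, using the article's figures.

TRACKS_PER_CYLINDER = 15

def cylinder_capacity_kb(cis_per_track, cisz_kb):
    """KB of CI data per cylinder at a given packing density."""
    return cis_per_track * cisz_kb * TRACKS_PER_CYLINDER

classic_kb = cylinder_capacity_kb(3, 18)   # three 18KB CIs/track -> 810KB
extended_kb = 720                          # VSAM/E figure from the text
reduction = 100 * (classic_kb - extended_kb) / classic_kb
print(classic_kb, round(reduction, 1))     # 810 11.1
```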
One of the major problems associated with locating any buffer fragmentation is the sheer number of VSAM files you can assign to a particular pool. An option used in some installations to eliminate buffer fragmentation is to simply assign buffers to all 11 sizes. The problem with this alternative is that you may wind up allocating virtual storage to unused buffers, storage that could have been used to improve other buffers in the pool. So, how many buffers do you allocate to unused sizes without wasting too much storage? Some organizations are willing to allocate anywhere from three to 25 or more buffers to address this condition.
Consider a hypothetical situation. Suppose you have no files that use a 16KB buffer and you have 500 20.0KB buffers defined. Assigning 20 16.0KB buffers results in 320KB of storage sitting unused day after day. If you never open a file with a 16.0KB CISZ, this becomes permanently wasted storage.
Wouldn’t it be better to take that 320KB wasted storage and convert it to 16 20.0KB buffers, increasing the total to 516 buffers? The initial advantage is that the storage is used, immediately benefiting the 20KB buffers. If a 16.0KB file is opened, you have part of the allocation in the 20.0KB pool. That amount allocated wasn’t the original planned allocation, but so what? It was an estimate anyway. The important thing is to be able to identify the occurrence and take action. There’s an exception to this condition. You’d always ensure you’ve defined at least three 32.0KB buffers (minimum allocation) as a safety valve.
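The reclamation arithmetic above can be sketched as follows; `convert_buffers` is a hypothetical helper for illustration only.

```python
# Convert storage tied up in unused buffers of one size into whole buffers
# of another size. Numbers follow the 16KB/20KB example above.

def convert_buffers(count, from_kb, to_kb):
    """How many to_kb buffers fit in the storage held by count from_kb buffers."""
    return (count * from_kb) // to_kb

# Twenty unused 16.0KB buffers (320KB) become 16 extra 20.0KB buffers.
print(convert_buffers(20, 16, 20))  # 16
```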
Another similar consideration involves the many VSAM CI size options and buffer allocations. It’s somewhat typical to see a 1.5KB CISZ allocated for an index CI that has a 4.0KB data size on a 3390 geometry disk drive. The LSR buffer required to process this CISZ is 2.0KB. Is there any benefit to increasing the index CISZ so the entire buffer is used? In the case of the index component, you may benefit from raising the CISZ from 1.5KB to 2.0KB. The benefit may not be seen at the sequence set level (lowest index level for a KSDS file), but at the index set level (the second and higher levels in a KSDS file). There are several benefits that can occur from increasing the index CISZ to match the buffer size and eliminate buffer fragmentation, such as:
• The larger index record may result in fewer index set records in the file. This means fewer buffers are required to hold the entire high-level index in virtual storage.
• In some cases, a larger index set record may reduce the number of index levels a file has. The fewer index levels, the fewer index reads required to locate the data—reducing the CPU overhead to locate a record in LSR.
• In some cases, having a larger index CI can reduce or eliminate potential key compression problems that result in lost disk space and premature CA splits. Key compression problems aren’t easy to identify.
The major problem with small data CI sizes is that they tend to yield lower track utilization and usually result in larger index CI sizes. So, for the data component, if you have a CISZ of 2.5KB that requires a 4.0KB LSR buffer to process, you’ll get better track utilization by raising the data CISZ to 4.0KB and will probably also achieve a lower index CI size. There are other factors to consider in this decision, such as the record size and the need to have only one record per CI to avoid exclusive control conflicts.
A small benefit available in the LSR pool definition involves the number of buffers allocated for the 0.5, 1.0 and 2.0KB sizes. Buffers are allocated in 4.0KB increments on a 4.0KB boundary. So, you could have inadvertently requested an inefficient multiple of buffers for these sizes, resulting in fragmented buffer storage. When requesting storage for these particular sizes, ensure the total number of buffers is a multiple of:
• 0.5KB – multiple of eight
• 1.0KB – multiple of four
• 2.0KB – multiple of two.
For example, imagine that you requested 25 1.0KB buffers. This would require 25KB to accommodate the buffers. However, the buffers are allocated in page increments (4.0KB). The actual allocation would be 28KB, or space for an additional three buffers without increasing the total allocated storage.
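The page-rounding effect in the example can be sketched as follows. The helper is illustrative; it simply applies the 4KB-page allocation rule described above.

```python
# Round a small-buffer request up to whole 4KB pages and report how many
# buffers that storage actually holds. Illustrative arithmetic only.

PAGE = 4096

def effective_buffers(requested, buffer_size):
    """Buffers that fit in the pages needed to satisfy the request."""
    pages = -(-requested * buffer_size // PAGE)  # ceiling division
    return pages * PAGE // buffer_size

# Requesting 25 1.0KB buffers consumes 7 pages (28KB), which actually
# holds 28 buffers -- three more at no extra storage cost.
print(effective_buffers(25, 1024))  # 28
```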
CICS allows up to eight separate LSR pools to be defined in z/OS and 15 in z/VSE. These pools are identified by a sequential number from one to eight (or one to 15 in the case of z/VSE). When LSR support was first added to CICS, the search for a particular CI in the buffers was sequential. As you added more buffers to a pool, the search for a record got longer and could use more CPU than an actual physical I/O operation. So, the use of different pools was justified to distribute the search cost across several pools, reducing the average search cost. However, IBM improved the LSR search algorithm in the early ’90s (later for z/VSE), changing it from a sequential search to a hashing technique. Hashing provides a relatively even search cost regardless of pool size. There’s still a little additional overhead possible when handling synonyms (CIs that hash to the same place), but this is usually a minor cost. We now can allocate large buffer pools without the sequential search cost. So, the question is, do we need multiple pools and what are the benefits?
As a result of the new search algorithm and the capacity to allocate up to 32K (32,767) buffers per buffer size, the need for more than one pool has been reduced. Consolidating multiple buffer pools into one buffer pool can have some advantages because a larger number of buffers is available among pool participants. Access patterns to disk files may not be consistent. Some files may be heavily accessed while others may be lightly used. The access patterns may vary during the day. Heavily accessed files during the morning may not be as heavily used in the afternoon. This concept can be seen in the larger capacity disk drives available today. There was a fear many years ago that, as you increased the disk storage capacity under one access mechanism, there would be increased contention. Modern disk technology includes disk cache storage that has been instrumental in reducing contention. So, if the disk volumes have varied access patterns, then wouldn’t it also be true for the buffers assigned?
There’s a tendency to add “a few more buffers” when defining a pool as a kind of safety valve. For example, you may determine that, in a particular pool, you need 400 4.0KB buffers. However, you may allocate 425 or 450 buffers to allow for growth or to give you a small fudge factor. If you do this across the different buffer pools, you have overallocated the pools. Consolidating these pools gathers all the buffers and fudge factors into one large pool and lets VSAM allocate the buffers to the files that need it.
Another reason for having additional LSR pools is to segregate files that monopolize an LSR pool buffer. The emphasis on the word buffer is intentional. A file monopolizes a buffer in a pool and not the pool. However, what CICS statistic do you use to determine that a file is monopolizing a particular LSR buffer? CICS file statistics concern the number of I/O requests and not the number of buffers used. One could reach the conclusion that I/O requests equal the number of buffers, but that could be wrong. Consider a highly accessed key-point file that may consist of 100 CIs. This file could reflect millions of requests a day but would use only 100 buffers, maximum. Another example would be a heavily browsed file. Here, the number of browse requests doesn’t consider the number of lookaside hits you had in the same CI before you had to use another buffer for the next CI.
Most performance monitors don’t reflect the number of buffers a file actually uses; this is the information needed to determine which, if any, file is monopolizing a buffer in a pool. However, before determining that a file is monopolizing a particular buffer, there’s another step to take. You also must define the word “monopolizing.” What do you consider a monopolizing percent? 90 percent? 70? 50?
We need to review certain concepts before we can address LSR buffer monopolization.
First, all files in a CICS system aren’t simultaneously accessed. In fact, studies have shown that 20 percent of the defined files in a CICS region are continuously used. This is similar to the “old” inventory 80/20 rule where 20 percent of the inventory represents 80 percent of the activity. The 20 percent may represent your installation’s most important files; they’re your “bread and butter” files.
Second, LSR is a means of sharing resources. The algorithm CICS uses to allocate buffers is an LRU formula. Buffers that contain data are holding the most recently referenced CIs. Your bread and butter files probably populate most of these buffers. Don’t you need fast access to provide good response times for these files? So, what’s wrong with having them monopolize the buffer pools?
Robin Hood in Reverse
LSR tuning is really a Robin Hood story in reverse. In LSR, you use the resources of lightly or intermediately used files to better support higher activity files. In other words, “you rob from the poor to give to the rich.” You take resources from low-activity files to give to high-activity files. If your response times are good, why should you worry because you had to do an I/O for one of the 80 percent of your files that has less activity? Their participation in LSR is to contribute resources. It sounds sort of cruel, but that’s the truth. Or, you can think of it another way. If you had placed this low-activity file into NSR to “protect” its resources, you’d have virtual storage allocated to the file at the expense of being able to provide this storage to your bread and butter files.
So, when is buffer monopolization a problem? The best response is, when you aren’t receiving good response times from your important files while achieving a high look-aside hit ratio. This condition usually manifests itself in one or two files controlling more than 80 percent of the buffers—affecting other high-activity or important files. A possible solution is to add more buffers even though you may be achieving the look-aside hit ratio. The idea is that, eventually, the high-activity files in the buffer pool will meet their buffer requirements and allow other lower activity files to acquire and hold buffers. The other possibility is to move one or both of these files to their own pool.
There may be times when you want to place a file in its own separate pool to ensure the entire file is in virtual storage (this is the equivalent of a data table). This concept is used when you have a data table candidate, but due to the amount of output I/O, the data table may not be a good performer. The quoted I/O objective is that a data table should be used for a file that has 90 percent or more read operations. So, if you have a file that has 60 percent read vs. write operations, the file may not perform as well in a data table as in LSR. This type of file is usually highly accessed and small enough to be a data table candidate. However, due to the high write-to-read ratio, it may not be a good candidate for a data table. So assigning this particular file to its own LSR pool may be justified.
How Many Strings?
One final area that may require defining more pools is if you have a pool that requires more than the maximum 255 strings that can be assigned to a pool. This may happen, but be sure you’re getting good look-aside hit ratios before exercising this option. The better the look-aside, the faster the string is released.
Another area that requires attention is the number of strings assigned to an LSR pool. The number of strings required depends on the pool activity and the type of operations being performed. The first consideration is the activity against the files. Some activity requires only one string, but activity that involves AIX files will require additional strings, especially if the AIX is in the upgrade set. Certain I/O requests, such as a direct read, release the string immediately. However, browse or read for update operations will hold the string for a longer period of time.
The number of strings required also is affected by how well the look-aside hit ratio is working because the better the look-aside hit ratio, the faster the string is released for use by another file. The number of strings assigned is generally overallocated. So, before increasing the number of strings in a pool, ensure you’re attaining the look-aside hit ratio for both the index and the data components. The higher the look-aside hit ratio, the fewer physical I/O requests an operation requires. CICS provides information about how many strings are allocated, how many are currently in use, the peak number of strings used, the number of times a task had to wait for a string, and how many tasks are currently waiting for a string. You want zero waits on strings. The objective is to have the peak number of strings used be around 40 to 60 percent of the total allocated.
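A simple health check against these objectives might look like the sketch below. The thresholds come from the guidance above; the function and its messages are hypothetical, not any CICS interface.

```python
# Classify an LSR pool's string allocation using the 40-60 percent
# peak-usage band and the zero-string-wait objective described above.

def string_health(allocated, peak_used, string_waits):
    """Return a tuning suggestion for an LSR pool's string statistics."""
    if string_waits > 0:
        return "add strings (after confirming look-aside objectives are met)"
    usage = 100.0 * peak_used / allocated
    if usage < 40:
        return "over-allocated: consider reclaiming strings for buffer storage"
    if usage > 60:
        return "running hot: consider a few more strings"
    return "healthy: peak usage within the 40-60 percent band"

# A pool with 100 strings, a peak of 50 in use, and no waits is on target.
print(string_health(allocated=100, peak_used=50, string_waits=0))
```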
There are two types of string waits associated with files assigned to LSR. The first is a string wait caused by having insufficient strings assigned to the entire pool to handle the concurrent requests that can occur at any given point. If the look-aside objectives are being met, add more strings. The second condition is associated with the number of strings that can be assigned to the file at any moment. This type of short on strings condition would be reflected at the file level, not at the LSR pool level. The number of strings specified for a file represents the maximum number of concurrent requests allowed for that file at any given moment. As with all short on strings conditions, ensure the proper look-aside hit ratios are being achieved for the associated buffer sizes. If not, adjust the buffers first to get the improved look-aside hit ratio.
Associated with the number of strings specified is the key length assigned to the pool. Fixing the specified pool key length is a minor point because not much storage is involved. The key length must be at least as large as the longest key of any file opened in the pool. Many installations simply specify a key length of 255 bytes. This specification results in a waste of virtual storage in most installations because it’s rare to see a key greater than 128 bytes long. The key length is used during each I/O operation in which a string is involved. So, if both strings and key length are over-allocated, the resulting storage reflects some waste. Reduce the key length to 128 or lower and use the saved storage for additional buffers.
In closing, it’s probably much easier to identify which files aren’t candidates for LSR. As previously mentioned, Share Options 4 files should never be assigned to LSR because there’s little look-aside that can occur on these files. Keeping a Share Options 4 file in an LSR pool tends to flush the buffers because the assigned buffer becomes the most recently used and won’t be reused. There are some highly active third-party package control files that are specified Share Options 4 and can have devastating effects on your LSR pool statistics.
Another type of file is one that doesn’t follow command-level programming guidelines. An example is a non-RLS file that receives a read for update request in the middle of a browse operation (READNEXT). Although this function works while the file is assigned to NSR, in LSR (non-RLS) the task will hang and can cause problems that affect the entire system. In NSR, a second string is assigned to handle the read for update request and the CI is reread into a new buffer assigned to the second string; the CI would appear twice in storage. Another file type that could cause problems for an LSR pool is one that has many CA splits, because the buffers aren’t reused. These types of files should be tuned separately to reduce the number of CA splits if you want to keep them in LSR.
Note that CA splits under LSR don’t tie up the CICS Task Control Blocks (TCBs) while performing the CA split. LSR uses synchronous file requests and the UPAD exit to handle CI and CA splits. This method doesn’t tie up either the subtask (CO) or main task (QR) TCBs. VSAM takes the UPAD exit while waiting for a physical I/O to complete, allowing for processing to continue, and lets CICS dispatch other work. NSR file control requests are done asynchronously and cause the CICS main task (QR TCB) or subtask (CO TCB) to wait during the split. It’s best to activate VSAM subtasks if you have NSR files prone to CI/CA splits. Effective use of the VSAM subtask (CO TCB) requires that CICS be running on a multi-processor. CICS supports transaction isolation for files defined in LSR but not in NSR.
This article reviewed many concepts on how to improve the performance of your LSR pool that can result in better response times for your transactions in an online environment. The better the look-aside hit ratio, the less I/O overhead the system is going to have, and the better use of CPU, storage, and I/O resources for applications running under CICS. Tuning LSR pools is easy, but it requires occasional review once you do your initial tuning.