Feb 1 ’05

Tuning Non-Shared Resource (NSR) Files Running in CICS

by Editor in z/Journal

There are several ways to handle VSAM files in CICS/TS using different buffering techniques. The most common buffering technique is Local Shared Resources (LSR), where the file’s buffering is shared in a buffer pool. (We’ll review LSR in a future article.) Another alternative is Record Level Sharing (RLS), where the file’s buffering is handled by SMSVSAM in a separate region and data space. This article explores the tuning associated with Non-Shared Resource (NSR) files running under CICS, where the buffering is defined for the exclusive use of each file.

As a starting point, an important question to ask is, “Why is this file in NSR?” There has to be a strong justification for assigning a particular file to NSR buffering instead of LSR, because LSR’s superior look-aside algorithm generally offers better performance.

This article will concentrate on performance issues associated with VSAM KSDS files in our review of NSR.

The first step is to determine which files are designated as NSR in the CICS partition. You can obtain the information from the CICS statistics, file definitions, or a performance monitor (see Figure 1).

NSR is a means of processing VSAM files in which the resources are dedicated to the file. Specifically, dedicated means the control block structures, including the strings and buffers, are owned by the VSAM file for which they were defined. NSR is the default access for batch jobs and can be optionally selected for online CICS files by specifying NONE for the LSR pool ID in the file definition under RDO. In this case, all strings and buffers defined are allocated for the file’s exclusive use.
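
In RDO terms, an NSR file definition might look like the following sketch, shown roughly in DFHCSDUP (batch CSD update) style; the file, group, and data set names are hypothetical. DATABUFFERS and INDEXBUFFERS supply the VSAM BUFND and BUFNI values discussed below:

DEFINE FILE(CUSTFILE) GROUP(TESTGRP)
       DSNAME(PROD.CUSTOMER.KSDS)
       LSRPOOLID(NONE)
       STRINGS(5)
       DATABUFFERS(6)
       INDEXBUFFERS(8)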

To properly tune an NSR file, you need to know how the file is being primarily accessed because the buffer allocations vary depending on whether the file is accessed directly (random) or sequentially. When accessing the file sequentially, you want to allocate additional data buffers (BUFND) to be able to overlap data I/O operations such as reading ahead. When accessing the file directly, you’d want to have additional index buffers (BUFNI) to be able to get better index look-aside hit ratios. For dynamically accessed files, you would probably want to increase both the data and index buffers.
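
For comparison, the same two values can be supplied in batch (where NSR is the default) through the AMP parameter of the DD statement; the data set name here is hypothetical:

//CUSTIN   DD  DSN=PROD.CUSTOMER.KSDS,DISP=SHR,
//             AMP=('BUFND=12,BUFNI=5')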

In a CICS online environment, sequential processing occurs when you browse the file (e.g., STARTBR, READNEXT and ENDBR) or whenever a CA split occurs, as the movement of CIs from one CA to another is a sequential process. In these cases, having defined additional BUFNDs can help improve the time it takes to perform the sequential operation. Online direct processing of VSAM files (e.g., READ) occurs whenever you issue commands requesting a specific record. In this case, additional BUFNIs can improve direct access. The challenge would be to determine how many buffers are required for optimum performance. Let us first review how a VSAM file uses the resources allocated. We will start with direct processing.

An NSR file that has one string assigned requires at least two BUFNDs and one BUFNI. The extra data buffer is used to process splits. Each string requires one data and one index buffer. The data buffer is used to read the appropriate data CI while the index buffer is used to read the entire index CIs associated with locating the proper data CI. In other words, a file that has three index levels requires three index CIs to be read directly before the location of the data CI is identified. Thus, there are potentially four I/O operations possible, three for the index that use the index buffer and one for the data that uses the data buffer. The problem is that there’s only one index buffer. So, subsequent file requests will require that the three index I/O operations be done again, even if the same index CIs are requested. This is because each index read overlays the previous index CI. By the time you read the last index record, all higher-level index CIs would be overlaid and unavailable. More index buffers would help improve this condition. The extra index buffers are used to house the index set records (second and higher-level index). So, if additional BUFNIs were specified, you could avoid reading these CIs in future requests. The extra BUFNI buffers provide additional look-aside capacity to an NSR file.
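
As a rough, hypothetical illustration, consider a KSDS with a three-level index and one string, giving the default two data buffers and one index buffer:

First direct read:  3 index I/Os + 1 data I/O = 4 I/Os
Next direct read:   3 index I/Os + 1 data I/O = 4 I/Os (index CIs re-read)

With extra BUFNIs already holding the index set CIs:

Next direct read:   1 SSI I/O + 1 data I/O = 2 I/Os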

Look-aside is a term used to describe the process of searching for a particular record in storage buffers. Finding the desired CI in a virtual storage buffer reduces the overall access time and helps reduce the CPU time it takes to process the I/O operations to disk. The major objective when tuning I/O is to find the data in virtual storage and avoid a physical I/O operation. A good look-aside hit ratio can improve online response times. The look-aside hit ratio is one of the ways you can measure the effectiveness of the assigned buffering to a file. The higher the percentage of times you find the data in virtual storage, the better the response time. The look-aside formula is:

% HR = (Total # of buffer hits * 100) / (Total # of misses + total # of hits)

- Hit: when the desired CI is found in the virtual storage buffer

- Miss: when the desired CI is not found in the buffer, requiring an I/O

- HR: look-aside hit ratio.
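
For example, with a hypothetical 800 hits and 200 misses over 1,000 look-asides:

% HR = (800 * 100) / (200 + 800) = 80 percent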

When processing direct KSDS files, it’s important to determine the number of index buffers to be allocated to the file; this ensures the best look-aside hit ratio for the high-level index records (i.e., second level and higher indices). These indices are called the Index Set (IS) to differentiate them from the lowest index level, called the Sequence Set Index (SSI). A VSAM LISTCAT provides the information needed to determine how many IS records are in a file (Figures 2, 3, and 4). Before working through the formula, look at the number of index levels in the file to avoid unneeded computations. The number of index levels can be found on the second LISTCAT page at the bottom right-hand side (Figure 3). If the file has two index levels, then there’s only one IS record, so only one additional index buffer is required. If the index level is equal to one, then there are no IS records and no additional BUFNIs are required; in this case, there’s only one SSI record.

The information required from the LISTCAT is as follows:

- Total number of index records (Figure 2)

- Data CISZ (Figure 2)

- Number of data CIs per CA (Figure 2)

- High Used RBA (HURBA) (Figure 3)

- Total number of strings assigned (Figure 4).
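
The catalog figures can be obtained with an IDCAMS LISTCAT such as the following sketch (the data set name is hypothetical); the string count comes from the CICS file definition or statistics:

//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  LISTCAT ENTRIES(PROD.CUSTOMER.KSDS) ALL
/*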

To compute the number of records in the IS, you need to obtain the information from a LISTCAT for the file and follow these detailed steps:

1. Compute the number of Control Areas (CAs) in the file. The number of CAs is equal to the number of SSI records (lowest-level index):

High Used Data RBA / (Data CISZ * Data CI/CA) = Total number of CAs in the file

2. Locate the number of index records in the file and, using the above result, compute:

# Index records - # of CAs in the file = Total # of IS records

3. The total number of IS records from the above formula represents the minimum additional BUFNIs required for the file. Using the above figure, compute:

# File strings + # of IS records = Total # of required BUFNIs

Once you’ve determined the number of IS records in the file, add this result to the total number of strings to determine the number of BUFNIs required to hold the IS and the associated SSI records. Note that each string requires one index buffer to handle the SSI record. So, the number of strings is used to adjust the formula results from the final step.
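
As a hypothetical worked example, suppose the LISTCAT shows a High Used data RBA of 368,640,000, a data CISZ of 4,096, 180 data CIs per CA, and 508 index records, and the file is defined with five strings:

368,640,000 / (4,096 * 180) = 500 CAs (and therefore 500 SSI records)
508 index records - 500 CAs = 8 IS records
5 strings + 8 IS records = 13 BUFNIs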

We mentioned that the result from the formula was the minimum buffers required for processing the NSR file. Why is this the minimum? If we have a volatile file, then there may be splits within the file. Splits occur in the data portion and the index portion. A data CA split can result in a CA split in the index component. This extra index CI can be another IS record that wasn’t included in the formula. So, if the file is prone to CA splits, then it’s best to add a few more BUFNIs to account for the extra IS CIs that may occur during the life of the file. Usually, two or three extra BUFNIs are sufficient to handle the extra CA split activity unless the file is infrequently reorganized.

In this case, you’d have to compute the number of IS records using the above formula just before reorganizing the file. For a three-level index file, adding sufficient index buffers will ensure a look-aside hit ratio of at least 50 percent once the extra IS buffers are full. You can increase this hit ratio further. A look-aside is also done against the string-level buffers: the string’s index buffer is searched to see if the desired SSI CI is present and, if found, it is used. In addition, when the SSI is found, the string’s data buffer is also searched to see if the desired data CI is present. As you add more strings to the file, you reduce the chance that a request is assigned the string that already holds the desired CIs. Still, there’s a chance you could get a better than 50 percent look-aside hit ratio for an NSR file.

NSR also provides the capability of improving sequential operations such as browses and CA splits. Sequential activity can be improved by adding data buffers in excess of the ones the strings need and the extra one used for splits. All extra buffers available are assigned to the first task that wants them. So, one cannot give preference to CA split processing over a normal browse operation, because the first sequential operation takes all the available buffers. The problem lies in determining how many extra data buffers to define. For example, you’d want read-ahead so a browse transaction doesn’t have to wait for the records in a browse operation.

However, to determine how many buffers you’d want to read ahead, you’d have to know the applications running in the system. The first problem is determining how many I/O or READNEXT operations are normally requested for a sequential browse. Application programmers probably don’t know the answer to that question, either. So, you may decide to add 10 buffers above the ones allocated to the strings and the extra one for splits.

When the program begins its first browse operation, CICS assigns all the available extra buffers (10) to the operation. So, the I/O would read 11 CIs into the allocated buffers: the one normally assigned to the string plus the 10 additional ones. Reading 11 CIs elongates the response time because you need to wait until all 11 buffers are filled and the I/O operation completes before the application program can begin to process the data. What happens if the application program ends the browse after the second or third buffer is processed? Then the remaining buffers aren’t used, and you’ve added overhead to the transaction by reading unused data.

Also, what happens to a second browse request that may occur while the first browse is still active? The second request would get one buffer, the one assigned to the string for the request. Another important question to ask is how much data should the transaction process? The answer is as much as it can display on one screen. Most browse transactions can display only one screen of information, or around 16 to 18 lines of detail. So, why read more than can be displayed? Reading more than the capacity of the display would mean the program would have to store the data (e.g., in Temporary Storage) and re-access it when needed, adding to the overhead. In addition, a transaction that finds the desired record in the buffer tends to monopolize the system, affecting other transactions running concurrently. Therefore, adding more buffers to benefit one transaction may have a negative effect on the overall system response and may waste I/O operations.

A final note is required regarding Web programming under CICS/TS. You could send the entire output from the browse to a PC and use the scroll bar to view the data without having to return to the CICS system to get more data. However, this would require larger amounts of virtual storage to build the message, and you would be increasing the response time by the amount of data transmitted to the PC over the Web.

A better alternative to adding buffers for heavily browsed files is to ensure the proper CISZ. The idea is to determine the average number of browse requests made to the file. This information isn’t readily available, but the CICS file statistics can be used to approximate this figure. You would need to use the number of EXCPs combined with the number of I/O requests for the file (Figure 5). A ratio based on the average number of requests vs. the number of records that fit into a CI can be computed. The objective is to get this ratio to be less than or equal to one.

Once this figure is known, you would try to make the CISZ big enough that the number of records to be read fits into one CI. For example, suppose you determine that the average number of browse requests is 15. The objective would be to make the CISZ large enough to accommodate 15 or more records. If the maximum CISZ cannot accommodate the number of records, or results in poor track utilization, then select a CISZ that reduces the number of I/O operations required to process the browse request. Naturally, a browse could start in the middle of a CI and require an additional I/O to complete. However, you’d limit the number of physical reads required to service the browse request to a few external operations. Larger data CIs also provide better free space alternatives.
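
As a hypothetical illustration, suppose the average browse touches 15 records of roughly 200 bytes each:

15 records * 200 bytes = about 3,000 bytes of data per browse

A 4,096-byte data CISZ would normally satisfy this browse from a single data CI, while a 2,048-byte CISZ would force two or more data I/Os for the same request.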

What about having extra buffers to improve CA split processing? The first problem is that if there’s any browse activity, there’s no way to predict what activity occurs first and acquires the extra buffers. Moreover, the handling of CA splits under NSR ties up the TCB on which the split occurs for the duration of the split processing. This affects the response time of all transactions using this TCB. If the SUBTSKS parameter hasn’t been specified, then the QR TCB will be locked out for the duration of the split. If the SUBTSKS parameter has been specified, then the CO TCB is locked up for the duration of the split. Activate the CO TCB in CICS systems that have NSR files on which CA splits can occur. Unfortunately, the CO TCB is recommended only for multi-processing systems that have more than one CPU available for processing.
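
Where subtasking is appropriate (a multi-processor region, per the note above), the CO TCB is activated through the SUBTSKS system initialization parameter:

SUBTSKS=1

The default, SUBTSKS=0, provides no CO TCB and leaves this work on the QR TCB.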

So, again, why is this file in NSR? NSR doesn’t provide a good look-aside hit ratio compared to LSR and, in the case of splits, can lock out the TCB processing the split. LSR uses a VSAM exit to process splits and doesn’t tie up the processing TCB. In addition, adding buffers can actually have a negative effect if the read-ahead buffers are never processed because the operation completed early, resulting in slower processing and wasted resources.

Any file that has Share Options 4 specified is suited for NSR. Share Options is usually an integrity issue. Selecting Share Options 4 with the proper ENQ/DEQ programming allows the file to be shared between more than one address space; for example, batch and online. However, Share Options 4 has a negative effect on performance because it can nullify look-aside processing. If file sharing is required, then RLS and Transactional VSAM (TVS) should be considered. This type of file should not be in LSR because it has a negative effect on the buffers and the hit ratio.
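
For reference, cross-region Share Options 4 is specified on the cluster definition; this sketch uses hypothetical names and attributes:

DEFINE CLUSTER (NAME(PROD.SHARED.KSDS) -
       INDEXED -
       KEYS(16 0) -
       RECORDSIZE(200 200) -
       CYLINDERS(50 10) -
       SHAREOPTIONS(4 3)) -
   DATA (NAME(PROD.SHARED.KSDS.DATA) -
       CISZ(4096)) -
   INDEX (NAME(PROD.SHARED.KSDS.INDEX))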

Another type of file that could be in NSR is one being processed by a program that doesn’t follow Command Level guidelines. Consider, for example, a program that issues a read for update in the middle of a READNEXT sequence. If this file is in LSR, the transaction will hang. However, if this file is in NSR, it will work as long as an additional string is available, because the read for update requires a new string to operate. You wind up re-reading the CI into virtual storage and having the CI twice in memory. This type of processing can still lead to a hung transaction in NSR because of a lack of strings. So, the systems programmer over-allocates strings to the file to reduce string lockouts. This results in wasted resources, as each string requires the assignment of a data and an index buffer.

NSR was made for batch processing. Its use should be limited in an online CICS environment to handle the two cases mentioned above. Otherwise, the file should be allocated into LSR.

There’s one final problem that should be reviewed when placing files into NSR. Transaction Isolation is one of the new storage protection features available in CICS since CICS/ESA 4.1. This feature is not widely used, but represents an important step toward controlling errant storage violations that can affect system integrity. We recommend its use in spite of the overhead. Transaction Isolation doesn’t support NSR files, so if you plan to use it, then the files have to be defined into LSR. Review all your files and determine the need for any NSR file. If there’s no justifiable reason, then convert the file to LSR and enjoy better performance.