We mentioned that the result from the formula was the minimum number of buffers required for processing the NSR file. Why is this the minimum? If the file is volatile, splits may occur within it, in both the data portion and the index portion. A data CA split can force a CI split in the index component, and the resulting extra index CI becomes another index set (IS) record that wasn’t included in the formula. So, if the file is prone to CA splits, it’s best to add a few more BUFNIs to account for the extra IS CIs that may appear during the life of the file. Usually, two or three extra BUFNIs are sufficient to handle the extra CA split activity, unless the file is infrequently reorganized.
In that case, you’d have to recompute the number of IS records with the formula just before reorganizing the file. For a three-level index file, adding sufficient index buffers ensures a look-aside hit ratio of at least 50 percent once the extra IS buffers are full. You can increase this hit ratio further, because look-aside also occurs at the string-level buffers: the string’s index buffer is searched to see if the desired sequence set (SS) index CI is already present, and it’s used if found. In addition, when the SS index CI is found, the string’s data buffer is also searched to see if the data CI is present. As you add more strings to the file, you reduce the chance that a request is assigned the string whose buffers hold the CIs it needs. Still, there’s a chance of achieving a better than 50 percent look-aside hit ratio for an NSR file.
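One way to see where the 50 percent figure comes from is to count the CI accesses a single keyed read makes. The sketch below is a simplified model, not anything from the formula itself: it assumes every index set CI is held in a BUFNI buffer (a hit), while the sequence set CI and the data CI are normally read from disk (misses). The function name and the model are our own illustration.

```python
def nsr_hit_ratio(index_levels):
    """Rough look-aside hit-ratio estimate for one random keyed read
    under NSR, assuming all index set CIs are resident in BUFNI buffers.

    A keyed read touches one CI per index level plus one data CI.
    The index set levels (all levels above the sequence set) hit in
    the buffers; the sequence set CI and the data CI miss.
    """
    index_set_hits = index_levels - 1      # cached index set levels
    total_accesses = index_levels + 1      # every index CI + the data CI
    return index_set_hits / total_accesses

# Three-level index: 2 hits out of 4 accesses, i.e. 50 percent.
print(nsr_hit_ratio(3))
```

String-level look-aside on the SS and data buffers is what can push the real ratio above this floor.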
NSR also provides the capability of improving sequential operations such as browses and CA splits. Sequential activity can be improved by adding data buffers beyond those the strings need and the extra one that splits use. All the extra buffers are assigned to the first task that wants them, so you cannot give preference to CA split processing over a normal browse operation; the first sequential operation takes all the available buffers. The problem lies in deciding how many extra data buffers to define. For example, you’d want read-ahead so a browse transaction doesn’t have to wait for the records in a browse operation.
However, to determine how many buffers you’d want to read ahead, you’d have to know the applications running in the system. The first problem is determining how many I/O or READNEXT operations are normally requested for a sequential browse. Application programmers probably don’t know the answer to that question, either. So, you may decide to add 10 buffers above the ones allocated to the strings and the extra one for splits.
When the program begins its first browse operation, CICS assigns all 10 of the available extra buffers to it. The I/O would then read 11 CIs into the allocated buffers: the one normally assigned to the string plus the 10 additional ones. Reading 11 CIs elongates the response time, because the application program cannot begin to process the data until all 11 buffers are read and the I/O operation completes. What happens if the application program ends the browse after the second or third buffer is processed? The remaining buffers go unused, and you’ve added overhead to the transaction by reading data it never touched.
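The waste in this example is simple arithmetic, and it's worth making explicit. The sketch below models it; the function name is our own, and the "one CI per buffer" assumption mirrors the example in the text.

```python
def read_ahead_waste(extra_buffers, cis_processed):
    """Model the read-ahead example above.

    One CI is read into the string's own buffer, plus one CI into
    each extra read-ahead buffer, all in a single I/O operation.
    Returns (CIs read, CIs read but never processed).
    """
    cis_read = 1 + extra_buffers               # string buffer + read-ahead
    wasted = max(cis_read - cis_processed, 0)  # browse may end early
    return cis_read, wasted

# Ten extra buffers defined, browse ends after the third CI:
# 11 CIs were read and 8 of them were read for nothing.
print(read_ahead_waste(10, 3))
```

The transaction paid the elongated I/O for all 11 CIs up front, so the 8 unused CIs are pure overhead.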
Also, what happens to a second browse request that arrives while the first browse is still active? The second request gets one buffer, the one assigned to its string. Another important question is how much data the transaction should process. The answer: as much as it can display on one screen. Most browse transactions can display only one screen of information, around 16 to 18 lines of detail. So why read more than can be displayed? Reading more than the capacity of the display means the program has to store the data (e.g., in Temporary Storage) and re-access it when needed, adding to the overhead. In addition, a transaction that finds the desired record in the buffer tends to monopolize the system, affecting other transactions running concurrently. Therefore, adding more buffers to benefit one transaction may have a negative effect on overall system response and may waste I/O operations. A final note concerns Web programming under CICS/TS. You could send the entire output from the browse to a PC and use the scroll bar to view the data without having to return to the CICS system for more. However, this requires larger amounts of virtual storage to build the message, and you’d increase the response time by the time needed to transmit the data to the PC over the Web.
A better alternative to adding buffers for heavily browsed files is to ensure the proper CISZ. The idea is to determine the average number of browse requests made to the file. This information isn’t readily available, but the CICS file statistics can be used to approximate it: combine the number of EXCPs with the number of I/O requests for the file (Figure 5). You can then compute a ratio of the average number of requests to the number of records that fit into a CI. The objective is to get this ratio less than or equal to one.
Once this figure is known, try to make the CISZ big enough that the number of records to be read fits into one CI. For example, suppose you determine that the average number of browse requests is 15. The objective would be to make the CISZ large enough to accommodate 15 or more records. If the maximum CISZ cannot accommodate that many records, or produces a bad CISZ that results in poor track utilization, then select a CISZ that reduces the number of I/O operations required to process the browse request. Naturally, a browse could start in the middle of a CI and require an additional I/O to complete. Even so, you’d limit the number of physical reads required to service the browse request to a few external operations. Larger data CIs also provide better free space alternatives.
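The CISZ selection described above can be sketched as a small calculation. This is an illustration only: the candidate list is a subset of valid VSAM data CI sizes, the 10-byte allowance is a rough stand-in for the CIDF/RDF control information in each data CI (the exact overhead depends on record format), and the function names are our own.

```python
# Subset of valid VSAM data CI sizes, for illustration.
CANDIDATE_CISZ = [2048, 4096, 6144, 8192, 10240, 12288, 16384, 32768]

def records_per_ci(cisz, avg_record_len):
    """Approximate records per data CI, reserving ~10 bytes for
    CIDF/RDF control information (a simplification)."""
    return (cisz - 10) // avg_record_len

def smallest_cisz_for(avg_browse_requests, avg_record_len):
    """Smallest candidate CISZ whose ratio
    avg_browse_requests / records_per_ci is <= 1."""
    for cisz in CANDIDATE_CISZ:
        if records_per_ci(cisz, avg_record_len) >= avg_browse_requests:
            return cisz
    return None  # no single CI fits the browse; just minimize I/Os

# The text's example: 15 browse requests against 500-byte records.
# 4096 holds only 8 records; 8192 holds 16, so 8192 satisfies the ratio.
print(smallest_cisz_for(15, 500))
```

A track-utilization check (how many CIs of that size fit per track on the device) would still be needed before settling on the result.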
What about having extra buffers to improve CA split processing? The first problem is that if there’s any browse activity, there’s no way to predict which operation occurs first and acquires the extra buffers. Moreover, the handling of CA splits under NSR ties up the TCB on which the split occurs for the duration of split processing, affecting the response time of all transactions using that TCB. If the SUBTSKS parameter hasn’t been specified, the QR TCB is locked out for the duration of the split; if it has, the CO TCB is locked up instead. Activate the CO TCB in CICS systems that have NSR files on which CA splits can occur. Unfortunately, the CO TCB is recommended only for multi-processing systems that have more than one CPU available for processing.
So, again, why is this file in NSR? NSR doesn’t provide a good look-aside hit ratio when compared to LSR and, in the case of splits, can lock out the TCB processing the split. LSR uses a VSAM exit to process splits and doesn’t tie up the processing TCB. In addition, adding buffers can actually have a negative effect if the read-ahead buffers are never processed because the operation ends early, resulting in slower processing and wasted resources.
Any file that has Share Options 4 specified is suited for NSR. Share Options is usually an integrity issue. Selecting Share Options 4 with the proper ENQ/DEQ programming allows the file to be shared between more than one address space; for example, batch and online. However, Share Options 4 has a negative effect on performance because it can nullify look-aside processing. If file sharing is required, then RLS and Transactional VSAM (TVS) should be considered. This type of file should not be in LSR because it has a negative effect on the buffers and the hit ratio. Another type of file that could be in NSR is one processed by a program that doesn’t follow Command Level guidelines.
Consider, for example, a program that issues a read for update in the middle of a READNEXT sequence. If the file is in LSR, the transaction will hang. If it’s in NSR, it will work as long as an additional string is available, because the read for update requires a new string to operate. However, you wind up re-reading the CI into virtual storage and holding it twice in memory. This type of processing can lead to a hung transaction even in NSR if strings run out, so the systems programmer over-allocates strings to the file to reduce string lockouts. That wastes resources, since each string requires the assignment of a data and an index buffer.
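The cost of over-allocating strings follows from the per-string buffer assignment just described. The sketch below models the usual NSR minimums (one data and one index buffer per string, plus one extra data buffer reserved for split processing); the function name is ours, and actual defaults should be confirmed against the VSAM documentation for your release.

```python
def nsr_minimum_buffers(strno):
    """Minimum NSR buffer allocation for a KSDS, under the usual rule:
    each string gets one data buffer and one index buffer, and one
    additional data buffer is reserved for CI/CA split processing.
    Returns (BUFND, BUFNI)."""
    bufnd = strno + 1   # one data buffer per string + one for splits
    bufni = strno       # one index buffer per string
    return bufnd, bufni

# Raising STRNO from 2 to 5 to dodge string lockouts costs
# three more data buffers and three more index buffers,
# whether or not the extra strings are ever used.
print(nsr_minimum_buffers(2), nsr_minimum_buffers(5))
```

Every unused string therefore ties up two buffers' worth of virtual storage for nothing.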
NSR was made for batch processing. Its use should be limited in an online CICS environment to handle the two cases mentioned above. Otherwise, the file should be allocated into LSR.
There’s one final problem that should be reviewed when placing files into NSR. Transaction Isolation is one of the new storage protection features available in CICS since CICS/ESA 4.1. This feature is not widely used, but represents an important step toward controlling errant storage violations that can affect system integrity. We recommend its use in spite of the overhead. Transaction Isolation doesn’t support NSR files, so if you plan to use it, then the files have to be defined into LSR. Review all your files and determine the need for any NSR file. If there’s no justifiable reason, then convert the file to LSR and enjoy better performance.