CICS / WebSphere

The correct allocation of strings to files is an important tuning opportunity. Under-allocating strings can cause short-on-string conditions that lengthen response times and can delay transactions even when other resources, such as CPU and virtual or real storage, are available. Over-allocating strings wastes storage for Non-Shared Resources (NSR) files because each string requires a data buffer and, for a Key-Sequenced Data Set (KSDS), an index buffer as well. The unnecessary strings consume storage that could otherwise improve the performance of another file or a Local Shared Resources (LSR) pool. This article discusses string assignments to individual files and the LSR pool.

First, let’s define a string and consider its purpose. A string is simply an access path to a file; the number of strings assigned determines how many concurrent accesses the file allows. A string lets the access method pass the I/O request down to the supervisor level. If the disk is available, the I/O request proceeds. If the disk is busy, the operating system queues the request on a device queue. When the previous I/O completes, the next request on the queue, if any, is selected for execution. This queue sits at a low level in the supervisor.

If insufficient strings are available, CICS queues the request at a much higher level: the address space level. Because CICS queues string waits in the address space rather than in the operating system, another request for the device from another address space can be placed on the device queue ahead of the waiting CICS request; the supervisor isn’t aware of requests still queued at the address space level, so a lower-priority request can be dispatched first. Specifying I/O priority queuing in the z/OS Workload Manager (WLM) can reduce the interference caused by lower-priority address spaces. In short, strings are what allow CICS file requests to wait for service in queues at the supervisor level.

For many reasons, strings are often over-allocated. One contributing factor is a lack of understanding of how strings work. Whenever a user application request occurs, a string is assigned. How long the string is held depends on how long it takes to complete the request. For example, imagine a device that reads a record in approximately 6 ms on average. If the file has three levels of index, a request requires four reads (three for the index and one for the data) to complete. At 6 ms per read, the request takes 24 ms, and the string is held for that time. Other requests, such as a read for update or a browse, hold the string beyond the end of the I/O. In these cases, the string is held until the rewrite (or delete or unlock) occurs or the ENDBR is issued. If strings are held for long periods, you can reach a point where you run out of strings and transactions must wait on strings.
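The arithmetic above can be sketched in a few lines. The 6 ms read time and three index levels come from the example; the request rate, and the use of Little’s law to estimate concurrent string demand, are illustrative assumptions of mine, not CICS formulas.

```python
# Sketch of string hold time and average concurrent string demand.
# Assumed figures: 6 ms per physical read, 3 index levels, 50 requests/sec.

def string_hold_time_ms(index_levels: int, read_time_ms: float) -> float:
    """One read per index level plus one data read, all physical I/O."""
    return (index_levels + 1) * read_time_ms

def avg_strings_in_use(requests_per_sec: float, hold_time_ms: float) -> float:
    """Little's law: average concurrency = arrival rate x time held."""
    return requests_per_sec * (hold_time_ms / 1000.0)

hold = string_hold_time_ms(3, 6.0)     # 24.0 ms per request, as in the text
print(avg_strings_in_use(50.0, hold))  # roughly 1.2 strings busy on average
```

Note that this models only short-held strings; a read for update or browse holds its string far longer, which inflates the demand accordingly.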

There are three possible solutions to wait on strings:

The obvious one most people choose is simply increasing the number of strings. For NSR files, this also increases the number of assigned buffers, because each string requires a data buffer and, if applicable, an index buffer. This can significantly increase virtual storage use.

You can also do nothing. If the number of string waits is relatively small (e.g., less than 1 percent of requests), you could tolerate the small overhead. However, with the increased real storage available on today’s systems, you may not want to tolerate any string waits.

The best solution is to reduce the time each string is held. You can accomplish this by ensuring the file has proper buffering assigned and is meeting the installation’s look-aside hit ratio targets.

In the previous example, where we had to read four times for a total of 24 ms, the request could complete without any physical reads. The string would then be held only for the instruction time needed to locate the four records in the buffers, a much lower value than 24 ms. The string would be released quickly and become available for another request. Even without a 100 percent look-aside hit ratio, you could still improve the total time by finding one to three of the records in the buffers. The correct tuning recommendation is to ensure the file has proper buffering and is achieving the look-aside hit ratio targets before increasing the number of strings.
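The effect of the look-aside hit ratio on hold time can be made concrete. The 6 ms physical read and four reads per request come from the article’s example; the 0.05 ms buffer-lookup cost is an assumed, illustrative figure for the in-storage path.

```python
# Expected string hold time as a function of the look-aside hit ratio.
# io_ms comes from the article's example; lookaside_ms is an assumed cost.

def expected_hold_ms(reads: int, hit_ratio: float,
                     io_ms: float = 6.0, lookaside_ms: float = 0.05) -> float:
    """Each read is satisfied from the buffer (hit) or goes to disk (miss)."""
    return reads * (hit_ratio * lookaside_ms + (1.0 - hit_ratio) * io_ms)

for ratio in (0.0, 0.75, 1.0):
    print(f"hit ratio {ratio:.0%}: string held ~{expected_hold_ms(4, ratio):.2f} ms")
```

Even a 75 percent hit ratio cuts the hold time from 24 ms to just over 6 ms in this model, which is exactly why buffering is tuned before strings are added.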

There are some instances where an NSR file has an unusually high number of strings allocated. You can observe this condition when reviewing files that have many strings assigned yet show less I/O activity than files with fewer strings. Often, such a file has many strings allocated because its transactions require more than one string per operation. An example is a file against which an application issues a read for update in the middle of a browse without first ending the browse. In this case, one string remains held for the browse and a new string is assigned for the read for update. Defined with too few strings, this type of file is prone to string waits that may result in lockouts. Often, the system programmer doesn’t know how many concurrent requests could occur, so the number of strings assigned to the file is over-allocated to compensate for string waits or to avoid lockout conditions. This style of programming also forces the file to be defined with NSR, because the same sequence of requests would result in errors or lockouts under LSR. The correct solution is to end the browse before issuing the read for update and move the file to LSR, where it gains better look-aside capability.
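The string-demand penalty of the mid-browse read for update is simple to model. The transaction count here is an assumed figure; the point is only that the pattern doubles peak string demand per transaction.

```python
# Peak string demand for a file whose transactions browse and update.
# A read for update issued mid-browse holds a second string per transaction;
# ending the browse first lets one string serve both operations in turn.

def peak_strings(concurrent_txns: int, browse_ended_first: bool) -> int:
    strings_per_txn = 1 if browse_ended_first else 2
    return concurrent_txns * strings_per_txn

print(peak_strings(8, browse_ended_first=False))  # 16 strings held at peak
print(peak_strings(8, browse_ended_first=True))   # 8 strings held at peak
```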

The same look-aside hit ratio concept applies to string allocations for the LSR pool. Here, you want to achieve a high hit ratio on both the index (e.g., 95 percent or greater) and data (e.g., 80 percent or greater) buffers to reduce the time each string is busy with a request. If, after you’ve properly tuned the LSR buffers, you still receive string waits, then increase the number of strings in the pool. You may, however, receive string waits at the file level. This occurs when a file has more strings allocated to it than the entire pool has assigned, or when the sum of the strings required by the individual files exceeds the total number of strings in the pool. The first condition is unusual, and the solution is to increase the number of pool strings to match or exceed the number assigned to the file. The second condition is normal: the sum of the strings requested for all the files in the pool exceeds the total assigned to the pool. Pool strings are limited to 255, so an LSR pool supporting many VSAM files will probably have fewer strings than the sum of the strings assigned to its files. The same conditions can also exist for the buffers allocated.
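Checking a pool against those targets is a matter of dividing look-aside hits by total read requests, much as CICS LSR pool statistics allow. The counts below are assumed, illustrative figures.

```python
# Check LSR buffer hit ratios against the article's targets:
# index >= 95 percent, data >= 80 percent. Counts are assumed figures.

def hit_ratio(lookasides: int, total_reads: int) -> float:
    """Fraction of read requests satisfied without a physical I/O."""
    return lookasides / total_reads if total_reads else 0.0

index_ratio = hit_ratio(lookasides=97_000, total_reads=100_000)  # 0.97
data_ratio = hit_ratio(lookasides=70_000, total_reads=100_000)   # 0.70

print(f"index: {index_ratio:.0%} (target >= 95%)")
print(f"data:  {data_ratio:.0%} (target >= 80%) -> add data buffers first")
```

In this example the index buffers meet the target but the data buffers fall short, so the correct move is more data buffers, not more strings.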

How well the assigned LSR strings are working is measured by comparing the peak number of strings used with the total number assigned to the pool, recorded as a percentage (peak strings divided by pool strings, times 100). The recommended objective is for the peak to fall in the 40 to 60 percent range of the strings assigned to the pool; the remaining headroom covers unusual spikes in use or future growth. For example, if the peak number of strings used is 10, the pool should be defined with roughly 17 to 25 strings. Excessive string allocation in an LSR pool leaves allocated resources unused, including virtual storage that could instead increase the number of data or index buffers for an improved look-aside hit ratio.
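That sizing rule translates directly into a range calculation. This is only a sketch of the article’s 40 to 60 percent guideline, with the 255-string pool limit applied as a cap.

```python
# Size an LSR pool's strings so the observed peak lands between
# low_pct and high_pct of the strings defined, capped at 255.
# Integer arithmetic avoids floating-point rounding surprises.

def pool_string_range(peak_used: int, low_pct: int = 40,
                      high_pct: int = 60, cap: int = 255):
    lo = -(-peak_used * 100 // high_pct)  # ceiling: peak is high_pct of lo
    hi = peak_used * 100 // low_pct       # floor: peak is low_pct of hi
    return min(lo, cap), min(hi, cap)

print(pool_string_range(10))   # (17, 25) -- the article's example peak
print(pool_string_range(200))  # (255, 255) -- demand beyond the pool limit
```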

The cost of defining unused strings in LSR isn’t as high as with NSR because LSR strings don’t have data or index buffers directly associated with them, so the virtual storage cost is lower. However, each string still generates control blocks, such as a Request Parameter List (RPL) and a Place-Holder (PLH), that won’t be used. In addition, coding the pool key length at 255 bytes generates further unused virtual storage for each unused string. That virtual storage would be better spent on additional buffers that could help increase the look-aside ratio.
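The waste is easy to tally once sizes are assigned. The control-block and key-area byte counts below are assumed, illustrative values, not actual CICS or VSAM figures; only the 255-byte key length comes from the text.

```python
# Rough virtual storage tied up by unused LSR strings.
# RPL_BYTES and PLH_BYTES are assumed sizes for illustration only;
# KEY_AREA_BYTES reflects a pool key length coded at the 255-byte maximum.

RPL_BYTES = 100
PLH_BYTES = 300
KEY_AREA_BYTES = 255

def wasted_bytes(defined_strings: int, peak_used: int) -> int:
    unused = max(0, defined_strings - peak_used)
    return unused * (RPL_BYTES + PLH_BYTES + KEY_AREA_BYTES)

print(wasted_bytes(defined_strings=100, peak_used=20))  # 80 * 655 = 52400
```

Even with modest per-string sizes, 80 idle strings tie up tens of kilobytes that buffers could use instead.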

Tuning Temporary Storage (TS, DFHTEMP) and Transient Data (TD, DFHINTRA) follows the same logic. You need to reduce the number of I/O operations these data sets perform, which you can do by increasing the number of buffers allocated. However, the more recoverable resources defined to TS and TD, the more I/O is required to satisfy recovery requirements. Tuning strings for both these data sets follows the recommendations previously discussed.