CICS / WebSphere

Tuning Temporary Storage in CICS/TS

4 Pages

Tuning Temporary Storage (TS) has become important in CICS systems; it requires attention in regions where it’s heavily used. TS comes in two flavors:

  • TS MAIN, where the information is maintained in the Extended CICS Dynamic Storage Area (ECDSA)
  • TS AUX, where the information is maintained in a VSAM file called DFHTEMP.
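The choice between the two flavors is made on each write. Here's a minimal CICS command-level COBOL sketch (the queue name, data area, and length field are illustrative, not from the article); note that AUXILIARY is the default if neither option is coded:

```cobol
      * Write an item to a TS queue held in virtual storage (ECDSA).
           EXEC CICS WRITEQ TS
                QUEUE('MYQUEUE1')
                FROM(WS-RECORD)
                LENGTH(WS-RECLEN)
                MAIN
           END-EXEC.

      * The same write directed to auxiliary storage (DFHTEMP).
           EXEC CICS WRITEQ TS
                QUEUE('MYQUEUE1')
                FROM(WS-RECORD)
                LENGTH(WS-RECLEN)
                AUXILIARY
           END-EXEC.
```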

More applications are coming to depend on TS, and many now use it to hold long-lived information that must survive CICS restarts. Somewhere along the line, the original concept was lost: such data is no longer temporary at all and has become "permanent storage" in nature. Tuning TS MAIN is a different, less complex process than tuning TS AUX. A frequent question is: "With all the storage available above the line, why not convert all TS AUX to TS MAIN?" We'll address that question and offer other tuning recommendations.

TS MAIN lets you store the necessary queue information in virtual storage. Access to the queues is fast; all that's required is a table look-up using a Digital Tree Node (DTN) to locate the data in the ECDSA. So the first consideration when using TS MAIN is to ensure that there's sufficient virtual storage available above the line so you don't go Short on Storage (SOS) in the CICS region. A System Initialization Table (SIT) parameter called EDSALIM controls the amount of Extended DSA (EDSA) storage available above the line. Code an EDSALIM value sufficiently large to accommodate peak virtual storage use plus a growth factor. In Figure 1, EDSALIM is about 409MB and 289MB remains free.
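As a sketch, a SIT override like the following would apply that guidance to the region in Figure 1; the 450M value is an illustrative assumption (roughly the 409MB peak plus headroom), not a recommendation from the article:

```
* SIT override: cap EDSA storage above the line.
* Value chosen as observed peak (~409MB) plus a growth factor.
EDSALIM=450M
```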

[Figure 1]

Associated with the SIT parameter EDSALIM is the Job Control Language (JCL) parameter REGION, which controls the amount of virtual storage available to the job. Because TS MAIN comes out of storage above the line, we'll limit this review to storage above the line. Try to code the REGION size as 0M to provide the maximum address space possible of 2GB. In Figure 1, the region size (after allocating the operating system common areas) for this address space is 1.956GB, and 1.515GB remains available after CICS is operational. That total is important because it's the space left in the region for raising EDSALIM or increasing VSAM buffers.
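In JCL terms, the EXEC statement for the CICS job or started task would look something like this (the step name is illustrative; DFHSIP is the standard CICS initialization program):

```jcl
//* REGION=0M requests the largest region the installation allows,
//* leaving headroom to raise EDSALIM or add VSAM buffers later.
//CICS     EXEC PGM=DFHSIP,REGION=0M
```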

Coding a region size of 0M for CICS may not be easy. You may encounter some resistance from z/OS programmers who may have seen this parameter used for programs that allocate as much virtual storage as is available from the REGION parameter. Examples include compilers, assemblers and sort programs. CICS isn’t the type of program that dynamically expands to use all the virtual storage allocated; it allocates only what it needs to execute. The capacity to manually expand the EDSALIM size is a major reason to code a large region size such as 0M; it will help support unexpected virtual storage demands.

Your installation may have an IEFUSI SMF exit active that controls the size of allocated virtual storage. So, you may not get the amount of virtual storage requested on the REGION parameter because the IEFUSI exit may be in control of the virtual storage allocation in your system. Talk to your z/OS systems programmer about this exit.

The trade-off for using TS MAIN is that you can access the data quickly, without an I/O operation, but you may expose the system to SOS conditions or page faults because of the additional real storage needed to back the virtual storage TS MAIN uses. The SOS exposure especially applies in installations with many "orphaned" TS queues: queues created but never deleted after they've served their purpose (see Figure 2). This display identifies queues that have been in the system but idle for some time. Although you can also run out of TS space on DFHTEMP, an SOS condition probably has a greater negative effect on CICS because you may not be able to take corrective action (e.g., delete tasks and queues), and the only alternative is recycling the CICS region. In short, an SOS condition is more paralyzing to CICS than a full DFHTEMP. Installations that have been bitten by orphaned queues usually run some form of monitor program to delete old, unreferenced queues.
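Such a monitor could be sketched with the CICS SPI browse commands; LASTUSEDINT returns the number of seconds since a queue was last referenced. This is a hedged illustration (the data names and the one-hour threshold are assumptions), and a production version would need SPI authority and would exempt CICS internal and recoverable queues:

```cobol
      * Browse all TS queues; delete any idle longer than an hour.
           EXEC CICS INQUIRE TSQUEUE START END-EXEC.
           PERFORM UNTIL WS-RESP NOT = DFHRESP(NORMAL)
               EXEC CICS INQUIRE TSQUEUE(WS-QNAME) NEXT
                    LASTUSEDINT(WS-IDLE-SECS)
                    RESP(WS-RESP)
               END-EXEC
               IF WS-RESP = DFHRESP(NORMAL)
                  AND WS-IDLE-SECS > 3600
      *            QNAME accepts the 16-byte name INQUIRE returns.
                   EXEC CICS DELETEQ TS QNAME(WS-QNAME) END-EXEC
               END-IF
           END-PERFORM.
           EXEC CICS INQUIRE TSQUEUE END END-EXEC.
```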

[Figure 2]
