CICS File Control Enhancements

The latest release of IBM’s CICS transaction processing software, CICS Transaction Server (TS) for z/OS V3.2, contains many enhancements and is an exciting step in the development of CICS. This article describes CICS file control component enhancements, including some that were made in the base level of CICS TS V3.2 and others that were implemented via the IBM service channel after CICS TS V3.2 became available.

Shared Data Tables Larger Than 2GB

CICS has supported data tables since CICS/MVS V2 in the 1980s. Initially, they weren’t shared between CICS systems, but existed as in-memory versions of keyed files. Basic data table support came in two varieties: user-maintained data tables and CICS-maintained data tables.

User-maintained data tables, intended as scratch pads for data that was expected to be needed only temporarily, provided fast read and write access. Although they existed only in memory, they could initially be loaded from an underlying VSAM file. After loading, the data table and the file were decoupled, so any subsequent changes to one weren’t reflected in the other.
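To an application program, a user-maintained data table looks like any other keyed file and is accessed through the standard file control API. The fragment below is a minimal, hypothetical sketch in CICS C (the file name ACCTTBL and the record layout are invented for illustration, and the source would first be processed by the CICS translator before compilation): it writes a scratch-pad record to the table and reads it back, with neither request touching any VSAM data set.

   /* Hypothetical CICS C fragment; the EXEC CICS commands are
      converted to run-time calls by the CICS translator.       */
   #include <string.h>

   struct acct_rec {
       char key[8];          /* record key                      */
       char balance[12];     /* illustrative data field         */
   };

   void scratch_pad(void)
   {
       struct acct_rec rec;
       short len = sizeof(rec);
       long  resp;

       memcpy(rec.key,     "00000042",     8);
       memcpy(rec.balance, "000001000.00", 12);

       /* Write to the user-maintained table ACCTTBL; the VSAM
          file the table may have been loaded from is untouched. */
       EXEC CICS WRITE FILE("ACCTTBL") FROM(&rec)
                 LENGTH(len) RIDFLD(rec.key) RESP(resp);

       /* Read the record back; the request is satisfied
          entirely from memory.                                 */
       EXEC CICS READ FILE("ACCTTBL") INTO(&rec)
                 LENGTH(&len) RIDFLD(rec.key) RESP(resp);
   }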

CICS-maintained data tables worked similarly, but remained permanently associated with their underlying VSAM file. CICS automatically synchronized them with any changes made to the associated file; their primary use was to optimize reads and browses of the file’s records, since these requests could be satisfied by retrieving the record data from memory rather than by issuing an I/O request to the underlying data set. This made them highly efficient for read-only operations.
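Because the file control API is the same either way, a program that needs to know whether a file is table-backed can ask CICS at run-time. The hypothetical sketch below (file name ACCTFIL invented for illustration) uses the TABLE option of the INQUIRE FILE command, which returns a CVDA value distinguishing CICS-maintained tables, user-maintained tables, and ordinary files:

   /* Hypothetical CICS C fragment: discover whether file
      ACCTFIL is backed by a data table, and of which kind.  */
   long table_type;
   long resp;

   EXEC CICS INQUIRE FILE("ACCTFIL") TABLE(&table_type) RESP(resp);

   if (resp == DFHRESP(NORMAL)) {
       if (table_type == DFHVALUE(CICSTABLE)) {
           /* CICS-maintained: reads come from memory, while
              updates are also applied to the VSAM data set. */
       } else if (table_type == DFHVALUE(USERTABLE)) {
           /* User-maintained: a scratch pad, decoupled from
              any source data set after initial loading.     */
       } else if (table_type == DFHVALUE(NOTTABLE)) {
           /* An ordinary file with no data table behind it. */
       }
   }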

CICS/ESA V3.3 extended data table support with the introduction of shared data tables support. This was originally an optional feature, but was incorporated into the base CICS product with CICS/ESA V4.1. Shared data tables replaced the original in-memory data tables. They stored the data component in a data space associated with a CICS system. The corresponding table entry and index components (which CICS used to locate the data in the data space) were stored in the address space of the File-Owning Region (FOR) CICS system that owned the data table.

By exploiting access registers to reference the data space storage, impressive performance improvements could be achieved for read-only requests. Cross-memory shared read access considerably improved read-only requests against shared data table records held in data spaces owned by remote CICS systems in the same Logical Partition (LPAR).

A particular CICS system owns each shared data table; other CICS systems can read a table by means of program calls to cross-memory routines in the owning system. Shared data tables benefit in two ways: it’s more efficient to use z/OS cross-memory services than CICS function shipping to share a data file among multiple CICS systems in a z/OS image, and it’s more efficient to access data from memory than from DASD.

Exploiting cross-memory techniques like this avoids the overhead of a full function-shipped request from one CICS system to another. For requests that update records (such as rewrites or deletes), however, the request must still be function shipped to the remote CICS system, and in a transactional environment the associated changes must also be made to the underlying file.
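From the application’s point of view this routing is invisible: the same commands are used whether a request runs cross-memory or is function shipped. In the hypothetical sketch below, ACCTFIL is assumed to be defined to the local region as a remote file whose owning FOR runs in the same LPAR. The read-only request is eligible for cross-memory shared data table access, while the READ UPDATE/REWRITE pair must be function shipped:

   /* Hypothetical CICS C fragment; the record layout and
      file name are invented for illustration.              */
   #include <string.h>

   struct acct_rec {
       char key[8];
       char balance[12];
   };

   void update_balance(void)
   {
       struct acct_rec rec;
       short len = sizeof(rec);
       long  resp;
       char  key[8];

       memcpy(key, "00000042", 8);

       /* Read-only request: can be satisfied by shared data
          table cross-memory services, with no function ship. */
       EXEC CICS READ FILE("ACCTFIL") INTO(&rec)
                 LENGTH(&len) RIDFLD(key) RESP(resp);

       /* Update path: READ UPDATE and REWRITE are function
          shipped to the file-owning region, which also makes
          the change to the underlying VSAM data set.         */
       EXEC CICS READ FILE("ACCTFIL") UPDATE INTO(&rec)
                 LENGTH(&len) RIDFLD(key) RESP(resp);

       if (resp == DFHRESP(NORMAL)) {
           memcpy(rec.balance, "000002000.00", 12);
           EXEC CICS REWRITE FILE("ACCTFIL") FROM(&rec)
                     LENGTH(len) RESP(resp);
       }
   }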

As with the original basic data tables implementation, shared data tables provide the user-maintained and CICS-maintained models for the two varieties of data table. When first implemented, shared data tables were limited to 2GB per CICS system, because the real storage available per z/OS image at the time was limited to 2GB: all the shared data tables a CICS region owned were held in a single data space, so their combined size couldn’t exceed 2GB. Subsequent enhancements to the z/Architecture have removed this limitation, and CICS TS V3.2 now exploits its removal.
