Dec 27 ’05

The CICS Data Sharing Servers

by Editor in z/Journal

CICS has been continually enhanced since its inception more than 35 years ago. Support for many new features has been added, including Web access, SOAP support, Java applications, etc. 

With CICS Transaction Server, support was added for temporary storage data sharing, a new type of data sharing predicated upon a new form of server provided with CICS. The shared temporary storage server runs in a separate address space from CICS. It provides access to a named pool of shared temporary storage queues, whose data is held in a coupling facility. For each z/OS image in the sysplex environment, one shared temporary storage server is required for each pool defined in the coupling facility accessed from that z/OS image. Pool access occurs via cross-memory calls from CICS to the shared temporary storage server for the named pool. 

CICS Transaction Server V1.3 provides support for additional forms of data sharing servers: a coupling facility data table server and a named counter server, in addition to the shared temporary storage server. 

This article describes the use of data sharing servers in CICS, including issues relating to the servers, the programming options available to exploit them, and their diagnostic capabilities. 

An Overview of CICS Temporary Storage  

The temporary storage facility lets application programs hold data created by one transaction for later use by the same transaction or a different transaction. Temporary storage is a scratchpad facility that user applications and CICS itself can use to hold state data (e.g., the FROM data associated with interval control EXEC CICS START requests). The data is saved in temporary storage queues identified by one- to 16-character symbolic names. Before the introduction of temporary storage data sharing, temporary storage data could traditionally be held in two places: 

• Main temporary storage, held in memory in the CICS address space 
• Auxiliary temporary storage, held in a DFHTEMP data set owned by the CICS system. 

Both destinations limit the use of the temporary storage facility to one CICS system, making it impossible to share temporary storage queue data between different CICS systems. Main temporary storage resides in memory in a particular CICS system’s address space; auxiliary temporary storage resides in a DFHTEMP data set that a particular CICS system owns. To allow data access by multiple CICS systems, temporary storage requests could traditionally be function shipped from multiple Application Owning Regions (AORs) to dedicated Queue Owning Regions (QORs). 

Temporary Storage Data Sharing  

CICS Transaction Server V1.1 introduced shared temporary storage using a coupling facility. Temporary storage was further enhanced in CICS Transaction Server V1.3 to allow the definition of TSMODELs through CICS RDO and to support queue names of up to 16 characters. 

Temporary storage data sharing lets CICS applications access non-recoverable temporary storage queues from multiple CICS systems running on any z/OS image in a parallel sysplex environment. CICS stores a set of temporary storage queues to be shared across the parallel sysplex in a TS pool. Each TS pool corresponds to a coupling facility DFHXQLS list structure defined in the Coupling Facility Resource Manager (CFRM) policy. 

Having shared access to temporary storage data in this way avoids the need to function ship requests from multiple AORs to a QOR. Each AOR can reference shared temporary storage data via a temporary storage data sharing server in the same z/OS image. Shared temporary storage also helps remove potential temporary storage affinities when using local queues specific to particular CICS systems. 

Access to a pool by CICS transactions running in an AOR is via a TS data sharing server that supports the named pool. Each z/OS image in the parallel sysplex requires one temporary storage data sharing server for each pool defined in a coupling facility accessible from that z/OS image. 

Figure 1 shows a parallel sysplex environment with several CICS Terminal Owning Regions (TORs) and AORs, communicating with temporary storage data sharing servers in their z/OS image and accessing shared temporary storage data held on the coupling facility. 

The Authorized Cross-Memory (AXM) Environment  

The CICS data sharing servers use a common run-time environment: the AXM server environment, which provides the services the various servers exploit. If CICS data sharing servers are to be used, AXM system services must be started first. AXM system services are initialized via a z/OS subsystem definition named AXM. The subsystem interface isn't activated or used; the definition enables scheduling of AXM initialization by the z/OS master scheduler, and so establishes the AXM cross-memory connections for a given z/OS image.

The AXM subsystem may be defined either statically or dynamically. For a static definition, add an entry to the IEFSSN member of SYS1.PARMLIB as follows; this makes AXM services available when z/OS is IPLed: 
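The original listing is not reproduced here; a representative IEFSSN entry, in keyword format, might look like the following sketch. AXMSI is the AXM subsystem initialization module described later in this article:

```
SUBSYS SUBNAME(AXM)        /* Define the AXM subsystem            */
       INITRTN(AXMSI)      /* AXM subsystem initialization module */
```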


You can also initialize AXM system services with a dynamic subsystem definition: 
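The dynamic equivalent is a SETSSI operator command of the following form:

```
SETSSI ADD,SUBNAME=AXM,INITRTN=AXMSI
```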


After the AXM subsystem has started, z/OS ignores further attempts to do so. 

AXM modules are differentiated from traditional CICS modules by the three-character prefix AXM, as opposed to the DFH prefix assigned to other CICS modules. Likewise, AXM diagnostic and informational messages are prefixed by the letters AXM. As with DFH-prefixed CICS messages, this is followed by a two-letter identifier denoting the particular functional area of the AXM subsystem that issued the message. 

Along with various other CICS-supplied modules, AXM modules AXMSC and AXMSI are supplied in SDFHLINK and reside in an APF-authorized library in the z/OS linklist. AXMSC provides AXM server connection routines and AXMSI provides the AXM subsystem initialization function. 

Temporary Storage Pools  

Using temporary storage data sharing means replacing main or auxiliary storage destinations for temporary storage queues with one or more TS pools, where the scope and function of each TS pool is similar to that of a traditional QOR. Each TS pool is defined using z/OS Cross-System Extended Services (XES) as a keyed list structure in a coupling facility. TS pools are defined using CFRM policy statements. Using the CFRM policy definition utility, IXCMIAPU, specify the size of the list structures required, together with their placement in a coupling facility (see a sample definition statement in Figure 2). 
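Figure 2 is not reproduced here; a representative IXCMIAPU job might look like the following sketch, where the policy name (POL1), pool name (PRODTSQ1), coupling facility names and sizes are illustrative assumptions:

```jcl
//DEFCFRM  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(POL1) REPLACE(YES)
    STRUCTURE NAME(DFHXQLS_PRODTSQ1)
      INITSIZE(10240)
      SIZE(20480)
      PREFLIST(CF01,CF02)
/*
```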

The name of the list structure for a TS pool is created by prefixing the TS pool name with DFHXQLS_. When a list structure is allocated, it can have an initial size and a maximum size, as specified in the CFRM policy. The CICS System Definition Guide contains information on calculating suitable structure sizes, based upon the expected storage requirements. IBM recommends using the CFSIZER tool for structure sizing calculations. Learn more at: www.ibm.com/servers/eserver/zseries/cfsizer/index.html. 

Structure sizes are rounded up to the next multiple of 256K at allocation time. When defined, activate the CFRM policy using the z/OS operator command: 
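The activation command takes the following form (the policy name POL1 is an illustrative assumption):

```
SETXCF START,POLICY,TYPE=CFRM,POLNAME=POL1
```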


If sufficient space is available in the coupling facility, a list structure can be dynamically expanded from its initial size toward its maximum size. It can also be contracted to free up coupling facility space for other purposes. If the initial structure allocation becomes full, the structure doesn't automatically expand, even if its allocated size is less than the specified maximum. To expand it (up to its maximum size) when the allocation becomes full, use the following SETXCF command: 
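A sketch of the command, with an assumed pool name of PRODTSQ1 and a new size in kilobytes:

```
SETXCF START,ALTER,STRNAME=DFHXQLS_PRODTSQ1,SIZE=15360
```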



The CICS System Definition Guide describes the shared temporary storage server parameters you can use to provide early warning about when a structure is nearing capacity. 

Resource Definition for Temporary Storage Data Sharing  

Having defined a TS pool via IXCMIAPU, you need to define a TSMODEL using CEDA to use shared temporary storage. Figure 3 shows a TSMODEL. The PRefix attribute specifies the character string used for matching queue names. It can contain wildcard characters; it isn't limited to the leading characters of a queue name. It's also possible to specify a generic name by using the plus (+) sign in a character string to indicate that any valid character can occupy that position. PRefix is the RDO equivalent of the DATAID parameter provided with the TST. 
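A TSMODEL definition along the lines of Figure 3 might look like the following sketch; the model, group, prefix, and pool names are illustrative assumptions:

```
CEDA DEFINE TSMODEL(SHARETSQ) GROUP(TSQGRP)
     PREFIX(SHR+++++)
     POOLNAME(PRODTSQ1)
```

Any queue whose name matches the prefix pattern is then directed to the named TS pool rather than to main or auxiliary storage.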

The location of a temporary storage queue can be specified as auxiliary or main (for traditional temporary storage queues). This attribute is ignored if a shared POolname attribute is also provided: any location given on an Application Program Interface (API) command is ignored, and the one given in the TSMODEL is used. Also, the RECovery and POolname attributes are mutually exclusive, as recoverable shared temporary storage queues aren't supported. 

Although shared temporary storage isn’t recoverable, shared data will persist across CICS restarts (including initial starts). This differs from main and auxiliary temporary storage data (which doesn’t survive an initial restart of CICS). The shared data resides on the coupling facility, independent of a given CICS system. As such, there are housekeeping implications in tidying up unwanted shared temporary storage queues when appropriate. 

Defining a Shared Temporary Storage Server Region  

A shared temporary storage pool consists of an XES list structure, which is accessed through a cross-memory queue server region. A shared temporary storage pool is made available to a z/OS image by starting up a shared temporary storage queue server region for that pool. This invokes the shared temporary storage server region program DFHXQMN, which must reside in an APF-authorized library. A shared temporary storage server region must be activated before the CICS region attempts to use it. 

DFHXQMN requires some initialization parameters, of which the pool name, a SYSPRINT DD statement for the print file, and a SYSIN DD statement for the server parameters are mandatory. Other optional parameters may be specified for debugging and tuning purposes. 

Figure 4 shows a sample start-up job for a shared temporary storage server region, showing some of the parameters you can specify. 
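Figure 4 is not reproduced here; a minimal start-up job might look like the following sketch, where the job and data set names and the pool name PRODTSQ1 are illustrative assumptions:

```jcl
//TSSERVER JOB ...
//TSPOOL   EXEC PGM=DFHXQMN,REGION=40M,TIME=NOLIMIT
//STEPLIB  DD DSN=CICSTS.CICS.SDFHAUTH,DISP=SHR     APF-authorized
//SYSPRINT DD SYSOUT=*                              print file
//SYSIN    DD *
POOLNAME=PRODTSQ1        mandatory: names the TS pool to be served
/*
```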

Descriptions of the more commonly specified parameters and their defaults are: 

More detailed information appears in The CICS System Definition Guide.  


Should you need to obtain trace data for debugging, two parameters can be specified. However, be aware that use of debugging options may significantly impact performance and cause the print file to grow rapidly, using up spool space. The two available trace parameters are:

Additionally, a dump of the coupling facility list structure for the shared temporary storage pool may be obtained. To do this, use the MVS DUMP command: 
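The DUMP command takes a title for the dump, for example:

```
DUMP COMM=(SHARED TS POOL LIST STRUCTURE)
```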


In response, the system prompts with a reply number for the dump options to be specified. Enter the reply: 
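The reply combines the parameters discussed below; a sketch, assuming reply number nn and pool name PRODTSQ1:

```
R nn,STRLIST=(STRNAME=DFHXQLS_PRODTSQ1,ACCESSTIME=NOLIMIT,
(LISTNUM=ALL,ADJUNCT=DIRECTIO,ENTRYDATA=UNSERIALIZE))
```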



The parameter ACCESSTIME=NOLIMIT lets XES override server access time limits, to obtain serialization to take the dump. Without this parameter, no dump is taken if any server is active. The parameters ADJUNCT=DIRECTIO and ENTRYDATA=UNSERIALIZE notify XES not to keep the serialization while dumping adjunct areas and entry data. If servers are active but it’s considered important to obtain a serialized dump, to show the list structure at a moment in time, replace these parameters with ADJUNCT=CAPTURE and ENTRYDATA=SERIALIZE. Note that this will lock out server access until the dump is completed. 

Formatting a Shared Temporary Storage Pool Dump                                                 

The IPCS STRDATA subcommand can be used to format a shared temporary storage pool dump. To display the queue index list, use the STRDATA subcommand as follows: 
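A sketch of the subcommand, assuming the pool name PRODTSQ1 (list number two is the queue index, as explained below):

```
STRDATA DETAIL STRNAME(DFHXQLS_PRODTSQ1) LISTNUM(2) ENTRYPOS(ALL)
```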


The list structure uses the third list (list two) as the queue index. Each entry on this list represents a queue, and the key of each entry is the queue name. Each entry has an adjunct area. Queues are regarded as “small” or “large.” Small applies to queues where all the data fits into 32K of structure space; a large queue is one for which the total size of the data items exceeds 32K, and its data is stored in a separate list in the structure. 

For small queues, the first fullword of the adjunct area holds the queue's total length. The queue records are then shown as one contiguous block of data in the associated entry data. For large queues, the second word of the adjunct area is the number of the corresponding data list for the queue. The data for a large queue can be displayed by converting the data list number to decimal and specifying it on another STRDATA subcommand. If the second word of the adjunct area is X'0000000C' (decimal 12), the command to display the queue data is: 
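Continuing the same assumed pool name, and with X'0000000C' converted to LISTNUM(12):

```
STRDATA DETAIL STRNAME(DFHXQLS_PRODTSQ1) LISTNUM(12) ENTRYPOS(ALL)
```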


In the data list, the key of each entry is the item number, and the data portion contains the item data with a 2-byte length prefix. The rest of any data area is not initialized and may contain residual data up to the next 256-byte boundary.


There are several parameters that can be specified in the start-up JCL for tuning purposes. It’s normal to let them assume their default values as described in The CICS System Definition Guide.

The parameters are:

Nearly all the parameters of the coupling facility structure used by the shared temporary storage server can be altered, and the new values loaded and picked up, without recycling the server region. For example, the MAXSIZE parameter of the structure may be increased to allocate more space in the structure. To do this, create a new policy (e.g., POL2), specify the larger MAXSIZE value, then activate that policy with this command:
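Using the policy name POL2 from the example:

```
SETXCF START,POLICY,TYPE=CFRM,POLNAME=POL2
```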


The shared temporary storage server won’t start to use the values specified in the new policy until an additional command is issued:
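The additional command is a structure rebuild, which reallocates the structure using the newly activated policy; a sketch, assuming the pool name PRODTSQ1:

```
SETXCF START,REBUILD,STRNAME=DFHXQLS_PRODTSQ1
```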


To check whether the policy has been loaded into the coupling facility and that the shared temporary storage server is using it, further commands may be issued. To check that the policy was loaded, issue the command:
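To display the structure as XCF sees it, assuming the pool name PRODTSQ1:

```
D XCF,STRUCTURE,STRNAME=DFHXQLS_PRODTSQ1
```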


Figure 5 shows an example of the display this produces on the console. In this example, the new policy has a larger size (of 131,072K).

The following z/OS modify command against the server address space can be used to display the server’s view of the coupling facility structure: 

/F tsserver,DISPLAY POOLSTATS

Figure 6 shows an example of the display this produces. The attributes are described in the CICS Messages and Codes manual.

The fact that the policy size of 131,072K shown by the first command matches the Max size shown by the second confirms that the new policy has been recognized by the shared temporary storage server.

Other CICS Data Servers

Besides supporting the shared temporary storage data server environment, CICS also provides servers for Coupling Facility Data Table (CFDT) exploitation and for named counter use.

The named counter server provides a way of generating unique sequence numbers for use by application programs executing within a parallel sysplex environment. A named counter server maintains each sequence of numbers as a named counter. As numbers are assigned, the corresponding named counter is automatically incremented to ensure the next request for a unique number is given the next number in its sequence.

A command level API for the named counter facility is provided for use by CICS applications. There’s also a call interface that makes the named counters available for use in batch. Named counters are stored in a pool of named counters, where each pool is a list structure held on the coupling facility. As with the shared temporary storage server environment, within each z/OS image, there must be one named counter server for each named counter pool accessed by CICS regions (and batch jobs) in that image.
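The command level API centers on the GET COUNTER command, which returns the next value in the sequence and increments the counter in one operation. A COBOL-style sketch, in which the counter name, pool name, and data area are illustrative assumptions:

```cobol
      * Obtain the next value from a named counter. VALUE receives
      * the assigned number; the counter is incremented automatically.
           EXEC CICS GET COUNTER('ORDERNUM')
                     POOL('PRODNC1')
                     VALUE(WS-NEXTNUM)
           END-EXEC
```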

The pool name, used to form the server name with the prefix DFHNC, is specified in the start-up JCL for the server. The server needs to be authorized to access the coupling facility list structure in which the named counter pool is defined.

CFDTs allow data sharing in a sysplex environment, coupled with update integrity. They provide a mechanism for sharing file data without the prerequisite of a CICS File Owning Region (FOR) or the use of VSAM Record Level Sharing (RLS). They can be used almost continuously. The data resides in a table held within a coupling facility list structure. The table is similar in concept to a sysplex-wide User Maintained Data Table (UMDT), although the data isn't kept in a data space in a z/OS image; neither is it controlled by a single CICS region. Another difference is that the initial loading of a CFDT is optional. If LOAD(NO) is specified on the file definition, the table can be loaded directly from an application program. The CICS file control API can be used to access the data held in CFDTs.

As with shared temporary storage and the named counter services, access to a CFDT is via another server address space. Groups of related CFDTs may be managed in separate pools, held as list structures in the coupling facility. Within each z/OS image, there must be one CFDT server address space for every CFDT pool accessed by CICS. The pool name is specified in the JCL submitted to start the CFDT server. A CICS region communicates with a CFDT server region running in the same z/OS image, using the AXM server environment. The CFDT server controls the coupling facility list structure and the data tables held there.

CICS and the CFDT server produce statistics. The CICS file control statistics record the types of requests made against each CFDT, along with information such as when a table becomes full. The CFDT server records statistics on its list structure and data tables.

Read and write access to data in CFDTs offer comparable performance. For a CFDT, any request is allowed during table loading, but requests succeed only for those records within the range of record keys already loaded into the table. A CFDT is restricted to a maximum key length of 16 bytes.