CICS / WebSphere

The CICS Data Sharing Servers

The parameter ACCESSTIME=NOLIMIT lets XES override the server access time limits so that it can obtain serialization to take the dump. Without this parameter, no dump is taken if any server is active. The parameters ADJUNCT=DIRECTIO and ENTRYDATA=UNSERIALIZE tell XES not to hold that serialization while dumping the adjunct areas and entry data. If servers are active but it’s important to obtain a serialized dump, showing the list structure at a single moment in time, replace these parameters with ADJUNCT=CAPTURE and ENTRYDATA=SERIALIZE. Note that this locks out server access until the dump completes.
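
As an illustration only, a dump of a shared temporary storage pool structure using these options might be requested with the z/OS DUMP command and a STRLIST reply along the following lines; the reply identifier nn, the dump title, and the pool name are placeholders, and the exact STRLIST syntax should be checked against the z/OS documentation for your release:

DUMP COMM=(SHARED TS POOL LIST STRUCTURE)
R nn,STRLIST=(STRNAME=DFHXQLS_poolname,ACCESSTIME=NOLIMIT,(LISTNUM=ALL,ADJUNCT=DIRECTIO,ENTRYDATA=UNSERIALIZE)),END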

Formatting a Shared Temporary Storage Pool Dump                                                 

The IPCS STRDATA subcommand can be used to format a shared temporary storage pool dump. To display the queue index list, use the STRDATA subcommand as follows: 

STRDATA DETAIL LISTNUM(2) ENTRYPOS(ALL) 

The list structure uses the third list (list two) as the queue index. Each entry on this list represents a queue, the key of each entry is the queue name, and each entry has an adjunct area. Queues are regarded as “small” or “large.” Small applies to queues where all the data fits into 32K of structure space; a large queue is one for which the total size of the data items exceeds 32K, and its data is stored in a separate list in the structure.

For small queues, the first fullword of the adjunct area (at offset +0) contains the total length of the queue data, and the queue records are shown as one contiguous block of data in the associated entry data. For large queues, the second fullword of the adjunct area contains the number of the corresponding data list for the queue. The data for a large queue can be displayed by converting that data list number to decimal and specifying it on another STRDATA subcommand. If the second word of the adjunct area is X’0000000C’ (decimal 12), the command to display the queue data is:

STRDATA DETAIL LISTNUM(12) ENTRYPOS(ALL) 

In the data list, the key of each entry is the item number, and the data portion contains the item data with a 2-byte length prefix. The rest of any data area is not initialized and may contain residual data up to the next 256-byte boundary.

Tuning

There are several parameters that can be specified in the start-up JCL for tuning purposes. It’s normal to let them assume their default values as described in The CICS System Definition Guide.

The parameters are:

  • ELEMENTSIZE=number: This specifies the element size for structure space, which must be a power of two. For current coupling facility implementations, there’s no reason to deviate from the default value of 256. The range is from 256 through to 4,096.
  • ELEMENTRATIO=number: This specifies the element side of the entry/element ratio when the structure is first allocated. This determines the proportion of the structure space initially set aside for data elements and is valid only at server initialization. The range is from one through to 255 and the default is one.
  • ENTRYRATIO=number: This specifies the entry side of the entry/element ratio when the structure is first allocated. This determines the proportion of structure space initially to be set aside for list entry controls. As with the ELEMENTRATIO parameter, this is valid only at server initialization. Again, the range is from one through to 255; the default is one.
  • LASTUSEDINTERVAL=time: This specifies how often the last used time for large queues is to be updated. For small queues, the last used time is updated on every reference. For large queues, however, updating the last used time requires an extra coupling facility access, so it’s done only when the time since the last update exceeds this interval. Since the main purpose of the last used time is to determine whether a queue is obsolete, an interval of a few minutes should be sufficient. The format is hhmm; the valid range is 0000 to 2400 and the default is 0010 (10 minutes).
  • SMALLQUEUEITEMS=number: This specifies the maximum number of items that can be stored in the small queue format in the queue index entry data area. This parameter can force a queue to be converted to the large queue format if it has a large number of small items. Valid range is one to 32767; the default is 9,999.
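
As a rough sketch only, these parameters would appear in the server start-up JCL along the following lines. The job and data set names, pool name, and region size are placeholders, the sample assumes the server program DFHXQMN reading its parameters from SYSIN as in the IBM-supplied samples, and the values shown simply repeat the defaults described above; check it against the sample JCL for your release.

//TSSERVER JOB ...
//STSPOOL1 EXEC PGM=DFHXQMN,REGION=32M,TIME=NOLIMIT
//STEPLIB  DD DSN=CICSTS.CICS.SDFHAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
POOLNAME=PRODTSQ1
ELEMENTSIZE=256
ELEMENTRATIO=1
ENTRYRATIO=1
LASTUSEDINTERVAL=0010
SMALLQUEUEITEMS=9999
/*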

Nearly all the parameters specified for the coupling facility structure used by the shared temporary storage server can be altered, and the new values loaded and picked up, without having to recycle the server region. For example, the MAXSIZE of the structure may be increased to allocate more space in the structure. To do this, create a new policy (e.g., POL2) specifying the larger MAXSIZE value, then activate that policy with this command:

SETXCF START,POLICY,TYPE=CFRM,POLNAME=POL2
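
The new policy itself is typically defined beforehand with the CFRM administrative data utility. The following is only a sketch, with the job name, policy name, structure name, sizes, and coupling facility names as placeholders; in the CFRM policy the maximum size corresponds to the SIZE keyword, and a policy must define every structure in use in the sysplex, not just the one being changed, so in practice the existing definitions would be carried forward with only the size values altered.

//DEFPOL   JOB ...
//POLICY   EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(POL2) REPLACE(YES)
    STRUCTURE NAME(DFHXQLS_PRODTSQ1)
              SIZE(131072)
              INITSIZE(65536)
              PREFLIST(CF01,CF02)
/*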

The shared temporary storage server won’t start to use the values specified in the new policy until an additional command is issued:

SETXCF START,REALLOCATE

Further commands can be issued to check that the policy has been loaded into the coupling facility and that the shared temporary storage server is using it. To verify that the policy was loaded, issue the command:

D XCF,STR,STRNAME=DFHXQLS_poolname

Figure 5 shows an example of the display this produces on the console. In this example, the new policy has a larger size (of 131,072K).

The following z/OS modify command against the server address space can be used to display the server’s view of the coupling facility structure:

/F tsserver,DISPLAY POOLSTATS

Figure 6 shows an example of the display this produces. The attributes are described in the CICS Messages and Codes manual.

The fact that the policy size of 131,072K shown by the first command matches the max size shown by the second confirms that the new policy has been recognized by the shared temporary storage server.

Other CICS Data Servers

Besides supporting the shared temporary storage data server environment, CICS also provides servers for both Coupling Facility Data Table (CFDT) exploitation and named counter use. The named counter server provides a way of generating unique sequence numbers for use by application programs executing within a parallel sysplex environment. A named counter server maintains each of the sequences of numbers as a named counter. As numbers are assigned, the corresponding named counter is automatically incremented to ensure the next request for a unique number is given the next number in its sequence.

A command level API for the named counter facility is provided for use by CICS applications. There’s also a call interface that makes the named counters available for use in batch. Named counters are stored in a pool of named counters, where each pool is a list structure held on the coupling facility. As with the shared temporary storage server environment, within each z/OS image, there must be one named counter server for each named counter pool accessed by CICS regions (and batch jobs) in that image.
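
As an illustrative sketch of the command-level API only: the counter and pool names are placeholders, WS-START, WS-MIN, WS-MAX, and WS-NEXT-NUMBER are assumed to be fullword binary fields, and the full syntax and options are in the CICS application programming documentation.

      * Define a counter once, then obtain the next number in sequence.
           EXEC CICS DEFINE COUNTER('ORDERNUM') POOL('PRODNC1')
                     VALUE(WS-START) MINIMUM(WS-MIN) MAXIMUM(WS-MAX)
           END-EXEC
           EXEC CICS GET COUNTER('ORDERNUM') POOL('PRODNC1')
                     VALUE(WS-NEXT-NUMBER)
           END-EXEC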

The pool name, used to form the server name with the prefix DFHNC, is specified in the start-up JCL for the server. The server needs to be authorized to access the coupling facility list structure in which the named counter pool is defined.
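
The start-up JCL follows the same pattern as for the shared temporary storage server. As a sketch only, with data set and pool names as placeholders, and assuming the named counter server program DFHNCMN with parameters read from SYSIN:

//NCSERVER JOB ...
//SNCPOOL1 EXEC PGM=DFHNCMN,REGION=32M,TIME=NOLIMIT
//STEPLIB  DD DSN=CICSTS.CICS.SDFHAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
POOLNAME=PRODNC1
/*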

CFDTs allow data sharing in a sysplex environment, coupled with update integrity. They provide a way of sharing file data without requiring a CICS File Owning Region (FOR) or the use of VSAM Record Level Sharing (RLS). They can be used almost continuously. The data resides in a table held within a coupling facility list structure. The table is similar in concept to a sysplexwide User Maintained Data Table (UMDT), although the data isn’t kept in a data space in a z/OS image; neither is it controlled by a single CICS region. Another difference is that the initial loading of a CFDT is optional. If LOAD(NO) is specified on the file definition, the table can be loaded directly from an application program. The CICS file control API is used to access the data held in CFDTs.
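
As an illustrative sketch of such a file definition, shown here in DFHCSDUP batch utility style (the file, group, table, and pool names and the numeric values are all placeholders; check the attributes against the CICS resource definition documentation):

DEFINE FILE(CUSTCFDT) GROUP(CFDTGRP)
       TABLE(CF) CFDTPOOL(PRODCFT1) TABLENAME(CUSTTAB)
       LOAD(NO) MAXNUMRECS(100000) UPDATEMODEL(CONTENTION)
       KEYLENGTH(16) RECORDSIZE(200)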

As with shared temporary storage and the named counter services, access to a CFDT is via another server address space. Groups of related CFDTs may be managed in separate pools, held as list structures in the coupling facility. Within each z/OS image, there must be one CFDT server address space for every CFDT pool accessed by CICS. The pool name is specified in the JCL submitted to start the CFDT server. A CICS region communicates with a CFDT server region running in the same z/OS image, using the AXM server environment. The CFDT server controls the coupling facility list structure and the data tables held there.
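
Again as a sketch only, with placeholders throughout and assuming the CFDT server program DFHCFMN with parameters read from SYSIN:

//CFSERVER JOB ...
//SCFPOOL1 EXEC PGM=DFHCFMN,REGION=48M,TIME=NOLIMIT
//STEPLIB  DD DSN=CICSTS.CICS.SDFHAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
POOLNAME=PRODCFT1
/*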

CICS and the CFDT server produce statistics. The CICS file control statistics record the types of requests made against each CFDT, along with information such as when a table becomes full. The CFDT server records statistics on its list structure and data tables.

Read and write access to data on CFDTs has comparable performance. For a CFDT, any request is allowed during table loading, but requests succeed only for those records within the range of record keys already loaded into the table. A CFDT is restricted to a maximum of 16 bytes for its key length. Z
