
CICS and the Open Transaction Environment


Any program using a dummy TRUE to access the OTE should be carefully documented, and procedures should be in place to ensure that subsequent maintenance doesn’t inadvertently introduce an unsuitable TCB switch. For example, a non-threadsafe command inserted between the call to an OPENAPI TRUE and the application code that must run in an open environment would result in that code running on the QR TCB and could destabilize the region.
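One defensive practice is to isolate the switch point and flag it loudly in the source. The fragment below is a sketch only; the stub name SWITCHON and the paragraph name are hypothetical, and the exact stub mechanism depends on how the dummy TRUE was built:

          * 'SWITCHON' is a stub, link-edited with this program,
          * that drives the dummy OPENAPI TRUE.  After the CALL,
          * the task is running on an open (L8) TCB.
               CALL 'SWITCHON'
          * WARNING: do not insert a non-threadsafe EXEC CICS
          * command between the CALL above and the code below;
          * it would move the task back to the QR TCB and defeat
          * the purpose of the switch.
               PERFORM CPU-INTENSIVE-ROUTINE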

Besides the application coding changes to call the TRUE and the work of building the TRUE and stub programs, control programs must be created to activate and deactivate the TRUE, as sketched below.
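The control logic itself can be small. This is a minimal sketch, assuming a dummy TRUE named DUMMTRUE; depending on the exit, other ENABLE options (GALENGTH, ENTRYNAME, and so on) may also apply:

          * Activate the dummy TRUE.  OPENAPI marks it as an open
          * API TRUE, so tasks that call it are moved to an L8
          * TCB; START makes the exit available for use.
               EXEC CICS ENABLE PROGRAM('DUMMTRUE')
                    OPENAPI START
               END-EXEC
          * Deactivate the exit (STOP makes it unavailable).
               EXEC CICS DISABLE PROGRAM('DUMMTRUE')
                    STOP
               END-EXEC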

The second option to run application code in the OTE is to modify the program’s API attribute in its RDO PROGRAM definition to OPENAPI. A threadsafe program that’s defined as OPENAPI will be initialized and run on an open TCB (an L8 TCB for CICS key programs, or an L9 TCB for USER key programs) without having to issue any TRUE calls. In this manner, a program can take advantage of the OTE with just a simple RDO change; no system changes or application code changes are required. There’s no way to run part of the code in an OPENAPI program under the QR TCB; if the program issues a non-threadsafe command, CICS switches to the QR TCB to process the command, then returns the task to its open TCB before giving control back to the program. Unlike using the dummy TRUE, there’s no concern regarding which environment the program is in at any point, because all the application code runs on the open TCB.
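For illustration, a definition along these lines (the program and group names are hypothetical) is all that’s needed:

          CEDA DEFINE PROGRAM(PAYCALC) GROUP(PAYROLL)
               LANGUAGE(COBOL)
               CONCURRENCY(THREADSAFE)
               API(OPENAPI)
               EXECKEY(CICS)

EXECKEY(CICS) places the program on an L8 TCB; EXECKEY(USER) would place it on an L9 TCB. API(OPENAPI) must be paired with CONCURRENCY(THREADSAFE): the program has to genuinely be threadsafe, since all of its code will run on an open TCB.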

There are some performance concerns to consider before defining a program as OPENAPI. First, all OPENAPI programs incur TCB switching overhead whenever a non-threadsafe CICS command is issued. CICS must switch the task from the open TCB to the QR TCB to process the command, then immediately switch it back to the open TCB as soon as the command completes. If a large number of non-threadsafe commands are issued, CPU utilization can increase dramatically.
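To make the cost concrete, consider the following sketch of a loop inside an OPENAPI program (the queue and data names are illustrative):

          * WRITEQ TD is not threadsafe, so each iteration costs
          * two TCB switches: open TCB to QR to run the command,
          * then QR back to the open TCB before control returns.
               PERFORM VARYING WS-I FROM 1 BY 1 UNTIL WS-I > 1000
                   EXEC CICS WRITEQ TD
                        QUEUE('AUDL')
                        FROM(WS-AUDIT-REC)
                   END-EXEC
               END-PERFORM
          * 1,000 iterations = 2,000 TCB switches for this loop.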

Another performance concern relates to OPENAPI programs that run in USER key. USER key OPENAPI programs run on an L9 TCB, but all OPENAPI TRUE activity occurs on an L8 TCB. If an OPENAPI USER key program (L9) issues a DB2 call (L8), it will hold both the L8 and the L9 TCB instances until task termination. You should review TCB limits to ensure enough TCBs will be available to meet this doubled requirement; a shortage will leave tasks waiting on TCB availability and can produce a deadly embrace. Also, if a task requires an L-series TCB and the only available L-series TCBs are in the wrong key, CICS will “steal” an available TCB, tear it down, and rebuild it in the needed key, incurring additional overhead. USER key tasks running on an L8 TCB via an OPENAPI TRUE call aren’t subject to this overhead: when the switch comes from an OPENAPI TRUE, CICS runs the USER key application code in key 9 on the L8 TCB.

IBM has been actively enhancing its CICS-based products to take advantage of the OTE through the use of OPENAPI TRUEs (which use L8 TCBs), so this performance issue extends beyond DB2 and covers products such as WebSphere MQ, CICS sockets, and the XML parser. If OPENAPI USER key programs are installed, be careful when upgrading IBM system software to ensure that no unanticipated L8-L9 TCB conflicts arise.

Law of Unintended Consequences and OTE

While the OTE provides tremendous new opportunities to extend and enhance CICS transaction capabilities, you should consider the possible negative side effects of any change. For example, it’s common to find shops running critical CICS regions that are CPU-constrained on the QR TCB. In such situations, even a short delay by z/OS in re-dispatching the QR can result in a significant backup in workload, so the CICS region in question is given an unusually high dispatching priority. Such a region would benefit considerably from the multiple-CPU exploitation achieved by running user programs in the OTE, and transaction response time would obviously improve as time spent waiting on QR dispatch fell.

However, consider such a region on a production processor running two Central Processors (CPs). Currently, CICS takes 60 to 80 percent of one processor, leaving 20 to 40 percent of that processor and 100 percent of the second processor available for everything else. Once application workload moves to the OTE, this scenario changes drastically. In addition to the QR, there are now multiple L8 TCBs competing for available processor, all running at a higher dispatching priority than anything else on the system; non-DB2-related open TCBs always run at the same dispatching priority as the QR.

With the QR CPU bottleneck removed, total CICS CPU utilization can easily increase to 100 to 150 percent of a processor. With so much less CPU available for non-CICS work, the high-priority system services that CICS depends on are crowded out by application tasks running on higher-priority open TCBs, and transaction response time consequently increases. This is avoidable if it’s anticipated and the region’s dispatching priority is adjusted to a reasonable level. It’s easy to miss, however, because it has long been safe to assume that a CICS region’s CPU consumption can’t exceed 100 percent of a single processor.

Similarly, the OTE supports use of z/OS storage services (GETMAIN, etc.). Because z/OS GETMAIN offers significant advantages over CICS GETMAIN (supporting use of subpools, for example), it’s tempting to consider converting CICS programs to use them. The unanticipated consequence here is that CICS is totally unaware of any z/OS storage your transaction acquires. It isn’t reflected in the SMF 110 performance statistics, doesn’t show in a CICS monitor, isn’t collected in a CICS transaction dump, and must be manually freed at task termination. Moreover, the GETMAIN and FREEMAIN activity doesn’t appear in the CICS trace table. Drawbacks such as these must be considered before putting programs that use non-CICS services into production.

Running batch COBOL programs in the OTE also raises LE run-time issues that you should review before implementation. When running in a CICS address space, LE uses its CICS support routines to process requests:

  • COBOL DISPLAY statements are dynamically converted to EXEC CICS WRITEQ TD commands, and WRITEQ TD isn’t a threadsafe command (see the sketch after this list).
  • Dynamic calls are converted to use CICS services to load the called program and initialize the LE environment.
  • Although use of native I/O commands is supported in the OTE, LE doesn’t support COBOL OPEN/CLOSE in CICS. 
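To picture the first item in the list: under CICS, a COBOL DISPLAY behaves roughly like the non-threadsafe command below, forcing a switch to the QR TCB each time it executes. The queue and data names are illustrative only; the actual destination depends on the LE run-time options in effect:

          * What the programmer writes:
               DISPLAY 'ACCOUNT UPDATE COMPLETE'
          * Roughly the effect under CICS: a non-threadsafe
          * command that moves the task to the QR TCB.
               EXEC CICS WRITEQ TD
                    QUEUE('CSSL')
                    FROM(WS-MSG)
               END-EXEC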

CICS OTE is a powerful tool that eliminates or minimizes some of the most significant limitations of the CICS run-time environment; however, it’s also a double-edged sword with the potential to cause serious performance problems or unanticipated run-time errors if used improperly. Careful consideration of these issues is critical to a successful implementation of any OTE project.
