Does CICS Still Love Fast Engines?


Once upon a time, there was an article in the old and cherished Mainframe Journal* titled “CICS Loves Fast Engines,” and Bob Thomas published that edition of the magazine with a picture of a Ferrari on the front cover and the article title emblazoned on the car’s hood.

Bob was overwhelmed by the response to this article, not because of its content, but because of the CICS community’s clamor for the double-size posters he produced of that front cover and its Ferrari.

Now I’m back, some 17 years later, and I’m asking IBM to give us a 500-engine multiprocessor in the next few years, and I couldn’t care less how fast its base engine speed is. Why is that?

Open Transaction Environment (OTE) History

Historically, CICS provided an Application Programming Interface (API) that ran almost everything on a single Task Control Block (TCB). Originally, this was the main job step TCB in old architecture CICS (up to and including CICS/MVS V2.1.2).

Versions 3 and 4 did much the same, except the TCB that did virtually all the work for CICS transactions became known as the Quasi-Reentrant TCB, or QR TCB. In these releases, certain discrete pieces of work could be offloaded to another TCB, such as the Resource Owning or RO TCB for, say, program loads, or the Concurrent TCB or CO TCB (if it existed) to offload VSAM I/O operations (later releases still do this). So, this was the beginning of a little bit of parallelism, if you will. One thing that was completely taboo was a CICS application making a non-CICS call for service, because “blocking” the QR TCB would have a disastrous effect on the entire CICS region.

However, it wasn’t until CICS TS 1.3 that the beginnings of true parallelism arrived, in the shape of the J8 TCB for running Java transactions, each in its own Java Virtual Machine (JVM); there were also H8 TCBs for “hotpooled” Java.

Because the Java implementation in CICS was fully “threadsafe,” it became possible for the first time to even consider a parallel world. (By the way, by the time we get to CICS TS 2.3, a lot of Java class data resides in something called the “shared class cache,” lightening the load for user JVMs.)

At the same time, in CICS TS 1.3, some of the CICS internal components that support the API were made threadsafe (subsequent releases have continued that effort, so many more CICS API functions have since become threadsafe).

CICS TS 1.3 also introduced a new parameter on the program definition (see Figure 1), CONCURRENCY(QUASIREENTRANT|THREADSAFE), and a SIT parameter, FORCEQR(NO|YES). These parameters had no effect, however, until CICS TS 2.2; they were made available in V1.3 so customers could start on the long haul to a threadsafe, parallel world.
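To make that concrete, here is a minimal sketch of how those settings might appear; the program name, group name, and language shown are hypothetical, and only CONCURRENCY(THREADSAFE) and FORCEQR come from the discussion above:

   DEFINE PROGRAM(PAYRPGM) GROUP(PAYROLL)
          LANGUAGE(COBOL)
          CONCURRENCY(THREADSAFE)

In the SIT, FORCEQR=NO lets CICS honor each program’s CONCURRENCY setting, while FORCEQR=YES acts as a safety net that forces application programs back onto the QR TCB regardless of how they are defined.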
