CICS / WebSphere

This series of articles is intended to help new CICS support people understand the basics of the product and how it has evolved into the robust software that it is. Besides the basics, these articles will examine underlying components that may not be obvious and daily issues that face those who support the product. This article targets anyone who works with distributed systems (UNIX, etc.), recent college graduates with no or limited z/OS experience, and application developers moving into the systems arena.

For a complete description of these articles and a list of basic prerequisites readers should already possess, see the first article in the series, which appeared in the March/April 2012 issue of z/Journal and is available at

MRO Background

In the early days of CICS, customers used CICS for basic functions and the volume of transactions was relatively low. It was a new product and many installations still had most of their business workload running in batch. The Internet didn’t exist and typical processing consisted of “batching” updates or other requirements and running them in a single cycle. Computer terminals were the old “green screens” and had little capability to display company information. Since CICS was relatively new, there were a limited number of programmers who knew how to write an application to run in the online environment. They had spent most of their time writing batch programs.

As interest in CICS grew and the product evolved, more processing began moving into existing CICS regions. They were standalone regions that consisted of components to support sign-on to the region, processing of the required workload, and then displaying the output on the terminal screen. All resources required to support the application were contained in that region, including files, programs, and physical definitions. Unfortunately, memory was a precious commodity and storage availability was limited. Virtual storage was limited to 16MB, internal computer memory was expensive and therefore scarce, and disk space was also expensive and limited. Many of the company files were stored on magnetic tape, which was cheap but not practical for CICS files since it was sequential access, not direct. Databases didn't exist as we know them now, so accessing data in CICS was a challenge.

Even so, CICS became more popular, and as its popularity grew, so did the workload that ran in these regions. As the workload increased, so did the virtual storage required to run the applications. Even back then, CICS could process multiple transactions concurrently, and as the number of transactions grew, so did the storage requirements. It was common for CICS to go Short on Storage (SOS), so installations were forced to increase the size of the region or decrease the number of transactions that ran concurrently. Installations wanted to expand the now-popular online access to company information, so they increased the region size or built a second region and split the applications between the multiple regions. That was unpopular, since users had to log off one region and log onto the other. Back then, session managers, products that allowed you to jump from application to application from a common menu, didn't exist.

So, in 1980 IBM introduced Multi-Region Operation (MRO) with CICS/VS 1.5, a solution that addressed the vertical problem of limited virtual storage with a horizontal one: multiple connected CICS regions, each using its virtual storage for a different purpose. The first step in this new configuration was removing the control blocks and program storage needed to log on from the region that ran the application. Terminal control, back then, required a significant amount of virtual storage to sustain the network requirements. Terminal "auto-install" didn't exist (it was introduced in CICS/VS 1.7 in 1986), so customers were required to assemble a Terminal Control Table (TCT) entry for every terminal in the network.

This process was laborious and painful. Every new device had to be added to the TCT and the table reassembled. When the region initialized, the TCT was loaded into the region with all the virtual storage it required, whether or not each terminal actually logged onto the region. As networks grew, so did the virtual storage each region needed to support them. The solution was to build a Terminal-Owning Region (TOR) that owned this TCT and was separate from the Application-Owning Region (AOR) where the application actually ran.
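To give a feel for the work involved, a hand-assembled TCT entry looked roughly like the sketch below. The suffix, terminal ID, terminal type, and network name shown are illustrative only, and the exact operands varied by CICS release:

```asm
* Illustrative TCT entries for a single 3270 terminal. The
* operands shown are examples only; each device in the network
* needed its own TYPE=TERMINAL entry, and the whole table had
* to be reassembled whenever a device was added.
         DFHTCT TYPE=INITIAL,SUFFIX=T1,ACCMETH=VTAM
         DFHTCT TYPE=TERMINAL,                                         X
               TRMIDNT=T001,                                           X
               TRMTYPE=LUTYPE2,                                        X
               NETNAME=NETT001
         DFHTCT TYPE=FINAL
         END
```

With hundreds or thousands of such entries, both the assembly step and the storage the loaded table consumed at initialization became significant.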

Removing these terminal definitions and their storage from the application region freed up significant storage for programs and transactions. This horizontal configuration also allowed the TOR to be connected to multiple AORs and to route any transaction to the appropriate region. Transactions used a parameter (SYSID) to identify the target AOR; after the transaction completed, control returned to the TOR that originated the request. This also became an availability advantage: if an AOR had problems or came down, the TOR could route the workload to the other AORs it was connected to.
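In the macro-table era, routing of this kind was set up by defining the transaction in the TOR as remote, naming the SYSID of the AOR that owned it. A sketch, with an illustrative transaction name and SYSID (the exact operands depended on release):

```asm
* Illustrative PCT entry in the TOR: transaction ORD1 actually
* runs in the connected region whose SYSID is AOR1. Both names
* are examples, not from any real installation.
         DFHPCT TYPE=REMOTE,                                           X
               TRANSID=ORD1,                                           X
               SYSIDNT=AOR1
```

The TOR needed only this stub definition; the full transaction and its programs were defined in the AOR.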

TORs, AORs, and FORs

So, installations started building separate, connected regions, each with its own virtual storage. File-Owning Regions (FORs) were created to own the files the applications used. By putting all files in a single CICS region, multiple AORs could share the same files whenever they needed to access data, which further freed up virtual storage in the AORs for transaction execution. Since only a single TOR was needed, that region tended to be fairly static and needed little maintenance or updates. The only support required for that region was TCT updates when additional terminals needed to be added (see Figure 1).
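Shipping file requests to a FOR was configured the same way as transaction routing: the AOR's File Control Table marked the file as remote and named the owning region. A sketch with illustrative names (file and SYSID are examples, and operands varied by release):

```asm
* Illustrative FCT entry in an AOR: file requests for CUSTFILE
* are shipped to the connected region whose SYSID is FOR1.
* Names are examples only.
         DFHFCT TYPE=REMOTE,                                           X
               DATASET=CUSTFILE,                                       X
               SYSIDNT=FOR1
```

The application program was unchanged; CICS intercepted the file request in the AOR and satisfied it in the FOR on the application's behalf.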
