Apr 11 ’12
CICS 101: Multi-Region Operation
This series of articles is intended to help new CICS support people understand the basics of the product and how it has evolved into the robust software that it is. Besides the basics, these articles will examine underlying components that may not be obvious and daily issues that face those who support the product. This article targets anyone who works with distributed systems (UNIX, etc.), recent college graduates with no or limited z/OS experience, and applications developers moving into the systems arena.
For a complete description of these articles and a list of basic prerequisites readers should already possess, see the first article in the series, which appeared in the March/April 2012 issue of z/Journal and is available at www.mainframezone.com/it-management/cics-101-the-starting-pointcics-initialization.
In the early days of CICS, customers used CICS for basic functions and the volume of transactions was relatively low. It was a new product and many installations still had most of their business workload running in batch. The Internet didn’t exist and typical processing consisted of “batching” updates or other requirements and running them in a single cycle. Computer terminals were the old “green screens” and had little capability to display company information. Since CICS was relatively new, there were a limited number of programmers who knew how to write an application to run in the online environment. They had spent most of their time writing batch programs.
As interest in CICS grew and the product evolved, more processing began moving into existing CICS regions. They were standalone regions that consisted of components to support sign-on to the region, processing of the required workload, and then displaying the output to the terminal screen. All resources required to support the application were contained in that region, including files, programs, and physical definitions. Unfortunately, memory was a precious commodity and storage availability was limited. Virtual storage was capped at 16MB, internal computer memory was expensive and therefore scarce, and disk space was also expensive and limited. Many company files were stored on magnetic tape, which was cheap but not practical for CICS files, since tape is sequential access, not direct. Databases didn’t exist as we know them now, so accessing data in CICS was a challenge.
Even so, CICS became more popular. As CICS popularity grew, so did the workload that ran in these regions, and with it the virtual storage required to run the applications. Even back then, CICS could process multiple transactions concurrently, and as the number of concurrent transactions grew, so did the storage they required. It was common for CICS to go Short on Storage (SOS), so installations were forced to increase the size of the region or decrease the number of transactions that ran concurrently. Installations wanted to expand the now-popular online access to company information, so they increased the region size, or built another region and split applications between the two. That was unpopular, since a user had to log off one region and log onto the other. Back then, session manager products, which let you jump from application to application from a common menu, didn’t exist.
So, in 1980 IBM introduced Multi-Region Operation (MRO) with CICS VS 1.5: a solution that addressed the vertical problem of limited virtual storage with a horizontal answer, multiple connected CICS regions. Each region could then devote its virtual storage to a different purpose. The first evolution of this new configuration was removing the control blocks and program storage needed for terminal log-on from the region that ran the application. Terminal control, back then, required a significant amount of virtual storage to sustain the network. Terminal “auto-install” didn’t exist (it was introduced in CICS VS 1.7 in 1986), so customers were required to assemble a Terminal Control Table (TCT) entry for every terminal in the network.
This process was laborious and painful. Every new device had to be added to the TCT and the table reassembled. When the region initialized, the entire TCT was loaded, consuming virtual storage for every defined terminal whether or not that terminal ever logged on. As networks grew, so did the virtual storage each region needed to support them. The solution was to move the TCT into a Terminal-Owning Region (TOR), separate from the Application-Owning Region (AOR) where the application actually ran.
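To give a feel for what that hand maintenance looked like: before auto-install, each terminal was coded as an entry in the TCT source and the whole table assembled. The sketch below is illustrative only; the suffix, terminal IDs, and device types are invented, and real tables carried many more operands per entry.

```
* Illustrative TCT source -- names and device types are examples only
         DFHTCT TYPE=INITIAL,SUFFIX=T1
         DFHTCT TYPE=TERMINAL,TRMIDNT=T001,TRMTYPE=3277
         DFHTCT TYPE=TERMINAL,TRMIDNT=T002,TRMTYPE=3277
         DFHTCT TYPE=FINAL
         END
```

Every addition to the network meant editing this source, reassembling, and restarting or refreshing the region, which is why moving the table into a dedicated TOR was such a relief.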
Removing the terminal definitions and their control blocks freed significant storage for programs and transactions. The horizontal configuration also allowed the TOR to connect to multiple AORs and route any transaction to the appropriate region. Transactions used a parameter (SYSID) to identify the target AOR, and after the transaction completed, control returned to the TOR that originated the request. This also became an availability advantage: if one AOR had problems or came down, the TOR was connected to other AORs and could route workload to them.
TORs, AORs, and FORs
So, installations started building multiple connected regions, each with its own address space and therefore its own virtual storage. File-Owning Regions (FORs) were created to own the files the applications used. By placing all files in a single CICS region, multiple AORs could share the same files whenever they needed data, which further freed virtual storage in the AORs for transaction execution. Since only a single TOR was needed, that region tended to be fairly static and needed little maintenance; the only routine support it required was a TCT update when terminals were added (see Figure 1).
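The FOR arrangement rests on remote resource definitions: the AOR defines the file as remote, naming the FOR, and MRO function-shipping carries each file request across the connection. A sketch using CEDA-style definitions follows; the region names, group names, file name, and data set name are all assumptions for illustration.

```
In the AOR: a remote definition that ships file requests to the FOR
  CEDA DEFINE FILE(CUSTFILE) GROUP(APPGRP) REMOTESYSTEM(FOR1)

In the FOR: the local definition against the actual VSAM data set
  CEDA DEFINE FILE(CUSTFILE) GROUP(FORGRP) DSNAME(PROD.CUSTOMER.KSDS)
```

The application program is unchanged either way; an EXEC CICS READ against CUSTFILE in the AOR is shipped to the FOR transparently.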
In this original configuration, as previously explained, transactions were defined in the TOR with an identifier that routed the task to a specific AOR. This was called static routing, since the target AOR was pre-defined. It created an availability exposure: if that AOR wasn’t available, the task couldn’t be routed and was abnormally terminated. What if the task could run in any AOR? If the installation built AORs that all contained the same resources, there was no reason a task couldn’t run in another one. That significantly increased availability and flexibility, since the workload no longer depended on any particular AOR.
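A static routing definition in the TOR might look like the following hedged sketch; the transaction name, group, and system ID are invented, and REMOTESYSTEM is the attribute that carries the pre-defined target.

```
In the TOR: ORD1 is always shipped to the AOR known as AOR1
  CEDA DEFINE TRANSACTION(ORD1) GROUP(TORGRP) REMOTESYSTEM(AOR1) REMOTENAME(ORD1)
```

The hard-coded REMOTESYSTEM value is exactly the single point of failure the article describes: if AOR1 is down, ORD1 has nowhere to go.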
This was the birth of dynamic routing. It removed the requirement of specific identifiers in the transaction definition and created a target group of AORs that could be acceptable for the routing. The prerequisite, however, was that every AOR must contain the resources required for any of these transactions to run. The term cloning was adopted to describe all the AORs that would need to be built to support this process. This is now a popular configuration since it significantly increases availability and removes the dependency on a single AOR for any task to execute. The typical process that produces AOR cloning includes:
• All AORs contain the exact same DFHRPL concatenation so all programs are available in the same sequence.
• All AORs contain the exact same DFHCSD grouplist so all resource definitions are available.
• All AORs contain the same DFHSIT so all system parameters, including storage sizes, and system definitions are the same.
• All AORs contain the same DB2 connection definition since most CICS tasks now use DB2 resources, not VSAM files.
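The first bullet is typically satisfied in the startup JCL: every clone points at an identical DFHRPL concatenation, in the same order. A minimal sketch, with invented data set names:

```
//DFHRPL   DD DISP=SHR,DSN=PROD.CICS.APPLOAD
//         DD DISP=SHR,DSN=PROD.CICS.SDFHLOAD
```

Keeping the order identical matters as much as the contents, since CICS loads the first copy of a program it finds in the concatenation.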
Once a configuration that allows dynamic routing has been built, there must be some mechanism to route from the TOR without a static identifier. There are several ways to do this. One is the sample program IBM ships with the product, DFHDYP; its source can be found in the DFHSAMP library. Customers can use it as shipped or modify it to route transactions from the TOR to the cloned AORs. Another alternative is to install and implement CICSPlex SM (CPSM). This product, shipped with every new release of CICS, provides a sophisticated process for defining transactions and the target AORs eligible for dynamic routing. CPSM is covered in more detail below.
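Two pieces make the DFHDYP route work: the transaction is marked dynamic rather than given a fixed target, and the TOR's system initialization table (SIT) names the routing program. A hedged sketch, with the transaction and group names invented (DTRPGM defaults to the IBM-supplied DFHDYP; you'd name your modified copy here instead if you have one):

```
In the TOR: no REMOTESYSTEM -- the routing program picks the target AOR
  CEDA DEFINE TRANSACTION(ORD1) GROUP(TORGRP) DYNAMIC(YES)

In the TOR's SIT: the dynamic transaction routing program
  DTRPGM=DFHDYP
```

At attach time, CICS drives the routing program with a parameter area describing the request; the program selects the target system and can be re-driven if that route fails, which is where the availability benefit comes from.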
An additional consideration in dynamic routing is that the application must be able to execute in any of the AORs in the configuration, on every execution. In the past, many applications had affinities that dictated they always execute in the same AOR. Typically, these applications would leave data or other information behind in the AOR where they first executed and expect that data to be available in the next cycle.
These affinities would preclude such transactions from being dispatched into a different AOR. IBM recognized this limit to dynamic routing and initially introduced the CICS Transaction Affinities Utility, which could be run against CICS programs to identify the Application Program Interface (API) commands that produce affinities. This utility has since evolved into the CICS Interdependency Analyzer for z/OS. The CICS publications library offers more information about affinities and this product.
Once a multi-region configuration is built, it can exploit new hardware configurations as the mainframe evolves. While earlier mainframes may have had a small number of Central Processors (CPs), newer machines have dozens of CPs available for executing work, which allows processing to be spread across the CICSplex. Since CICS can dispatch multiple Task Control Blocks (TCBs) on these CPs, a multi-region configuration lets the installation use all the hardware available and distribute workload. This balances processing and increases throughput, sustaining a higher volume of transactions in any interval; some large installations sustain thousands of transactions per second at their peak.
Installations that require a great deal of flexibility in their dynamic routing usually install and implement CICSPlex SM. The product applies sophisticated routing algorithms based on specifications set up for each transaction group, and it is managed via a Web User Interface (WUI) external to CICS itself. As their configurations grow, many customers find that a tool such as CPSM is required to handle the complex routing of multiple business processes. CPSM is powerful but also demands a great deal of administration; some customers find that supporting the environment requires a full-time resource.
Each installation must consider its own needs in evaluating CPSM. To learn more about CPSM, see the IBM CICS library of publications at http://www-01.ibm.com/software/htp/cics/library/.
MRO has been a great asset to installations that have a large, complex CICS configuration. Even though virtual storage constraint isn’t as much a factor now as it was in the past, MRO continues to be a high-availability advantage for most installations that require non-stop processing. IBM continues to deliver enhancements to CICS with every release. MRO may now be taken for granted, but it was a major plus back in 1980 and is used extensively today.
The third article in this series is "CICS 101: Debugging Problems."