Apr 6 ’11

Preparing for IMS 24x7 Availability

by Editor, z/Journal

Globalization has presented many challenges to the IT community, not least of which is providing continuous system availability. Where once an East Coast-based information center only needed to adjust for a West Coast transaction processing window, it now must deal with transaction windows halfway around the world. Viable legacy systems were rarely designed for this, yet accommodations must be made, since many of those systems aren’t easy to replace.

IBM’s Information Management System (IMS) started out as Information Control System (ICS) in 1966 to manage data generated by the Apollo space program. ICS was renamed IMS/360 V1 in 1969 and made available to the business community. Many changes have been made to IMS since, yet the basics of being both a transaction manager and a (hierarchical) database manager remain. The system design can support continuous availability for IMS transactional applications while handling existing batch processing requirements.

IMS is a complex software family. This article examines how to convert batch jobs that access IMS databases to run as Batch Message Programs (BMPs). It’s not an IMS tutorial, so other aspects of IMS will be covered only to the extent they relate to this conversion process. The examples will refer to the z/OS (vs. z/VSE) operating system with COBOL as the primary mainframe language.

Basic IMS Further Simplified

An IMS subsystem consists of multiple address spaces running as jobs or started tasks on a mainframe operating system. There are two primary products, the database, known as IMS/Database Manager (IMS/DB), and the transaction processor, IMS/Transaction Manager (IMS/TM). These can be implemented together or separately. Here, we assume both components are available in the subsystem. All IMS address spaces, or regions, operating under an IMS control region are known as “dependent regions.” The IMS control region provides the interface of IMS to the operating system, network and other subsystems, schedules and dispatches programs running in the dependent regions, and supports logging and recovery functions.

Three types of batch jobs can access IMS databases: DLIBATCH (DLI) jobs, DBBBATCH (DBB) jobs, and Batch Message Processing (BMP) jobs.

This article will spotlight the more common DLIBATCH jobs, but most of the information also applies to DBBBATCH jobs. What’s the difference between DLI/DBB and BMP jobs? The most striking difference is IMS database access. IMS requires exclusive control of updatable databases to assure integrity. An IMS database must be taken offline, removing it from the control of IMS/TM, for a DLI job to add, delete, and update entries. Batch jobs that execute as BMPs run under the control region, which coordinates access, including locking, to the databases. This lets the databases remain available to the online IMS transactions. The control region also manages logging and backout in case of an abend, removing that requirement from the batch jobs.

Why Convert?

Our strongest reason for moving away from DLIBATCH jobs was the projected need for continuous availability of the IMS databases. Shutting down databases for six to 10 hours each night was becoming less acceptable as Web application access increased. We had complex scripts to automate stopping and starting databases and disabling and enabling IMS transactions. Converting eliminated the tracking, maintenance and scheduling of those scripts, and simplified recovery and restart. There were no execution-time changes when the DLI jobs became BMP jobs.
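To illustrate what those scripts automated, they issued IMS commands of roughly this form before and after each batch window (the database and transaction names here are hypothetical):

/DBR DB CUSTDB        take the database offline before the DLI jobs run
/STO TRAN CUSTINQ     disable the transactions that use it
/STA DB CUSTDB        bring the database back online afterward
/STA TRAN CUSTINQ     re-enable the transactions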

What’s Needed to Convert

There are six considerations in changing a DLI batch job into a BMP job: the JCL must be changed, and the PSB, the application programs, the APPLCTN macro, the database dynamic allocation members, and the ACB may need to be created or modified.

The “may needs” depend on how the application’s components are currently configured. Some jobs required only JCL modifications because the other items were already compatible with the BMP requirements. A job trail can have a mix of DLI and BMP jobs, so not every job must be converted simultaneously, and the required changes needn’t be made in the order listed.

The JCL

IMS batch jobs, whether DLIs or BMPs, execute the IMS region controller program DFSRRC00. The parameters passed to that program identify the type of job to be executed.

Figure 1 shows the IBM-provided sample JCL for DLI and BMP jobs. The first parameter in the PARM for DFSRRC00 identifies the type of job, either DLI or BMP. (It can also indicate a transactional message region, MSG, or DBB.) Changing that parameter from “DLI” to “BMP” is the first step. This can be specified with a symbolic override, but more commonly, separate procedures are created because the validity of the remaining parameters depends on the job type; permitting an override of that parameter would cause the job to fail unless other parameters are changed or removed.

[Figure 1: IBM-provided sample JCL for DLI and BMP jobs]
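As a rough sketch of that first change, with hypothetical program and PSB names and the trailing positional parameters, which vary by installation, omitted:

//* DLI version (names hypothetical; remaining positional parms omitted)
//DLIJOB   EXEC PGM=DFSRRC00,PARM='DLI,MYPGM,MYPSB'
//* BMP version of the same step
//BMPJOB   EXEC PGM=DFSRRC00,PARM='BMP,MYPGM,MYPSB'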

There are 20 parameters that need to be removed from the DLI JCL and 11 that need to be added, either explicitly or by default. Some of those 11 aren’t specifically required for a BMP, but must be accounted for in the PARM for any dependent region. A few of the others change meaning when specified in a BMP. For example, the IMSID in a DLI job is merely an identifier on messages written to the system log. In a BMP, it’s the four-character IMS subsystem ID specified in the IEFSSNxx member of SYS1.PARMLIB and identifies the IMS subsystem the BMP will connect to.

PARM values are positional, so they must be accounted for with commas if not specified. If defaults are taken, the values specified for the control region may become the defaults rather than what’s stated in the manual, so review those values. Also, each BMP job’s JCL is independent of every other one, so the parameters can be defined differently for each, if necessary.

Due to the large number of parameters, you may wish to review the detailed descriptions, which can be found in IMS Installation Volume 2: System Definition and Tailoring.

The other job change is to remove the Data Definition (DD) card for the IEFRDER (and IEFRDER2 if there is one) log data set and any database recovery steps. Since the control region will manage logging, backout and recovery, neither the log data set nor the backout steps are needed in the BMP job.
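For example, a log DD of roughly this form in the DLI job (the data set name is hypothetical) is simply deleted in the BMP version:

//IEFRDER  DD DSN=IMS.BATCH.LOG,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,10))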

The PSB

The Program Specification Block (PSB) describes a program’s use of logical terminals and databases through one or more Program Communication Blocks (PCBs), which describe the logical terminals (LTERMs) and IMS databases the program can access. PSBs are defined as macro statements assembled in a PSBGEN process.

Of the different types of PCBs, a TYPE=TP PCB, known as an I-O PCB, is usually used to receive information from and send information to a terminal. For BMP programs, it will be used for issuing checkpoints. An I-O PCB is always provided to a BMP program, regardless of what’s specified in the PSB, so it isn’t necessary to modify the PSB except as documentation or as part of a company standard. If the PSB has an I-O PCB already defined, or CMPAT=YES is specified on the PSBGEN macro, IMS already passes an I-O PCB as the first storage area to the batch program, so no program changes are required. The CMPAT= parameter is used only by batch jobs; BMP jobs ignore it. Figure 2a shows an example of a PSB for a typical DLIBATCH program without an I-O PCB. Figures 2b and 2c show sample PSBs with I-O PCBs.

We had a secondary issue involving the PROCOPT parameter of a database PCB. PROCOPT= defines the processing options associated with the PCB. Because the batch trail included steps to delete, define and reformat the database, PROCOPT=L (for initial LOAD) was specified. PROCOPT=L isn’t permitted for a BMP, so the processing was changed to delete just the segments in the database rather than delete and define a new database, and the PCB was changed to PROCOPT=A (all: GET, INSERT, REPLACE, and DELETE). You should review the PCBs for other, similarly incompatible macro parameters.

[Figures 2a-2c: sample PSBs without and with I-O PCBs]
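As a minimal sketch of a converted PSB, with hypothetical database, segment and PSB names, relying on CMPAT=YES to supply the I-O PCB rather than coding one explicitly:

         PCB    TYPE=DB,DBDNAME=CUSTDBD,PROCOPT=A,KEYLEN=12
         SENSEG NAME=CUSTROOT,PARENT=0
         PSBGEN LANG=COBOL,PSBNAME=CUSTPSB,CMPAT=YES
         END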

The Programs

The MBR= parameter of the BMP procedure PARM identifies the executable load module. The PSB= parameter identifies the PSB used by the load module. Since IMS will always pass an I-O PCB to the program, the storage area for that block must be accounted for as the first data area in the program’s LINKAGE SECTION and on an ENTRY statement at the beginning of the PROCEDURE DIVISION. Although coding the data area names on the PROCEDURE DIVISION USING header is documented, the ENTRY statement is a more common coding technique (see Figure 3).

[Figure 3: COBOL LINKAGE SECTION and ENTRY statement for the I-O PCB]
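A minimal COBOL sketch, with hypothetical names and only the leading fields of each PCB mask shown:

       LINKAGE SECTION.
       01  IO-PCB.
           05  IO-LTERM-NAME     PIC X(8).
           05  FILLER            PIC XX.
           05  IO-STATUS-CODE    PIC XX.
       01  DB-PCB.
           05  DB-DBD-NAME       PIC X(8).
           05  DB-SEG-LEVEL      PIC XX.
           05  DB-STATUS-CODE    PIC XX.
       PROCEDURE DIVISION.
           ENTRY 'DLITCBL' USING IO-PCB, DB-PCB.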

Checkpoints are used to commit updates and free segment locks for access by other programs. Since DLIBATCH jobs had exclusive control of the databases, they may not have had checkpoints coded. BMPs will be sharing resources with online IMS transactions and, possibly, other BMPs, so periodic checkpoints are required to avoid holding database locks and other resources longer than necessary. If checkpoints already exist, the frequency may need adjusting to release resources more often for a BMP. Guidelines for setting checkpoints can be found in the IMS Application Programming: Design Guide.
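A basic checkpoint call might look like the following sketch; the checkpoint ID is hypothetical, and the exact parameter lists for the basic and symbolic CHKP calls should be verified in the manual:

       01  DLI-FUNC-CHKP   PIC X(4)       VALUE 'CHKP'.
       01  CHKP-ID-LEN     PIC S9(9) COMP VALUE 8.
       01  CHKP-ID         PIC X(8)       VALUE 'CHKP0001'.
      * Commit updates and release locks at regular intervals:
           CALL 'CBLTDLI' USING DLI-FUNC-CHKP, IO-PCB,
                                CHKP-ID-LEN, CHKP-ID.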

The APPLCTN Macro

The APPLCTN macro describes to the IMS control program the resources for the program running as a BMP. It’s assembled as part of the IMSGEN STAGE 1 process and becomes part of the input to the IMSGEN STAGE 2 process. An entry is required for each BMP:

APPLCTN   PSB=psbname,PGMTYPE=BATCH

Notice that the PSB name is used instead of the program name. This macro can be defined and assembled any time before the BMP is scheduled; a TRANSACT macro isn’t needed.

Database Dynamic Allocation

IMS databases can be defined as DD cards in the DLIBATCH job JCL or dynamically allocated when the job starts. Either way, they’re allocated at the beginning of the job step, before the application starts. If the databases aren’t JCL-allocated, dynamic allocation is always attempted unless it’s disabled by a NODYNALLOC statement in the DFSVSMxx PROCLIB member, where ‘xx’ is the VSPEC parameter of the DFSPBxxx PROCLIB start-up member and ‘xxx’ is the RGSUF symbolic in the PARM passed in the start-up JCL to the IMS control region program DFSMVRC0. The DFSMDA macro is used to define databases that can be dynamically allocated. The macro statements are instream data to an assembly job, followed by a link-edit into a load library available to the job through the JOBLIB or STEPLIB. The same job can have DD card-specified databases and dynamically defined databases. Figure 4 shows a sample dynamic allocation macro.

[Figure 4: sample dynamic allocation (DFSMDA) macro]
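A minimal sketch, with hypothetical database, DD and data set names:

         DFSMDA TYPE=INITIAL
         DFSMDA TYPE=DATABASE,DBNAME=CUSTDBD
         DFSMDA TYPE=DATASET,DDNAME=CUSTDD1,DISP=SHR,                  X
               DSNAME=IMS.CUSTDBD.DATA
         END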

The ACB

While the PCB describes the logical organization of a database, a Database Descriptor Block (DBD) describes the physical aspects (device type, database type, field and segment lengths, etc.) of a database. It’s created through a DBDGEN process. To be useful, the information in the PSB and DBD must be combined into an Application Control Block (ACB) in an ACBLIB. The primary difference between a DLIBATCH and DBBBATCH job is how the ACB is generated. For a DLIBATCH job, the ACB is built dynamically at step initialization. For a DBBBATCH job, the ACB is created offline by running an ACB maintenance utility, which makes a DBBBATCH job somewhat more efficient than a DLIBATCH job.

IMS won’t dynamically create an ACB for a BMP job, so one must be created offline and provided in an ACBLIB, using the ACBGEN procedure to create an ACBLIB entry with the same name as the PSB. The ACBLIB doesn’t need to be in the BMP procedure; it can be defined in the control region. Since the DLI job doesn’t use the ACB, it can be created any time before the conversion.
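A minimal sketch, assuming the IMS-supplied ACBGEN procedure and a hypothetical PSB name:

//ACBGEN   EXEC ACBGEN
//SYSIN    DD *
  BUILD PSB=(CUSTPSB)
/*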

Other Subsystem Considerations

If the DLI program accesses other mainframe subsystems such as WebSphere MQ and DB2, additional modifications are needed. For WebSphere MQ access, the object module must be re-link-edited using the WebSphere MQ for z/OS IMS stub CSQQSTUB instead of one of the batch stub programs. In addition, if not already done, an IMS PROCLIB Subsystem Member (SSM) entry must be created. A more detailed review of the requirements can be found in the article “The WebSphere MQSeries-IMS Interface” in the December 2007/January 2008 issue of z/Journal and the IBM MQSeries for z/OS Application Programming Reference and Guide manuals.
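A rough sketch of the re-link step, with hypothetical library and module names (CSQQSTUB is supplied in the WebSphere MQ load library):

//LKED     EXEC PGM=IEWL,PARM='LIST,XREF'
//SYSPRINT DD SYSOUT=*
//SYSLIB   DD DISP=SHR,DSN=MQM.SCSQLOAD
//OBJLIB   DD DISP=SHR,DSN=MY.OBJLIB
//SYSLMOD  DD DISP=SHR,DSN=MY.LOADLIB
//SYSLIN   DD *
  INCLUDE OBJLIB(MYPROG)
  INCLUDE SYSLIB(CSQQSTUB)
  NAME MYPROG(R)
/*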

There’s more than one way to establish a connection between a DLI batch program and a DB2 subsystem. How extensive the changes will be to convert to a BMP depends on how that connection is currently defined. In a DLI batch job, the DB2 subsystem is usually specified through an input data set, DDITV02, but an SSM member and a Resource Translation Table (RTT) created in the IMSGEN process may have been used. A DDITV02 data set entry will override the SSM member if both are specified. If MBR=DSNMTV01 is coded in the DLI job, the actual executed application program will be the specified PROG value in the DDITV02. Figure 5a shows a typical JCL reference to the DDITV02 data set entry.

[Figure 5a: JCL reference to the DDITV02 data set]
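The DDITV02 record is positional (subsystem name, LIT, ESMT, RTT, REO, CRC, connection name, plan, program); a sketch with hypothetical subsystem, connection, plan and program names:

//DDITV02  DD *
DB2P,SYS1,DSNMIN10,,R,-,CONN0001,MYPLAN,MYPROG
/*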

Entries for CONN_NAME, PLAN, and PROG aren’t valid in the SSM, so a combination of an SSM member and an RTT is required for a BMP. A sample SSM with two DB2 subsystems using the same RTT is shown in Figure 5b. The proper DB2 subsystem is determined by the LIT value specified in the DFSLI macro. Figure 5c shows the associated RTT, which relates the executing program to the DB2 plan in the DB2 subsystem specified in the SSM. The program name specified by the APN= parameter must match the MBR= name in the BMP job.
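A sketch of the corresponding pieces, with hypothetical names. First, an SSM member with two DB2 subsystems sharing one RTT:

DB2P,SYS1,DSNMIN10,DSNRTT01,R,-
DB2T,SYS2,DSNMIN10,DSNRTT01,R,-

Then the RTT source, built with the DSNMAPN macro and assembled and link-edited as DSNRTT01:

         DSNMAPN APN=MYPROG,PLAN=MYPLAN
         END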

The SSM and the RTT can be in either the control region or the dependent regions. Be careful to follow the proper search rules so the correct members are referenced if they reside in multiple loadlibs or multiple regions. You may have established site standards for specifying the libraries they reside in. Additional information on loadlib search order can be found in IMS Installation Volume 2: System Definition and Tailoring.

Figure 6 shows a sample implementation of how the IMS program, JCL and macros go together.

[Figure 6: sample implementation showing how the IMS program, JCL and macros fit together]

Conclusion

Extending the availability of mainframe subsystems, and the applications with proven value that run on them, is a significant technical challenge. That challenge can be met through careful analysis, planning, and cooperation among the responsible project teams. The challenges aren’t insurmountable; this article can help you identify what to look for in undertaking a project that will bring additional mainframe applications to the modern age.