Wachovia Achieves High Availability With CICS

In the June/July 2007 issue of z/Journal, we explained the extensive design and rebuild that Wachovia decided was necessary to put its critical applications into an environment that guaranteed high availability and performance. Many months of design, planning, and configuration were required before the first application could be moved and tested.

Many companies will find that building a new configuration is the best option when faced with designing a highly available CICS environment. Unfortunately, most CICS environments were built many years ago, and even though regions have been separated into Terminal Owning Regions (TORs), Application Owning Regions (AORs), and even File Owning Regions (FORs), they weren’t built to support today’s high-availability model. When companies upgrade to new releases or versions, the underlying software and libraries may change, but systems programmers still install everything into the existing CICS regions. The transactions, programs, and other resources still execute in the same configuration, route to the same AORs, and expect the same resources to be available as before the new software was installed. Some customers even build CICS regions for a specific application, which ties that application’s availability to the availability of a single region. Ask technical folks why their environment is built the way it is and you’ll probably get the standard answer, “We’ve always done it this way.”

Well, that’s not always good. It likely limits the customer’s ability to use new functions and enhancements available in new releases, and it certainly makes moving to a highly available, cloned environment cumbersome. Wachovia decided to “bite the bullet” and build its new environment on what IBM and many customers recognize as best practices. The previous article covered how new names, new regions, and all the resources going into the new environment were standardized and made consistent. The effort wasn’t easy and took considerable time and energy, but the investment paid off. This article examines how Wachovia took the new environment that was built, moved applications into it, and began testing. The results are positive.

Moving Applications Into the New Configuration

After Wachovia’s new CICS parallel sysplex environment was created, the next step was to move applications into it. Most of the actual “plexing” work fell into the various IBM systems programmers’ hands, but it was critical for Wachovia to make each application’s production cloning implementation a non-event. It took teamwork and familiarity with the new parallel sysplex principles to achieve this goal. Application programmers became familiar with the new parallel sysplex configuration (see Figure 1) and began to understand the new online CICS environment as they learned how implementing high-availability techniques affected their online applications. This article will share the setup and learning experiences encountered in moving to a CICS high-availability environment.

Here are the steps to CICS high availability that worked effectively from an implementation perspective. This article will offer more detail about many of these steps:

  • The parallel sysplex technical team trained the application groups involved in the plexing process so everyone understood the technical pieces of a parallel sysplex and the responsibilities that belonged to each team.
  • Teams met weekly to track progress and work through issues.
  • The CICS group ran the affinity utility (the CICS Transaction Affinities Utility) to find all the affinities that needed to be individually addressed for each application.
  • The CICS group created a list of shared Temporary Storage Queue (TSQ) patterns and provided it to the application group for validation (a sample shared TSQ model definition appears after this list).
  • Affinities that could be resolved with CICS definitions were handled by adding those definitions to the new cloned CICS regions.
  • If any affinities needed to be fixed by the application through coding changes, the affinities were removed and the application changes were moved to production.
  • If any affinities couldn’t be fixed, the affected transactions still attained higher availability through a special CICSPlex SM definition with a userid affinity.
  • If the data for an application was in DB2, the DB2 group moved the DB2 data to a shared DB2 environment while the application still resided in the existing single CICS region.
  • If the data for an application was in VSAM, the application group prepared the VSAM files by converting them from regular VSAM to VSAM Record Level Sharing (RLS), assigning new Storage Management Subsystem (SMS) storage and data class parameters via IDCAMS delete/define/repro jobs (see the IDCAMS sketch after this list).
  • After the files were defined with the appropriate parameters to invoke SMSVSAM, the CICS group changed them to VSAM RLS in the CICS File Control Table (FCT) in the existing single CICS region (a file definition sketch also follows the list).
  • The CICS team defined the application transactions to CICSPlex SM, pointing them to the proper cloned set using a standard CICSPlex setup script.
  • The CICS team created and installed the application objects in the new CICS cloned set.
  • The MQ team worked with the application group to implement the changes necessary to invoke MQ shared queues while the application was still running in the single existing CICS region (a sample shared-queue definition follows the list).
  • The CICS and communications groups configured the TCP/IP ports as shared, using distributed Dynamic VIPA (DDVIPA); a profile fragment follows the list.
  • The application group and its business unit set up reusable scripts to test different failure scenarios. These scripts were rerun until everyone felt comfortable that the application performed suitably and continued to work without any errors during subsystem failures.
  • The applications were implemented in the new cloned environment in production by moving the MQ triggers and bridges as well as the TCP/IP ports and green-screen transactions on implementation day.
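
For the shared TSQ patterns, CICS matches queue names against Temporary Storage model (TSMODEL) definitions; a queue whose name matches a model that names a pool is written to a coupling facility TS pool that every cloned region can reach. Here’s a minimal sketch of such a definition; the model, group, prefix, and pool names are illustrative, not Wachovia’s actual values:

    CEDA DEFINE TSMODEL(ACCTSAVE) GROUP(HACLONE)
         PREFIX(ACCT)
         POOLNAME(PRODTS1)

Any queue whose name begins with ACCT then lives in the PRODTS1 pool, which a TS data-sharing server manages in the coupling facility, so the next leg of a pseudo-conversation can read it from any region in the cloned set.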
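
The VSAM conversion described above amounts to a backup, delete, redefine, and reload. A minimal IDCAMS sketch appears below; every data set, class, and job name is illustrative, and the SMS storage class is assumed to have been set up beforehand (via ISMF) with a CACHESET so SMSVSAM can use it for RLS:

    //RLSCONV  JOB (ACCT),'VSAM TO RLS',CLASS=A
    //CONVERT  EXEC PGM=IDCAMS
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD *
      /* Back up the file (backup data set allocated beforehand) */
      REPRO INDATASET(PROD.ACCT.MASTER) OUTDATASET(PROD.ACCT.BACKUP)
      /* Delete and redefine with RLS-capable SMS classes */
      DELETE PROD.ACCT.MASTER CLUSTER
      DEFINE CLUSTER (NAME(PROD.ACCT.MASTER) -
             INDEXED KEYS(16 0) RECORDSIZE(250 250) -
             STORAGECLASS(SCRLS) DATACLASS(DCRLS) -
             LOG(UNDO))
      /* Reload the data into the RLS-capable cluster */
      REPRO INDATASET(PROD.ACCT.BACKUP) OUTDATASET(PROD.ACCT.MASTER)
    /*

Once SMSVSAM owns the data set, the region’s file definition is switched to RLS mode. In RDO terms (again with illustrative names), the change looks like:

    CEDA DEFINE FILE(ACCTMAST) GROUP(HACLONE)
         DSNAME(PROD.ACCT.MASTER)
         RLSACCESS(YES) READINTEGRITY(UNCOMMITTED)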
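
On the MQ side, moving work into a queue-sharing group means redefining each affected queue as a shared queue backed by a coupling facility structure, so any queue manager in the group can serve it. A short MQSC sketch, with an invented queue and structure name:

    * Shared local queue, visible to every queue manager in the
    * queue-sharing group (names are illustrative)
    DEFINE QLOCAL(WB.ACCT.REQUEST) +
           QSGDISP(SHARED) +
           CFSTRUCT(APPLSTR1)

Because the queue is no longer tied to one queue manager, the triggers and bridges can be pointed at whichever cloned region is available on implementation day.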
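
The shared TCP/IP ports rely on a distributed Dynamic VIPA, which the sysplex distributor spreads across the TCP/IP stacks in the sysplex; SHAREPORT on the PORT statement additionally lets several cloned regions on one stack listen on the same port. A minimal PROFILE.TCPIP fragment, with invented addresses and port numbers:

    ; Distributed DVIPA for the cloned CICS listeners
    VIPADYNAMIC
      VIPADEFINE 255.255.255.0 10.20.30.40
      VIPADISTRIBUTE DEFINE 10.20.30.40 PORT 4001 DESTIP ALL
    ENDVIPADYNAMIC

Clients connect to 10.20.30.40 port 4001; if an LPAR or stack fails, the distributor simply stops routing new connections to it.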

Removing Affinities

Before an application can run in a parallel sysplex environment, the application’s affinities must be addressed and/or removed. The first step in cloning a CICS application is to identify its affinities. An affinity exists when a transaction depends on state that lives in only one region. From an application perspective, a transaction can span several presses of the Enter key or clicks of a mouse before the entire logical unit of work is complete. If the first part of an application transaction runs in one region, and the second part of that same logical unit of work runs in a second region, the second region must be able to reach the “save areas” the first transaction left behind to pass to the second one. That dependency is an affinity.
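
A small COBOL fragment makes the “save area” handoff concrete. It’s illustrative only (the queue and transaction names are invented): unless the queue is shared or routing is constrained, the second leg must run in the region where the queue was written, and that dependency is the affinity:

    * Leg 1: save the state, then end the task pseudo-conversationally
         EXEC CICS WRITEQ TS QUEUE('ACCTSAVE')
              FROM(WS-SAVE-AREA) RESP(WS-RESP)
         END-EXEC
         EXEC CICS RETURN TRANSID('TRN2') END-EXEC
    * Leg 2 (transaction TRN2, on the next Enter key): read it back
         EXEC CICS READQ TS QUEUE('ACCTSAVE')
              INTO(WS-SAVE-AREA) RESP(WS-RESP)
         END-EXEC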

If an application you’re considering cloning is a package, it’s wise to ask the vendor whether a newer version of the software takes care of any affinities. Most applications, however, can be moved to the new cloned parallel sysplex environment with no application coding changes; all that’s needed is some new VSAM IDCAMS parameter settings if the application uses VSAM, plus plenty of testing and change coordination when the application moves into production.
