Workload batching is the term applied to an effect seen in heavy workloads that span multiple CMAS environments, where dynamic distributed (DSRTPGM) routing requests are being processed. A target region may be managed by a different CMAS from the routing region, typically because they reside in different LPARs. In that circumstance, the router evaluates the target's status from a copy of the descriptor structure rather than from the actual structure employed by the target itself.
The copied target descriptor being reviewed is synchronized with the actual descriptor at 15-second intervals. Between these 15-second heartbeats, the router has a less accurate view of the target's status than of other potential target regions in the workload, and it continues to base its routing decisions on the last known valid data. Eventually, a heartbeat occurs and the data is refreshed. Compared to other regions, the target could now appear either extremely busy or completely idle. The router reacts by routing work aggressively toward or away from the target. This can cycle the region between high and low throughput on each heartbeat boundary. This workload batching state continues until there is a genuine lull in workload throughput, which settles the batching down until the throughput picks up again.
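The oscillation described above can be illustrated with a small simulation. This is a hedged sketch, not CICSPlex SM code: the region names, arrival and completion rates are invented, and the router simply sends every arrival to whichever target looked least busy at the last heartbeat refresh.

```python
# Toy model of workload batching: the router routes on a snapshot of
# target load counts that is refreshed only every 15 ticks (the
# heartbeat), so all work floods whichever target looked least busy
# at the last refresh. All names and rates are illustrative.

HEARTBEAT = 15           # seconds between descriptor refreshes
ARRIVALS_PER_SEC = 10    # dynamically routed tasks arriving per second
COMPLETIONS_PER_SEC = 6  # tasks each region finishes per second

true_load = {"REGION_A": 0, "REGION_B": 0}  # actual task counts
snapshot = dict(true_load)                  # router's stale copy

for second in range(60):
    if second % HEARTBEAT == 0:
        snapshot = dict(true_load)          # heartbeat: refresh the copy
    # route every arrival to the target that *looked* least busy
    target = min(snapshot, key=snapshot.get)
    true_load[target] += ARRIVALS_PER_SEC
    for region in true_load:                # regions drain their work
        true_load[region] = max(0, true_load[region] - COMPLETIONS_PER_SEC)
    if second % HEARTBEAT == HEARTBEAT - 1:
        # at each heartbeat boundary the busy/idle roles flip
        print(second + 1, true_load)
```

Running the loop shows one region pinned at a high task count while the other sits idle, with the roles reversing at every heartbeat boundary, which is exactly the snapshot behavior described above.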
A user watching the task loading across the CICSplex will see some regions running at their MAXTASK limits and being continually fed with dynamically routed traffic while others remain unused. A snapshot taken 15 seconds later will probably show a reversal of utilization: the busy regions will be idle, and the idle regions will be at their MAXTASK limit. The users most susceptible to these events are those who use MQ triggers to feed transactional data into their CICSplexes, where the trigger regions tend to be managed by different CMASs. Those users would see the greatest benefit from Sysplex optimized workload routing.
Sysplex Optimized Workloads
When CICSPlex SM was originally conceived, a single data space was considered to be a wide enough scope to provide a common data reference point for all regions in the CICSplex. Today, that’s no longer true. The mechanism chosen to broaden the scope of these common points of reference is the z/OS coupling facility. However, the content of the WLM data space hasn’t simply been migrated into the coupling facility; some internal re-engineering was also undertaken.
Routing regions are currently responsible for adjusting the target region load counts that WLM uses to determine task loads. On every heartbeat, the CICSPlex SM agent in the user CICS region reports its task count to its owning CMAS. The CMAS then updates the load count in the target region descriptor of its WLM data space and broadcasts that value to the other CMASs participating in workloads associated with the user CICS region.
For Sysplex optimized workloads, this is turned around. When a target region runs in optimized mode, the target region is responsible for maintaining the reported task count. CICS does this counting in the transaction manager; the count includes instances of all tasks in the CICS region, not just those that are dynamically routed. This load value for the CICS region, along with its basic health status, is periodically broadcast to the coupling facility where other CICS components can interrogate it.
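The reversal of responsibility can be sketched in a few lines. This is a minimal model, not CICSPlex SM internals: the shared dictionary stands in for the coupling facility, and the class and method names are invented for illustration.

```python
# Sketch of the optimized flow: the target region maintains its own
# task count (all tasks, not just dynamically routed ones) and
# publishes it, with basic health, to a shared store standing in for
# the coupling facility. Every router then reads the same current
# value instead of a per-CMAS copy refreshed on a heartbeat.

coupling_facility = {}  # stands in for the region status CFDT pool

class TargetRegion:
    def __init__(self, name):
        self.name = name
        self.tasks = 0      # counted in the transaction manager

    def attach_task(self):
        self.tasks += 1     # any task attach, routed or local
        self.publish()

    def end_task(self):
        self.tasks -= 1
        self.publish()

    def publish(self):
        # broadcast load value plus basic health status
        coupling_facility[self.name] = {"tasks": self.tasks,
                                        "healthy": True}

region = TargetRegion("TARG1")
region.attach_task()
region.attach_task()
print(coupling_facility["TARG1"]["tasks"])  # every router sees 2 now
```

The key difference from the non-optimized flow is that no CMAS-to-CMAS broadcast is needed: the count is written once by its owner and read by everyone.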
At the CICSPlex SM level, a router knows whether this region status data is available and, when it is, factors it into its dynamic routing decisions in preference to its original data space references. This means routing regions review the same status data for a potential target region, regardless of which CMAS manages it. Therefore, the routing region always evaluates a target region against current status data rather than status data that could be up to 15 seconds old. In an environment where all routing targets are in a similar state of health and connectivity, the spread of work across the workload target scope is far more even than in non-optimized mode. However, all the original data space processing remains intact. This is necessary to maintain a seamless fallback mechanism should the coupling facility become unavailable.
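The prefer-then-fall-back behavior can be summarized in a short sketch. Again, this is a hypothetical illustration, not the actual routing algorithm: the function name, data shapes, and least-loaded selection rule are assumptions for the example.

```python
# Hypothetical routing decision: prefer the shared region status data
# when it is available, and fall back to the (possibly 15-second-old)
# data space copy when the coupling facility cannot be reached.

def pick_target(candidates, cf_status, dataspace_copy):
    """Return the least-loaded healthy candidate.
    cf_status is None when the coupling facility is unavailable."""
    def load(region):
        if cf_status is not None and region in cf_status:
            return cf_status[region]["tasks"]   # current shared data
        return dataspace_copy[region]["tasks"]  # last heartbeat's data
    healthy = [r for r in candidates
               if (cf_status or dataspace_copy)[r].get("healthy", True)]
    return min(healthy, key=load)

# current shared data and a stale data space copy disagree:
cf = {"A": {"tasks": 3, "healthy": True},
      "B": {"tasks": 9, "healthy": True}}
stale = {"A": {"tasks": 12, "healthy": True},
         "B": {"tasks": 1, "healthy": True}}

print(pick_target(["A", "B"], cf, stale))    # optimized mode picks "A"
print(pick_target(["A", "B"], None, stale))  # fallback picks "B"
```

Because the data space path is kept intact, losing the coupling facility only degrades the decision back to heartbeat-aged data; routing itself never stops.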
Switching Workload to Optimized State
For a workload to operate in a fully optimized state, all regions in the workload must be at the CICS TS V4.1 level or higher, and a CICS region status server must be running in the same z/OS image as each region in the workload. This server is a batch address space running a specialized, properly configured CICS Coupling Facility Data Table (CFDT) server. It must manage the same CFDT pool name as that identified in the CICSplex definition (CPLEXDEF) for the CICSplex that will encompass your workload. The default pool name is DFHRSTAT. You may choose a different pool name, or even a pool name that already exists in your z/OS configuration. However, a discrete pool name dedicated to region status exploitation is highly recommended. Otherwise, access to user tables in the pool may be degraded by WLM operation, and vice versa.
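As an illustration, a region status server is typically started with JCL along these lines. This is a hedged sketch only: the job name, library data set name, and sizing values are placeholders, and you should take the exact startup parameters from the CICS TS documentation for your release.

```jcl
//RSSERVER JOB ...
//* Region status server: a CFDT server dedicated to the
//* DFHRSTAT pool named in the CPLEXDEF for this CICSplex.
//CFSERVER EXEC PGM=DFHCFMN,REGION=40M,TIME=NOLIMIT
//STEPLIB  DD DSN=your.cicsts.SDFHAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
POOLNAME=DFHRSTAT        Pool name matching the CPLEXDEF
/*
```

Keeping this pool separate from any existing user CFDT pools avoids the contention between WLM access and user table access noted above.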