By intelligently managing workloads, reprioritizing work, and dynamically reallocating system resources between applications, the z/OS Workload Manager (WLM) and IBM System z can handle unexpected workload spikes and improve system efficiency while meeting your application and business priorities.
You communicate workload goals to WLM through your WLM service policy. This policy contains classification rules that sort your work into service classes. The service class periods give WLM the goals and importance of your workloads. Goals are expressed in terms of response time (average or percentile) for transactional work and velocity for long-running work. The importance of the service class period, which determines how quickly WLM responds to service class periods that are missing their goals, is expressed as a value from one through five, where one is high importance and five is low.
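To make these concepts concrete, here is a minimal sketch in Python that models service class periods with their goals and importance. The class and field names are illustrative only; they are not a real WLM API, and real service definitions are created through the WLM ISPF application, not code.

```python
# Illustrative model of WLM service policy concepts; all names are hypothetical.
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class ResponseTimeGoal:
    seconds: float                    # target response time
    percentile: Optional[int] = None  # e.g. 90 = "90% finish within seconds"; None = average

@dataclass
class VelocityGoal:
    velocity: int                     # 1-99: how fast ready work should run

@dataclass
class ServiceClassPeriod:
    goal: Union[ResponseTimeGoal, VelocityGoal]
    importance: int                   # 1 (high) through 5 (low)

# A transactional period: 90% of requests should complete within 0.5 seconds
ddf_high = ServiceClassPeriod(ResponseTimeGoal(0.5, percentile=90), importance=2)

# A long-running started-task period managed to a velocity goal
stc_medium = ServiceClassPeriod(VelocityGoal(40), importance=3)

print(ddf_high.importance, stc_medium.goal.velocity)
```

The two goal types mirror the distinction in the text: response time goals suit short transactions, while velocity goals suit long-running work that has no meaningful per-transaction response time.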
Today, your z/OS system is filled with transactions and server address spaces of all types—remote DB2 queries, stored procedures, WebSphere Application Server (WAS), CICS, IMS, WebSphere MQ, and several UNIX daemons. Let’s explore how your WLM service policy is used to manage these transaction and server workloads.
WLM manages three kinds of z/OS workloads: address spaces, enclaves, and CICS and IMS transactions, which form a category of their own. Everyone understands z/OS address spaces, but what about enclaves? They’re a type of z/OS work where transactions are managed independently of the address space in which they’re running. Running transactions in enclaves lets WLM assign each transaction a goal, and therefore a dispatching priority, separate from the address space’s goal and dispatching priority.
Task Control Block (TCB) work and pre-emptible Service Request Block (SRB) work (called enclave SRBs) are scheduled into an enclave and associated with its goal. These enclave SRBs are pieces of work z/OS can separately dispatch. Distributed DB2 transactions, DB2 stored procedures, and WAS transactions all use enclave SRBs; CICS and IMS subsystems don’t use enclaves. CICS and IMS transactions aren’t pieces of work that z/OS can separately dispatch.
When WLM manages workloads such as distributed DB2, WAS, and CICS and IMS, it sees two kinds of work—the address spaces and the transactions that are inside (see Figure 1). For all these subsystems, the address spaces are started tasks. In the WLM classification rules (part of your service policy), these started task address spaces are classified using the STC subsystem rules. Long-running started task address spaces are assigned to service classes containing a velocity goal. Some installations start CICS regions as batch jobs. In this case, the Job Entry Subsystem (JES) rules would be used to assign a service class to the CICS region address spaces.
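The classification described above can be sketched as a first-match lookup over subsystem-type rules. This is a hedged illustration, assuming made-up rule patterns and service class names; real classification rules live in the WLM service definition and support many more qualifiers than shown here.

```python
# Hypothetical sketch of WLM classification: each rule names a subsystem type
# (STC, JES, DDF, ...) and a qualifier pattern; the first matching rule wins.
import fnmatch

# (subsystem_type, qualifier_pattern, service_class) -- all names illustrative
rules = [
    ("STC", "*DIST", "STCHIGH"),   # e.g. DB2 DDF address spaces
    ("STC", "*",     "STCMED"),    # all other started tasks
    ("JES", "CICS*", "CICSRGN"),   # CICS regions started as batch jobs
]

def classify(subsystem: str, name: str, default: str = "SYSOTHER") -> str:
    """Return the service class for a piece of work (first match wins)."""
    for subsys, pattern, service_class in rules:
        if subsys == subsystem and fnmatch.fnmatch(name, pattern):
            return service_class
    return default

print(classify("STC", "DB2ADIST"))  # STCHIGH
print(classify("JES", "CICSPROD"))  # CICSRGN
```

The ordering of rules matters: the catch-all `("STC", "*", ...)` entry must come after the more specific `*DIST` rule, just as more specific rules precede general ones in a real service definition.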
Let’s apply these principles to the distributed DB2 transactional workload, also known as the Distributed Data Facility (DDF). The address space that initially handles these remote queries is usually named xxxxDIST, and you start it as a started task. The WLM STC classification rules assign this new started task to a service class with a velocity goal. Then the remote queries begin to arrive over the network. Each of these remote queries runs in an enclave, and each of these enclaves must be assigned a goal taken from an appropriate service class.
For this assignment to occur, WLM uses DDF classification rules. The DDF rules sort the remote queries into one or more service classes, where the stated goal results in WLM giving each enclave a dispatching priority separate from the dispatching priority assigned to the started task address space. Because the enclave created at this point is for new work arriving to z/OS and needs to be classified into a service class, it’s known as an independent enclave. At this point, the remote query running in an enclave in the xxxxDIST address space makes an “address space-to-address space” program call to the DBM1 address space, where the query actually runs. The query running in DBM1 runs under the goal assigned to the enclave (see Figure 2).
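The DDF flow above can be sketched as follows. This is a simplified illustration, assuming invented rule patterns, service class names, and priority numbers; the point is only that the independent enclave receives its own service class and dispatching priority, distinct from the xxxxDIST address space that received the query.

```python
# Hedged sketch of independent enclave creation for DDF work; all names,
# rules, and priority values are illustrative, not real WLM internals.
import fnmatch
from dataclasses import dataclass

@dataclass
class Enclave:
    service_class: str
    dispatch_priority: int

# Hypothetical DDF classification rules: userid pattern -> (class, priority)
ddf_rules = [
    ("BATCHQ*", "DDFLOW", 120),
    ("*",       "DDFMED", 180),
]

def create_independent_enclave(userid: str) -> Enclave:
    """Classify a newly arrived remote query into its own enclave."""
    for pattern, svc, prio in ddf_rules:
        if fnmatch.fnmatch(userid, pattern):
            return Enclave(svc, prio)
    return Enclave("SYSOTHER", 100)

dist_as_priority = 200  # illustrative priority of the xxxxDIST started task
enclave = create_independent_enclave("BATCHQ01")

# The query runs in DBM1 under the enclave's goal, independent of xxxxDIST
assert enclave.dispatch_priority != dist_as_priority
print(enclave.service_class, enclave.dispatch_priority)
```

Note how the enclave’s priority comes entirely from the DDF rules, never from the STC rules that classified xxxxDIST; that separation is exactly what makes the enclave “independent.”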