The downside of dynamically adding resources from a pool is that other services experiencing the same peak-hour pressure may be attempting to obtain pooled resources at the same time. This can lead to resource pool exhaustion and unhappy service users, as the services can’t meet their SLAs.
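To make the exhaustion scenario concrete, here’s a minimal sketch in Python. The pool size, service names, and demand figures are illustrative only, not drawn from any real product:

```python
# A minimal sketch of shared-pool exhaustion during overlapping peaks.
# Capacity, service names, and request sizes are illustrative assumptions.

class ResourcePool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.allocated = {}

    def request(self, service, units):
        """Grant the request only if enough free capacity remains."""
        free = self.capacity - sum(self.allocated.values())
        if units <= free:
            self.allocated[service] = self.allocated.get(service, 0) + units
            return True
        return False  # pool exhausted: this service will miss its SLA

pool = ResourcePool(capacity=100)
print(pool.request("billing", 60))   # True  -- first peak is absorbed
print(pool.request("orders", 30))    # True  -- still fits
print(pool.request("reports", 25))   # False -- concurrent peaks exhaust the pool
```

The third request fails not because the pool is misconfigured, but because two other services hit their peaks first.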
Let’s consider how the mainframe plays in this arena. If you’re running z/OS on a z9, you can use Intelligent Resource Director (IRD) to cause a “donation” of CP cycles from clustered Logical Partitions (LPARs). You accomplish this by having PR/SM and Workload Manager (WLM) converse when a Suffering Service Class Period (SSCP) occurs. An SSCP occurs when CPU delays are detected and WLM can’t resolve the delay by adjusting dispatch priorities within an LPAR.
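The decision flow just described can be caricatured in a few lines of Python. This is a toy model only; the LPAR names, weights, and donation amount are invented, and the real WLM/PR/SM conversation is far more sophisticated:

```python
# Toy illustration of the IRD flow described above -- NOT actual WLM/PR/SM
# logic. LPAR names, weights, and the donation amount are invented.

def handle_cpu_delay(lpar, cluster_weights, can_fix_by_priority):
    """If WLM can't resolve a CPU delay inside the LPAR by adjusting
    dispatch priorities, shift CP weight from a donor LPAR in the cluster."""
    if can_fix_by_priority:
        return f"{lpar}: resolved by dispatch-priority adjustment"
    # Suffering Service Class Period: pick the cluster member with the
    # most weight and have it "donate" some to the suffering LPAR.
    donor = max((l for l in cluster_weights if l != lpar),
                key=lambda l: cluster_weights[l])
    cluster_weights[donor] -= 10
    cluster_weights[lpar] += 10
    return f"{lpar}: received weight donated by {donor}"

weights = {"LPAR1": 50, "LPAR2": 30, "LPAR3": 20}
print(handle_cpu_delay("LPAR3", weights, can_fix_by_priority=False))
# weights is now {'LPAR1': 40, 'LPAR2': 30, 'LPAR3': 30}
```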
The IRD also provides Dynamic Channel-path Management (DCM), which moves channel bandwidth where needed. In simplest terms, the goal of the IRD’s DCM is to equally distribute I/O activity across all DCM-associated channel paths attached to the Central Processor Complex.
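That balancing goal can be sketched as a simple rebalancing loop. The channel path names and load figures below are illustrative, not how DCM is actually implemented:

```python
# Toy sketch of the DCM goal: spread I/O activity evenly across managed
# channel paths. Path names and load units are illustrative assumptions.

def rebalance(path_load):
    """Shift one unit of work from the busiest path to the least busy one
    until the spread is within one unit -- an even distribution."""
    while max(path_load.values()) - min(path_load.values()) > 1:
        busiest = max(path_load, key=path_load.get)
        idlest = min(path_load, key=path_load.get)
        path_load[busiest] -= 1
        path_load[idlest] += 1
    return path_load

print(rebalance({"CHP00": 9, "CHP01": 3, "CHP02": 3}))
# {'CHP00': 5, 'CHP01': 5, 'CHP02': 5}
```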
The IRD’s third feature, Channel Subsystem Priority Queuing, is designed so the important work that really needs additional I/O resource receives it, rather than whatever other work happens to be running in the same LPAR cluster.
Another I/O feature providing parallelism and bandwidth is dynamic Parallel Access Volume (PAV), which supports more than one I/O to a single device at a time. Multiple I/O control blocks are assigned to a single physical disk drive. The base and alias control blocks allow multiple I/O operations to be started and remain in execution against the single physical arm of the disk drive. This reduces, and sometimes eliminates, queuing. It’s similar to aspects of Small Computer System Interface (SCSI) in smaller systems.
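A back-of-the-envelope sketch shows why extra exposures cut queuing. The request counts are illustrative; the point is simply that concurrent control blocks divide the queue:

```python
# Sketch of why base + alias addresses cut queuing: with N concurrent
# "exposures" to one volume, requests that would serialize behind a single
# control block can start in parallel. Figures are illustrative.

import math

def max_queue_depth(requests, exposures):
    """Worst-case queue depth if `requests` arrive at once and the device
    offers `exposures` concurrent I/O control blocks (1 = no PAV)."""
    return math.ceil(requests / exposures)

print(max_queue_depth(8, 1))  # 8 -- everything queues behind one control block
print(max_queue_depth(8, 4))  # 2 -- base + 3 aliases start four I/Os at once
```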
CICS Version 3.2 provides enhanced support for many Web services-related protocols and standards. This lets CICS participate in a company’s Web business functions with the full gamut of current exploitation, security, and provisioning capabilities.
The other approach to peak-hour provisioning is to over-provision the required resources when the service is first initiated. While this is the simplest technique, it’s also the most expensive, and it still requires human monitoring over time.
With over-provisioning, the entity paying for the service buys resources that are needed only at peak demand and pays for them even when demand is trivial. The service provider also must monitor the scenario as more parties begin using the service, to see whether anyone else’s peak demands are interfering with, and grabbing, the over-provisioned resources.
A second concept is to over-provision at a specific time of day in anticipation of the peak demand period. This can be done via peak demand templates that define the average asset and resource demands expected to occur, invoked by time-of-day triggers. Again, the service requester is paying for resources that may or may not be required on any specific day. The difference here is that they’ve agreed to it as part of the SLA used to define the template.
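A minimal sketch of such a template-driven approach might look like the following. The template times, resource names, and figures are invented for illustration; a real SLA would define its own:

```python
# Sketch of time-of-day peak-demand templates, per the SLA-driven approach
# above. Template windows and resource figures are illustrative assumptions.

from datetime import time

# Each template records the average resource demand agreed to in the SLA.
PEAK_TEMPLATES = [
    {"start": time(8, 0),  "end": time(11, 0), "cpus": 16, "io_paths": 8},
    {"start": time(13, 0), "end": time(16, 0), "cpus": 12, "io_paths": 6},
]

BASELINE = {"cpus": 4, "io_paths": 2}

def provision_for(now):
    """Return the resource levels to provision at clock time `now`."""
    for tpl in PEAK_TEMPLATES:
        if tpl["start"] <= now < tpl["end"]:
            return {"cpus": tpl["cpus"], "io_paths": tpl["io_paths"]}
    return BASELINE

print(provision_for(time(9, 30)))   # morning peak template is in effect
print(provision_for(time(12, 0)))   # off-peak: baseline levels
```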
The mainframe is the oldest, wisest architecture when it comes to running parallel, multiprocessing workloads. With the bells and whistles in the current System z hardware, z/OS, and strategic business function delivery platforms such as CICS, the mainframe can ensure proper provisioning levels for any Web service it must provide to a business solution.
Dynamic provisioning concepts such as those in System z IRD, together with CICS’s proven management of resources for both legacy and Web services, should appeal to service providers and service requesters alike as economically viable methods of controlling and addressing provisioning.
Let’s stop thinking of the mainframe in terms of “legacy” applications and start thinking of it in terms of the Web services it can provide, at the cheapest possible level of pooled resources, for provisioning the peaks and valleys of demand. Z