Channel Subsystem Priority Queuing
Prioritizing I/O requests isn’t a new feature for z/OS. I/O requests could be prioritized on device queues in the operating system way back in MVS. Channel Subsystem Priority Queuing (CSSPQ) is an extension of I/O priority queuing, a concept that has evolved from MVS through OS/390 to z/OS over the years. Since the introduction of the Enterprise Storage Server (ESS), WLM has been able to set priorities on I/O requests, which are then honored by the control unit. CSSPQ extended the ability to prioritize I/O requests by addressing one more place where queues could form: the channel subsystem.
In an LPAR cluster, if important work is missing its goals due to I/O contention on channels shared with other work, it will be given a higher channel subsystem I/O priority than the less important work. This function works together with DCM. As additional channels are moved to the partition running the important work, CSSPQ is designed so the important work that really needs them receives the additional I/O resources.
WLM can set priorities on individual I/O requests, and the host then uses those priorities to schedule work onto channel subsystem resources. This lets the installation identify its most mission-critical workloads, and lets z/OS work with the hardware to give that critical work greater access to channel subsystem resources.
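Conceptually, the channel subsystem dispatches queued I/O requests highest-priority first, preserving arrival order among requests of equal priority. The sketch below illustrates that scheduling idea only; the workload names and numeric priority values are hypothetical, and this is in no way the actual z/OS implementation.

```python
import heapq

def dispatch_order(requests):
    """Return I/O requests in priority-queue dispatch order:
    highest priority first, FIFO among equal priorities.

    `requests` is a list of (priority, name) tuples, where a larger
    priority value means more important work (illustrative convention).
    """
    heap = []
    for seq, (priority, name) in enumerate(requests):
        # Negate the priority so Python's min-heap behaves as a max-heap;
        # the arrival sequence number breaks ties in FIFO order.
        heapq.heappush(heap, (-priority, seq, name))
    return [name for _, _, name in
            (heapq.heappop(heap) for _ in range(len(heap)))]
```

With hypothetical workloads, `dispatch_order([(1, "batch"), (3, "cics"), (2, "tso")])` dispatches the online work ahead of batch even though batch arrived first.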
We’ve reviewed the basic concepts behind QoS and discussed some of the ways QoS is currently being addressed in FICON storage networks. While these mechanisms are sound, they address QoS in only one small segment of the configuration—typically between cascaded FICON directors. What’s needed is a QoS mechanism that enables end-to-end QoS functionality from host to storage control unit. A follow-up article will take a detailed look at DCM and CSSPQ and how they could be adapted for FICON.
Fast Forward to 2014
Fast forward now to September 2014. Let’s briefly look at what has changed and brought us to where we want to be. First, and most important, support for FICON DCM was announced by IBM in fall 2010. All of the functionality described in the original 2008 article that existed for ESCON DCM is now supported for FICON, with the caveat that the FICON directors in the configuration have the FICON Control Unit Port (CUP) function enabled.
Second, additional CUP commands and programming have been added to the FICON Director Programming Interface. This added CUP functionality gives the zEnterprise added insight into the performance characteristics of the FICON SAN.
Third, improved functionality with FICON director buffer credit configuration has allowed for more consistent performance on FICON interswitch links (ISLs).
Fourth, virtual channel technology on FICON SAN switching devices has improved significantly over the past six years.
Finally, Class-Specific Control (CS_CTL)-based frame prioritization as a QoS option in a SAN is now a reality. CS_CTL-based frame prioritization allows you to mark the frames between a host and a target as having high, medium or low priority, depending on the value of the CS_CTL field in the Fibre Channel (FC) frame header. This field can be populated by selected end devices (storage or host) and is then honored by the switch, which allocates appropriate resources throughout the fabric based on the CS_CTL value in each frame. This method of establishing QoS is an alternative to the switch-controlled assignment that uses zone-based QoS; in other words, it is host-controlled QoS.
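To make the mechanism concrete, the sketch below shows how a CS_CTL byte might be placed into, and read back from, an FC frame header word, then bucketed into high/medium/low. In the FC header, CS_CTL occupies the high-order byte of word 1 (the byte ahead of S_ID). The specific value ranges used here are illustrative assumptions, not any vendor's actual mapping, which is set by switch configuration.

```python
# Hypothetical high/medium/low value ranges for the 8-bit CS_CTL field.
# Real mappings are defined by switch-vendor QoS configuration.
HIGH_RANGE = range(17, 32)
MEDIUM_RANGE = range(9, 17)
LOW_RANGE = range(1, 9)

def set_cs_ctl(word1: int, cs_ctl: int) -> int:
    """Place an 8-bit CS_CTL value into bits 31-24 of FC header word 1,
    leaving the 24-bit S_ID in bits 23-0 untouched."""
    return (word1 & 0x00FFFFFF) | ((cs_ctl & 0xFF) << 24)

def frame_priority(word1: int) -> str:
    """Classify a frame by the CS_CTL byte in header word 1, using the
    illustrative ranges above."""
    cs_ctl = (word1 >> 24) & 0xFF
    if cs_ctl in HIGH_RANGE:
        return "high"
    if cs_ctl in MEDIUM_RANGE:
        return "medium"
    if cs_ctl in LOW_RANGE:
        return "low"
    return "default"
```

The end device (host or storage) would set the byte when building the frame; the switch would read the same byte to decide which fabric resources (for example, which virtual channel) the frame gets.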
We’ve come a long, long way since 2008. It’s taken a while to get here. My good friend who helped me write the original article, Dennis Ng, retired from IBM earlier this year. Dennis and I presented this material at SHARE in 2008 and 2009 as well. A future article in Enterprise Tech Journal will discuss the technical aspects of the new developments outlined above in more depth.