To Switch or Not to Switch? That’s the Question!


“What goes around comes around” is an old saying that’s especially true for information technology, particularly for data center management philosophies. We’ve gone from centralized to distributed and back again. As far as storage connectivity goes, we’ve gone from direct-attached to networked, and, in the mainframe space, we have a recurring question, “Do I need FICON switching technology, or should I go direct-attached?” With up to 288 FICON Express8 channels supported on a System z196, why not just direct-attach the control units?

With all the I/O improvements, now more than ever, you really need switching technology. This article will address the technical reasons why switched FICON is your best choice. A future article, focusing on the business case merits of switched FICON, will appear in an upcoming issue of Mainframe Executive.

There are five primary technical reasons why it’s wise to implement switched FICON rather than use point-to-point (direct-attaching) FICON for connecting the storage control units:

  • IBM’s reduction in the buffer credits available on FICON channels and its impact on distance and performance
  • Channel consolidation and utilization
  • Improved Reliability, Availability, and Serviceability (RAS)
  • Greater flexibility for future scalability and multi-site connectivity
  • New IBM FICON technologies that require a switched architecture.

Let’s explore each of these.

FICON Channel Buffer Credits

One of the important changes IBM made to FICON Express8 channels in summer 2009 involves the number of buffer credits on each port of a four-port channel card. FICON Express4 channels had 200 buffer credits per port; with FICON Express8 channels, this dropped to 40 buffer credits per port.

The number of buffer credits required for a given distance scales linearly with link speed: double the link speed, and you double the number of buffer credits required to achieve the same performance at the same distance. Now, recall the IBM System z10 Statement of Direction concerning buffer credits from 2008:

“The FICON Express4 features are intended to be the last features to support extended distance without performance degradation. IBM intends to not offer FICON features with buffer credits for performance at extended distances. Future FICON features are intended to support up to 10km without performance degradation. Extended distance solutions may include FICON directors or switches (for buffer credit provision) or Dense Wave Division Multiplexers (for buffer credit simulation).”

IBM held true to this statement: the 40 buffer credits per port on a four-port FICON Express8 channel card support up to 10km of distance for full-frame-size (2KB) I/Os. What happens with smaller-than-full-size frames? The distance supported by the 40 buffer credits increases. It’s also likely that, at future faster link speeds, the supported distance will decrease to 5km or less. By contrast, FICON Express4 channels, with 200 buffer credits per port, could support a direct-attached control unit at distances up to 100km. Depending on the specific model, FICON directors and switches typically have more than 1,300 buffer credits available per port for long-distance connectivity.
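The linear relationship between link speed, distance, and buffer credits can be illustrated with a rough rule-of-thumb calculation. The sketch below uses a simplified model (roughly half a credit per kilometer at 1Gbps for full-size frames, scaling linearly with link speed); it is an approximation for illustration, not a vendor sizing formula, and actual requirements depend on frame size and protocol overhead.

```python
def credits_needed(link_gbps, distance_km):
    """Approximate buffer credits needed to keep a link streaming
    full-size (2KB) frames over a given one-way distance.

    Simplified rule of thumb: ~0.5 credit per km at 1 Gbps,
    scaling linearly with link speed. Illustrative only.
    """
    return link_gbps * distance_km / 2

# FICON Express8 at 8 Gbps: 40 credits per port cover roughly 10km
print(credits_needed(8, 10))    # 40.0

# FICON Express4 at 4 Gbps: 200 credits per port cover roughly 100km
print(credits_needed(4, 100))   # 200.0
```

The same arithmetic shows why a future 16Gbps channel with 40 credits would cover only about 5km, consistent with the trend the article describes.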

A switched architecture lets you overcome the lower number of buffer credits available on a FICON Express8/FICON Express8S channel card.

Channel Consolidation and Utilization

In the distributed, open systems world, what we call point-to-point FICON is known as direct-attached storage. In the late ’90s, the open systems world began implementing Fibre Channel Storage Area Networks (SANs) to overcome the low resource utilization inherent in a direct-attached architecture, using what are known as fan-in and fan-out storage network designs. The same principles apply to a FICON storage network.
