The consolidated data center in Figure 3 has become a unified architecture that is adaptable and efficient. The System z9 supports mixed applications and can be managed from a single screen or several. The director has been carved up into Fibre Channel virtual switches that suit the needs of each application today and can quickly scale to almost any level. The links in Figure 3 are black because they could be attached to any of the virtual fabrics. The combination of more powerful hardware, virtual processors, and virtual switches has produced a simpler architecture than the distributed storage network.
Figure 3 shows details of the virtualization techniques between the System z9 and the director. The mainframe applications have traditional direct links to the Fibre Channel virtual switches that connect the mainframe to the DASD. These links typically run at 4.25 Gbits/second and can sustain high I/O operations per second (IOPS) rates. The open systems applications use NPIV to access the open systems storage. Figure 4 shows how one physical link to Virtual Fabric 1 supports eight open systems applications. Each open systems application has its own zLinux server, Worldwide Name, and N_Port_ID for management purposes. With multiple open systems applications sharing a single link, usage rates have increased to the levels shown in Figure 5. This table shows how fewer ports were used more efficiently in the consolidated storage network than in the distributed storage network. Even after several applications were added to the data center, the number of ports in the fabric decreased from 199 to 127.
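The NPIV mechanism just described can be sketched in a few lines of code. This is a conceptual model only, not a real Fibre Channel stack: it shows how one physical N_Port can request additional N_Port_IDs (via the FDISC link service defined by T11) so that each zLinux guest presents its own Worldwide Port Name to the fabric. The class and identifier values are hypothetical.

```python
# Illustrative sketch (not a real FC stack): NPIV lets one physical
# N_Port present multiple virtual ports, each with its own Worldwide
# Port Name (WWPN) and fabric-assigned N_Port_ID. Names and ID values
# here are hypothetical.

class PhysicalNPort:
    """One physical HBA port that supports NPIV."""
    def __init__(self, base_wwpn: str):
        self.base_wwpn = base_wwpn
        self.virtual_ports = []      # (wwpn, n_port_id) pairs
        self._next_id = 0x010001     # model of the fabric's ID assignment

    def fdisc(self, wwpn: str) -> int:
        """Model an FDISC: request an additional N_Port_ID for a new WWPN."""
        n_port_id = self._next_id
        self._next_id += 1
        self.virtual_ports.append((wwpn, n_port_id))
        return n_port_id

# Eight zLinux guests share one physical link, as in Figure 4.
link = PhysicalNPort(base_wwpn="50:05:07:64:01:00:00:00")
ids = [link.fdisc(f"c0:50:76:00:00:00:00:{i:02x}") for i in range(8)]
print(len(link.virtual_ports))  # 8 virtual ports on one physical link
```

Because each virtual port has a distinct WWPN and N_Port_ID, zoning and LUN masking on the open systems storage can treat each zLinux guest as if it had its own HBA.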
With the link speed increasing to 4GFC, the average utilization rate more than tripled, from 20 percent to 61 percent. With virtual fabrics, the number of ports used in each fabric is flexible, and switch ports can be added to a fabric without buying another switch. Additional Fibre Channel virtual switches can also be carved out of the director, which offers higher reliability than multiple small switches. The benefits of the consolidated approach include efficiency, cost, and manageability.
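The consolidation numbers above can be checked with simple arithmetic. The 199-to-127 port counts and the 20 and 61 percent utilization averages come from Figure 5; the per-link throughput figure is illustrative, using the common approximation that a 4GFC link carries roughly 400 MB/second of payload per direction.

```python
# Back-of-the-envelope check of the consolidation numbers cited above.
# Port counts and utilization averages are from the article's Figure 5;
# the 400 MB/s payload figure for 4GFC is a standard approximation.

ports_before, ports_after = 199, 127
util_before, util_after = 0.20, 0.61

port_reduction = 1 - ports_after / ports_before   # fraction of ports saved
util_gain = util_after / util_before              # utilization multiplier

print(f"ports saved: {port_reduction:.0%}")       # about 36% fewer ports
print(f"utilization gain: {util_gain:.2f}x")      # just over 3x

# Effective payload per 4GFC link at the observed utilization.
effective_mb_s = 400 * util_after
print(f"~{effective_mb_s:.0f} MB/s per link at 61% utilization")
```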
IBM has worked with several companies to develop the standards that support these virtualization techniques for storage networking. These techniques were standardized in T11 so the System z9 could replace multiple open systems servers with Linux on System z servers. NPIV lets these servers use open systems storage, yielding a low-cost solution with better reliability and manageability. The Fibre Channel virtual switches that create virtual fabrics are another technique that increases manageability and makes the physical switches more adaptable.
NPIV and virtual fabrics will play into near-term solutions such as grid computing and computing on demand. To support automated computing power in these environments, the processors, storage, and storage networking must be driven by policy-based applications. The administrator establishes the performance policies, and the soft or virtual layers on top of the hardware automatically manipulate the resources to meet data center demands.
An aspect of virtual fabrics not covered here is virtual fabric tagging. Virtual fabric tagging lets multiple Fibre Channel virtual switches use the same physical link. Mixing fabric traffic virtualizes the link and increases utilization of an Inter-Switch Link (ISL). Virtual fabric tagging is highly effective for optimizing expensive, long-distance links.
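The tagging idea can be sketched conceptually: frames from different virtual fabrics share one ISL, distinguished by a virtual fabric identifier carried in a tagging header (the VFT_Header defined by T11). The sketch below is a simplified model, not the actual frame format; the function names are hypothetical.

```python
# Conceptual sketch of virtual fabric tagging over a shared ISL.
# A VF_ID is attached to each frame on entry and used to steer the
# frame back to its own virtual fabric at the far end. This models the
# idea of the T11 VFT_Header, not its actual bit layout.

def tag(frame: bytes, vf_id: int) -> tuple:
    """Attach a virtual fabric ID to a frame before it enters the ISL."""
    return (vf_id, frame)

def demux(tagged_frames):
    """At the far end of the ISL, steer frames back to their fabrics."""
    fabrics = {}
    for vf_id, frame in tagged_frames:
        fabrics.setdefault(vf_id, []).append(frame)
    return fabrics

# Two virtual fabrics share one long-distance ISL.
isl = [tag(b"mainframe-io", 1), tag(b"open-io", 2), tag(b"open-io", 2)]
by_fabric = demux(isl)
print(len(by_fabric[1]), len(by_fabric[2]))  # 1 2
```

Because the two fabrics' frames interleave on the same physical link, the expensive long-distance ISL carries traffic whenever either fabric has work to do, which is the utilization benefit described above.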
A follow-up article will explore virtual fabric tagging and inter-fabric routing over large geographical areas. Storage networks offer much more than physical pipes. The intelligence being incorporated into virtualization is making the storage network an integral aspect of on-demand computing infrastructure.