While LPARs consume one or more Fibre Channel ports, multiple Linux on System z servers can share a single Fibre Channel port. The sharing is possible because most open systems applications require little I/O. While the actual I/O data rate varies considerably for open systems servers, a rule of thumb is that each consumes about 10 MB per second. Fibre Channel has reduced I/O latency by scaling link speeds from 1 Gigabit-per-second Fibre Channel (1GFC) to 2GFC and 4GFC, with 8GFC expected in 2008. Each gigabit per second of line rate supplies about 100 MB/s of throughput, so a 4GFC link delivers roughly 400 MB/s and should be able to support about 40 Linux on System z servers from a bandwidth perspective.
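The arithmetic behind that estimate is simple enough to sketch. The short Python snippet below merely restates the rule-of-thumb numbers used above (about 100 MB/s of throughput per gigabit of line rate and about 10 MB/s per lightly loaded server); it is an illustration, not a sizing tool.

```python
def servers_per_link(gfc_speed: int, per_server_mb_s: float = 10.0) -> int:
    """Estimate how many lightly loaded servers one Fibre Channel link can carry."""
    link_throughput_mb_s = gfc_speed * 100   # rule of thumb: 1GFC ~ 100 MB/s
    return int(link_throughput_mb_s / per_server_mb_s)

for speed in (1, 2, 4, 8):
    print(f"{speed}GFC: ~{servers_per_link(speed)} servers at ~10 MB/s each")
# 4GFC prints ~40, matching the estimate in the text.
```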

To aggregate many Linux on System z servers on a single Fibre Channel port, the Fibre Channel industry has standardized N_Port ID Virtualization (NPIV), which lets each Linux on System z server have its own 3-byte Fibre Channel address, or N_Port_ID. After an N_Port (a server or storage port) has acquired an N_Port_ID by logging into the switch, NPIV lets that port request additional N_Port_IDs, one for each Linux on System z server running under z/VM Version 5.1 or later. With a simple request, the switch grants a new N_Port_ID and associates it with the Linux on System z image. Each new N_Port_ID carries its own Worldwide_Name, which uniquely identifies the Linux on System z image and lets the mainframe Linux servers be zoned to particular storage ports.
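Conceptually, the physical FCP channel logs into the fabric once and then issues one additional login request per guest (in the T11 standard this is an FDISC request following the initial FLOGI). The toy model below, with invented names and WWPNs, only mimics that bookkeeping: one N_Port_ID and one Worldwide_Name per Linux image, with zoning keyed to the per-image name. It is not the real FC-LS message exchange.

```python
import itertools

class ToyFabricSwitch:
    """Toy model of NPIV address assignment; not the real FC-LS exchange."""

    def __init__(self):
        self._next_id = itertools.count(0x010001)   # 3-byte N_Port_IDs
        self.zones = {}                             # WWPN -> set of storage ports

    def flogi(self, wwpn):
        """Fabric login by the physical N_Port; returns its base N_Port_ID."""
        return next(self._next_id)

    def fdisc(self, wwpn):
        """NPIV request for an additional N_Port_ID on the same physical port."""
        return next(self._next_id)

    def zone(self, wwpn, storage_port):
        """Zoning works on WWPNs, so each Linux image sees only its own storage."""
        self.zones.setdefault(wwpn, set()).add(storage_port)


switch = ToyFabricSwitch()
switch.flogi("50:05:07:64:01:00:00:01")                # the FCP channel logs in once
for i, guest in enumerate(("linux01", "linux02", "linux03"), start=1):
    wwpn = f"c0:50:76:ff:fb:00:00:{i:02x}"             # hypothetical per-guest WWPN
    n_port_id = switch.fdisc(wwpn)                     # one extra N_Port_ID per guest
    switch.zone(wwpn, "storage_port_A")
    print(guest, wwpn, hex(n_port_id))
```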

Virtual Fabrics

Considering that a System z9 can have up to 336 4GFC ports, thousands of N_Port_IDs can quickly be assigned to a mainframe. Managing that many ports in a single fabric can become cumbersome and cause interference. To isolate applications running behind different N_Ports attached to the same switch or director, the T11 Fibre Channel Interfaces technical committee (www.t11.org) has standardized virtual fabrics. Much as the System z9 hosts multiple virtual servers, a physical switch chassis may support up to 4,095 Fibre Channel virtual switches.
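One way to picture this is as partitioning the chassis's ports among independent virtual switches, each identified by a virtual fabric ID (a 12-bit field, which is where the 4,095 figure comes from). The sketch below is a simplified, port-based model with invented names and port counts; the standard also defines frame tagging so a single inter-switch link can carry traffic for several virtual fabrics.

```python
# Minimal sketch of carving one physical chassis into virtual switches.
# Names and port assignments are invented; only the 4,095 limit comes from the text.

MAX_VIRTUAL_FABRICS = 4095

class Chassis:
    def __init__(self, name, port_count):
        self.name = name
        self.unassigned = set(range(port_count))   # physical ports not yet in a fabric
        self.virtual_switches = {}                 # vf_id -> set of ports

    def create_virtual_switch(self, vf_id, ports):
        if not 1 <= vf_id <= MAX_VIRTUAL_FABRICS:
            raise ValueError("virtual fabric ID out of range")
        ports = set(ports)
        if not ports <= self.unassigned:
            raise ValueError("port already belongs to another virtual fabric")
        self.unassigned -= ports                   # isolation: a port serves one fabric
        self.virtual_switches[vf_id] = ports

director = Chassis("director_1", port_count=256)
director.create_virtual_switch(vf_id=10, ports=range(0, 48))    # e.g. production fabric
director.create_virtual_switch(vf_id=20, ports=range(48, 72))   # e.g. backup fabric
print({vf: len(p) for vf, p in director.virtual_switches.items()})
```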

Fibre Channel virtual switches relieve another difficulty shown in Figure 1: managing several distributed fabrics. The storage network has grown organically with different applications and the physical limitations of the switches. The table in Figure 2 shows the port counts for each fabric at each site. Fabrics 1 and 2 each have 64-port directors with 28 to 48 of the ports in use. The backup fabric, though, has only 24-port switches, and only a single port is available on each of them. While Fabrics 1 and 2 have a considerable number of unused ports, the switches have no way to offer those ports to the backup fabric. The usage rates of the fabrics are also relatively low, and the mixture of products, firmware releases, and management applications makes the distributed fabric rather complex.

A better solution that meets all the needs of the data center is available. Figure 3 depicts a consolidated storage network in which the data center's physical configuration is collapsed onto two 256-port directors, each of which can be divided into multiple virtual storage networks. These virtual networks can be large or small and offer independent, comprehensible management.

When large corporations require more than a thousand servers in their data centers, controlling the storage network can become unwieldy and accountability for it hard to assign. Breaking the fabric into small virtual fabrics makes the storage network more manageable. Instead of coordinating a team of administrators to manage one large storage network, individual administrators can each manage a comprehensible piece of the solution once it is broken down. The virtual nature of the new data center creates a management hierarchy for the storage network and enables administration to proceed in parallel.

A further comparison of the distributed and consolidated storage networks illuminates the benefits of the new approach. In Figure 1, the distributed storage network has become a management nightmare: 58 open systems servers consume 58 Fibre Channel ports along with the corresponding cabling and rack space. Each open systems server uses only a fraction of its link's bandwidth, and that fraction shrinks as link speeds increase to 4GFC. The reliability of the assortment of servers accumulated over the years is significantly lower than the tested reliability of the System z9. The administrative cost of adding and repairing servers that change every quarter leads to complexity and inefficiency. The organic growth of the open systems servers has created a population problem.
