While great power brings great responsibilities, great computing power brings virtualization. Virtualization brings finer control to powerful systems, which can be logically divided into multiple processes that run simultaneously. With the System z9 running z/VM in up to 60 Logical Partitions (LPARs), each supporting many Linux on System z servers, a storage network is needed that can effectively handle this power. The System z9 uses new Fibre Channel techniques, N_Port_ID Virtualization (NPIV) and Fibre Channel virtual switches in virtual fabrics, to support Linux on System z as conditions change. This article explores these standardized storage networking virtualization techniques in a practical application.

Storage networks are expanding into virtual realms that have been widely accepted in mainframe environments for decades. Mainframes have housed many virtual processes to increase performance, utilization, and manageability. The same transformation is occurring in storage networks at two levels. NPIV lets individual Linux on System z servers access open systems storage that typically costs much less than the DASD associated with mainframes. Virtual fabrics let the storage network create multiple Fibre Channel virtual switches using the same physical hardware. A good way to get to know these virtualization techniques is through a typical implementation whose problems they relieve.
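To make the NPIV idea more concrete, the following minimal Python sketch models a single physical FCP channel that hands out separate virtual port identities to multiple Linux guests. It is purely illustrative; the class names (PhysicalNPort, VirtualNPort), the fdisc_login method, and the WWPN and N_Port_ID values are assumptions for the sketch, not any real Fibre Channel stack or IBM interface.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualNPort:
    wwpn: str        # world-wide port name unique to one Linux guest
    n_port_id: str   # fabric-assigned address for this virtual port

@dataclass
class PhysicalNPort:
    """One physical FCP channel shared by many Linux on System z guests."""
    wwpn: str
    next_id: int = 1
    virtual_ports: List[VirtualNPort] = field(default_factory=list)

    def fdisc_login(self, guest_wwpn: str) -> VirtualNPort:
        # With NPIV, each additional fabric login on the same physical link
        # receives its own N_Port_ID, so storage can be zoned and LUN-masked
        # per guest rather than per shared channel.
        vport = VirtualNPort(wwpn=guest_wwpn,
                             n_port_id=f"0x0101{self.next_id:02x}")
        self.next_id += 1
        self.virtual_ports.append(vport)
        return vport

# Example: three Linux guests sharing one FCP channel, each with its own identity.
channel = PhysicalNPort(wwpn="c05076ffe5000000")
for guest in ("c05076ffe5000004", "c05076ffe5000008", "c05076ffe500000c"):
    print(channel.fdisc_login(guest))

Each guest in the sketch ends up with its own port identity on the shared physical link, which is what lets the fabric treat the guests as if each had its own adapter.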

The Distributed Storage Network

Figure 1 shows a typical data center that's spread across two sites. Each site has a mix of open systems applications running on open systems servers and mainframe applications running on Linux on System z servers. The mainframe applications are mirrored across both sites, but only a small percentage of the open systems applications are located at both sites. This mixed environment has two fabrics built on two 64-port directors that connect to open systems storage and DASD, and a third fabric that connects backup applications to tape through two 24-port switches. A redundant set of fabrics isn't shown in the drawing for simplicity. These fabrics have grown organically, and the multiple types of open systems servers have become difficult to manage and continue to grow faster than the mainframe applications.

Many open systems servers have sporadically popped up in the data center to meet assorted business application requirements. Every time a department finds a new application, servers sprout like weeds in spring and require new cabling, switching, storage, and management. The open systems servers also bring administrative baggage such as procurement, maintenance, and inventory. A few servers a week, month, or quarter turn into racks of servers over the years. The System z9 has targeted these racks of servers with Linux on System z virtual servers. Rather than having customers acquire new hardware for new or overloaded applications, IBM would like to solve the problem of multiplying open systems servers with virtual Linux on System z servers.

Linux on System z servers can replace open systems servers and be up and running in a matter of minutes or hours instead of the days or weeks it takes to acquire and install physical open systems servers. The Linux on System z servers offer cost savings by using low-cost open systems storage instead of expensive DASD, though many deployments may still use their existing DASD with NPIV.

The FICON adapter is built for Fibre Channel, which supports multiple upper-level protocols such as FCP and FICON, so the same adapter can serve FCP devices with a different code load. A new adapter would probably be needed for the new application, but an HBA shouldn't cost more than a FICON adapter.

World-class computing resources can now use low-cost open systems storage with Linux on System z and NPIV. The Linux on System z servers scale predictably and quickly and offer many benefits such as consistent, homogeneous resource management.

N_Port_ID Virtualization
