What’s old yet new, black and not blue, and supports Linux, FICON, and open storage interfaces, too? That would be an IBM zSeries mainframe. I recently saw an interesting cartoon that showed Unix being squeezed out by Windows and Linux environments; also pictured were the banners of other ousted systems and technologies, including mainframes. While Unix was long hailed as the operating system that would put the final nail in the mainframe’s coffin, the zSeries processor family is clearly alive and well, as are z/OS and Linux on it. This article examines how various open interfaces and technologies coexist in, and are part of, today’s very much living zSeries environment.
Has open storage finally arrived for the zSeries? That depends on your definition of open storage and storage interfaces. What is clear is a continued shift toward common off-the-shelf (COTS) technology with open standards. This shift is being made in conjunction with maintaining legacy support for zSeries environments and applications. This is the same theme and premise behind virtualization, which is a popular storage topic today. Some recent examples of leveraging open technology for zSeries include support for the open Linux operating system on zSeries systems.
IBM has enhanced Linux on zSeries by adding SCSI Fibre Channel Protocol (FCP) support, the same I/O protocol used by Unix, Windows, and Linux systems. FICON cascade leverages open technologies, including fabric binding and E_Port inter-switch links (ISLs) based upon the ANSI FC-SW-2 Fibre Channel standard. For increased distance, Fibre Channel over TCP/IP, a protocol commonly called FCIP, can be used to support long-distance storage networks. FCIP, a pending IETF standard, extends Fibre Channel and its upper level protocols (ULPs), including FICON and FCP traffic, over long distances using IP. By creating a network tunnel that appears as a virtual, long-distance Fibre Channel link over IP or other wide area network (WAN) interfaces, storage networks can be extended thousands of kilometers to support business continuance.
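The tunneling idea behind FCIP can be sketched in a few lines of Python. This is purely illustrative: the real FCIP encapsulation defined by the IETF carries Fibre Channel frames with a proper encapsulation header, version fields, and CRC protection, none of which are modeled here. The hypothetical 4-byte length prefix below simply stands in for that header to show how FC frames ride as opaque payloads inside a TCP byte stream between two tunnel endpoints, so the fabric on each side sees what looks like one long virtual ISL.

```python
import struct

def encapsulate(fc_frame: bytes) -> bytes:
    """Prefix a Fibre Channel frame with its length for the TCP stream.

    Illustrative stand-in for the real FCIP encapsulation header;
    the FC frame itself is carried unmodified, which is why FICON
    and FCP traffic can both pass through the same tunnel.
    """
    return struct.pack(">I", len(fc_frame)) + fc_frame

def decapsulate(stream: bytes):
    """Peel one FC frame off the front of the tunnel byte stream.

    Returns (frame, remaining_stream).
    """
    (length,) = struct.unpack(">I", stream[:4])
    return stream[4:4 + length], stream[4 + length:]
```

Because the frames are opaque to the IP network in between, the tunnel endpoints, not the WAN, are responsible for preserving Fibre Channel's in-order delivery expectations.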
Today, a single zSeries system can run a variety of operating systems on multiple logical partitions (LPARs), use virtual networks to access storage, and present virtual storage to both open systems and mainframe environments. IBM’s latest iteration of the zSeries mainframe, the z990 (T-Rex), is yet another system in a long list of IBM S/390 mainframes. The z990 signals a convergence of the virtual server with virtual storage networks and virtual storage (see my article in the June/July 2003 issue of z/Journal, “zVirtual Storage and Storage Networking”) to implement a virtual data center. T-Rex enables the consolidation of open server and mainframe workloads by functioning as a z/OS mainframe, as an open systems (Linux) super server, or as both.
A virtual data center may bring back memories of the information utility and related concepts from the late ’80s and early ’90s, which are being re-spun today. The premise of virtualization is to simplify, mask complexity, enable technology transitions without disruption to other applications, and help contain costs. What does T-Rex have to do with a virtual data center, since the mainframe is supposed to be dead? The mainframe, along with other technologies including tape, printers, the AS/400/iSeries, JCL, and COBOL, is considered by some to be a dead technology. Although the mainframe is lacking in hype and market appeal, it has reached the plateau of productivity in many environments.
STORAGE ACCESS METHODS
To some people, the term “SCSI interface” conjures up thoughts of SCSI storage for attaching disk drives, CDs, printers, and other devices to a PC workstation. That is hardly the image of a high-performance storage interface for a zSeries-class system. Granted, SCSI originated, matured, and gained popularity on PCs and workstations. Today, however, the SCSI command set protocol has been separated from the physical interface to support ultra-high bandwidth and variable distances, depending on the physical interface. For example, the SCSI Parallel Interface can operate over short distances at up to 320MB/sec, and SCSI Fibre Channel Protocol (SCSI_FCP), more commonly known as FCP, can currently operate at up to 400MB/sec full duplex, with 10Gb/sec (20Gb/sec full duplex) on the horizon.
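The separation of command set from transport is easy to see at the byte level. The sketch below builds a standard SCSI READ(10) command descriptor block (CDB) as defined in the SCSI block command set: a 10-byte structure with the opcode in byte 0, a 32-bit logical block address, and a 16-bit transfer length. The point is that these same 10 bytes travel unchanged whether the transport underneath is parallel SCSI, FCP, iSCSI, or (eventually) SAS; only the wrapping changes. The function name is my own, not from any particular library.

```python
import struct

READ_10 = 0x28  # SCSI READ(10) opcode from the block command set

def build_read10_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a 10-byte READ(10) command descriptor block.

    Layout: opcode (1 byte), flags (1), logical block address
    (4, big-endian), group number (1), transfer length (2,
    big-endian), control (1). Flags/group/control are left zero
    here for simplicity.
    """
    return struct.pack(">BBIBHB", READ_10, 0, lba, 0, num_blocks, 0)
```

A transport such as FCP or iSCSI simply carries this CDB as an opaque payload to the target, which is why the command set could outlive the parallel bus it grew up on.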
I am sure you have heard of storage area network (SAN), network-attached storage (NAS), and direct-attached storage (DAS—a new term for DASD). Here are some other new terms: Fabric-attached storage (FAS) is similar to a SAN in that it refers to storage that is attached to a network (fabric). Content addressable storage (CAS) describes object-based storage access. An example of this is the EMC Centera, which can be used for storing reference and other static data to meet regulatory compliance and support data retention. Object-based access relies on application-specific I/O methods using location-independent storage approaches in contrast to traditional access, which is location-dependent. Serial-Attached SCSI (SAS) is a new form of SCSI that simplifies the bulky cabling traditionally associated with parallel SCSI. SAS is scheduled to become available in 2004, is targeted as a replacement for those applications that have traditionally used parallel SCSI, and will support speeds into the 100s of MB/sec at relatively short distances.
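To make the contrast between object-based and location-based access concrete, here is a minimal content-addressable store sketched in Python. The "address" of an object is derived from its content (a SHA-1 digest is used here as one common choice; actual CAS products have their own addressing schemes), not from a device or block location. This is an illustration of the concept only, not the interface of any particular product such as Centera.

```python
import hashlib

class ContentStore:
    """Toy content-addressable store: address = hash of content."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        # The content itself determines the address, so identical
        # objects (e.g., duplicate reference records) collapse to
        # one stored copy -- and the data can live anywhere, since
        # the address carries no location information.
        address = hashlib.sha1(data).hexdigest()
        self._objects[address] = data
        return address

    def get(self, address: str) -> bytes:
        return self._objects[address]
```

A useful side effect for retention and compliance workloads: because the address is a fingerprint of the content, any tampering with a stored object would change its address, making silent modification detectable.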
Figure 1 shows the SCSI command set, including traditional parallel SCSI (pSCSI) (on the lower right) and the relationship of the command set with other interfaces. The SCSI command set runs on FCP, InfiniBand (SRP), IBM SSA, and IP/Internet SCSI (iSCSI), which earlier this year became an IETF Standard.