If you haven’t yet heard of the emerging Fibre Channel over Ethernet (FCoE) technology, or of one of the vendors’ marketing terms for it, including Data Center Ethernet (DCE), I/O Virtualization (IOV), Converged Network Architecture (CNA), or Data Center Fabrics leveraging a hybrid premium Ethernet, rest assured you will soon: the technology is now in demonstration mode.

OK, so what’s the hook and why should you be interested in FCoE if you deal with mainframes? The answer is simple: Fibre Channel, the underlying technology on which upper-level protocols run, including FC-SB2 (more commonly known as FICON) and the SCSI Fibre Channel Protocol (aka FCP), or what many people generically refer to as Fibre Channel. While I have yet to hear a formal public announcement from IBM that it will support FICON via FCoE, all you have to do to see where the future is headed is look at how FICON, coexisting concurrently with FCP open systems SCSI traffic on Fibre Channel in Protocol Intermix Mode (PIM), evolved from proprietary ESCON.

The key enabler for FCoE is a new type of lossless Ethernet incorporating lower latency, Quality of Service (QoS), priority groups, and other enhancements to support deterministic, channel-like behavior within the data center without having to map Fibre Channel onto TCP/IP and its associated overhead. Fibre Channel over IP (FCIP), which maps Fibre Channel onto IP, is commonly used for implementing remote mirroring and replication or remote tape copies over long distances between storage systems. FCoE, by contrast, maps Fibre Channel frames directly onto a new and improved hybrid Ethernet (see www.fcoe.com and www.fibrechannel.org). The new premium Ethernet is targeted for data centers and thus is distance limited; however, it incorporates many improvements over traditional Ethernet, including QoS for the low-latency, deterministic performance behavior traditionally associated with channel-based protocols and interfaces.
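The contrast with FCIP comes down to what wraps the Fibre Channel frame. The following sketch illustrates the FCoE side of that contrast: an FC frame riding directly in an Ethernet frame under the FCoE EtherType (0x8906), with no IP or TCP headers in between. The SOF/EOF delimiter bytes and the frame contents are simplified placeholders, not a faithful rendering of the full FC-BB-5 encapsulation header.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE traffic

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame directly in an Ethernet frame.

    Illustrative only: the real FCoE encapsulation (per FC-BB-5) also
    carries version bits and reserved fields, simplified here to a
    placeholder SOF/EOF delimiter pair around the FC frame.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    sof, eof = b"\x2e", b"\x41"  # placeholder start/end-of-frame codes
    return eth_header + sof + fc_frame + eof

frame = encapsulate_fc_frame(
    b"\x0e\xfc\x00\x00\x00\x01",  # hypothetical destination MAC
    b"\x02\x00\x00\x00\x00\x02",  # hypothetical source MAC
    b"\x00" * 36,                 # stand-in for an FC frame
)
# Note what is absent: no IP header, no TCP header. Unlike FCIP, the FC
# frame rides directly on Ethernet, which is why lossless behavior must
# come from the link layer itself rather than from TCP retransmission.
```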

The business and technology value proposition or benefits of converged or virtualized I/O connectivity for enterprise environments in the future are similar to those for server and storage virtualization, and include the ability to:

• Reduce power, cooling, and floor space requirements, and provide other green-friendly benefits
• Boost clustered and virtualized server performance while maximizing PCI or mezzanine I/O slots
• Rapidly re-deploy to meet changing workload and I/O profiles of virtual servers
• Scale I/O capacity to meet high-performance and clustered server or storage applications
• Leverage common cabling infrastructure and physical networking facilities

While FCoE isn’t yet ready for mission-critical, primetime deployment despite early vendor buzz and hype, it’s certainly time to be learning more about it for planning purposes, including discussions with vendors as to when their servers, operating systems, adapters, switches, and storage systems will support the technology. FCoE is thus at a point similar to where FICON was about 10 years ago, when that technology started to gel, leading to increased adoption before and predominantly during 2001 and 2002.

Moving forward, premium or low-latency Ethernet (aka DCE) will complement traditional or volume Ethernet-based solutions, leveraging various degrees of commonality. For storage-related applications, FCoE addresses and removes traditional issues and perceptions about Ethernet-based TCP/IP overhead, latency, and non-deterministic behavior, preserving the experience and knowledge associated with Fibre Channel and FICON tools and technologies.
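One way premium Ethernet removes the non-deterministic behavior mentioned above is priority-based flow control: storage traffic is assigned a lossless ("no-drop") priority class, so a congested switch pauses the sender on that priority instead of discarding frames, while ordinary LAN traffic on other priorities can still be dropped. The toy model below sketches that idea; the priority value, queue depth, and class behavior are illustrative assumptions, not a vendor implementation.

```python
from collections import deque

NO_DROP_PRIORITY = 3  # a common convention for storage traffic, but configurable
QUEUE_LIMIT = 4       # tiny buffer to force congestion in this demo

class LosslessPort:
    """Toy model of a switch port with one lossless priority class."""

    def __init__(self):
        self.queues = {p: deque() for p in range(8)}  # one queue per priority
        self.paused = set()   # priorities currently paused upstream
        self.dropped = 0      # frames discarded from lossy classes

    def enqueue(self, priority: int, frame: str) -> None:
        q = self.queues[priority]
        if len(q) < QUEUE_LIMIT:
            q.append(frame)
        elif priority == NO_DROP_PRIORITY:
            self.paused.add(priority)  # pause the sender instead of dropping
        else:
            self.dropped += 1          # lossy class: frame is discarded

port = LosslessPort()
for i in range(6):
    port.enqueue(NO_DROP_PRIORITY, f"fcoe-{i}")  # storage (FCoE) traffic
    port.enqueue(0, f"lan-{i}")                  # best-effort LAN traffic
# Storage frames are never discarded; once its queue fills, that priority
# is paused, while the best-effort class simply loses frames under load.
```

This is the crux of why Fibre Channel can tolerate riding on this premium Ethernet but not on traditional Ethernet: the channel protocol assumes frames are not silently lost.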

If you’re working in an enterprise environment that’s using Fibre Channel and/or FICON, FCoE should be on your radar as part of your long-term, strategic plan for server and storage I/O connectivity. If you recall the transition from mainframe block-mux to ESCON to FICON, or, on the open systems side, from parallel SCSI to quarter-speed proprietary Fibre Channel to current PIM with Fibre Channel FCP and FICON coexisting on a common Fibre Channel infrastructure, FCoE should cause some déjà vu!

As with other virtualization techniques and technologies, align the applicable solution to meet your particular needs and address specific pain points while being careful not to introduce additional complexity. You can learn more about storage networks, interfaces, and protocols in Chapters 4, 5, and 6 of the book Resilient Storage Networks (Elsevier, ISBN 1555583113) and at www.storageio.com (including additional information about I/O virtualization and Fibre Channel over Ethernet). To learn more about green and associated power, cooling, floor space, and environmental topics, visit www.greendatastorage.com.