
zStorage: Industry Update and Trends


Disk-based data protection, including backup, mirroring or replication, Point-in-Time (PIT) copy and snapshots along with long-term retention (archive), remains popular. High-capacity disk drives, including ATA and Serial ATA (SATA), continue to be deployed as secondary storage. High-capacity Fibre Channel disk drives are also finding their way into enterprise-class storage systems as an alternative to SATA disk drives for storage-centric and tiered storage applications. SATA disk drives are available in capacities up to 500GB, while high-capacity Fibre Channel disk drives top out at 400GB.

While not new in 2005, Continuous Data Protection (CDP) received more coverage and debate due to the arrival of products from established vendors such as Microsoft, IBM and EMC, among others. One way to look at CDP is to consider your Recovery Time Objective (RTO), or how long you can afford your data to be unavailable, along with your Recovery Point Objective (RPO), or how much data you can afford to lose. The traditional technique has been to perform scheduled full and regular incremental or differential backups, perhaps combined with some journaling and replication of data locally or remotely. Your RTO and RPO may require that no data be lost and little to no downtime be incurred; another variation is that you can afford some downtime with some data loss. For some people, CDP means all data is constantly protected and can be recovered to a particular state (RPO) in a short time, if not instantaneously (RTO), with fine granularity.

Near-CDP refers to coarser RPO and RTO granularity than pure CDP provides, though still finer than traditional backup. An example of near-CDP would be Microsoft Data Protection Manager (DPM), which has a default granularity of an hour. Compared to pure CDP, which enables an RPO and RTO of zero (no data loss or disruption), near-CDP, as in the Microsoft example, has a default RPO of one hour or less.
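To make the RPO comparison concrete, here is a minimal sketch in Python. The timestamps, the nightly schedule and the failure time are invented for illustration; only the one-hour near-CDP interval reflects the DPM default mentioned above.

    from datetime import datetime, timedelta

    def worst_case_rpo(failure_time, recovery_points):
        """Worst-case data loss: the gap between the failure and the
        newest recovery point that precedes it."""
        usable = [p for p in recovery_points if p <= failure_time]
        if not usable:
            raise ValueError("no recovery point precedes the failure")
        return failure_time - max(usable)

    # Hypothetical schedules: one nightly backup vs. hourly near-CDP
    # recovery points over a single day.
    midnight = datetime(2005, 11, 1)
    nightly = [midnight]
    hourly = [midnight + timedelta(hours=h) for h in range(24)]

    failure = datetime(2005, 11, 1, 16, 45)
    print(worst_case_rpo(failure, nightly))  # 16:45:00 of changes at risk
    print(worst_case_rpo(failure, hourly))   # 0:45:00 of changes at risk
    # Pure CDP journals every write, so its newest recovery point can be
    # the failure instant itself: an RPO approaching zero.

The same exercise applies to RTO: the finer the granularity and the faster a recovery point can be brought online, the closer recovery time approaches zero.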

While pure CDP continues to mature and gain traction, the real growth area for CDP will be near-CDP for Small and Medium-Size Businesses (SMBs) and Small Office/Home Office (SOHO) environments using cost-effective solutions such as those based on Microsoft DPM for Windows. Some solutions, such as Microsoft DPM, are block- or volume-based, while others, such as IBM Tivoli Continuous Data Protection for Files, are file-based. There are advantages to both; depending on your needs, you may need a combination of technologies.

Additional storage and networking trends and improvements include:

  • 4Gb Fibre Channel continues to evolve with availability of switches, host adapters, and storage systems. Not all environments benefit from the increased bandwidth compared to 1Gb or 2Gb, but many can benefit from the lower latency and consolidation capabilities enabled by 4Gb Fibre Channel and, eventually, 4Gb FICON.
  • 10Gb Ethernet continues its evolution as a backbone network for high-bandwidth and consolidation applications. Development of copper-based 10Gb technology and lower-cost 10Gb chipsets continues, though widespread adoption, especially at the desktop, is still distant.
  • Serial-Attached SCSI (SAS), not to be confused with Statistical Analysis System (SAS), is a relatively new storage interface intended to replace parallel SCSI (also known as UltraSCSI). Initial SAS deployments in 2005 were as embedded storage on servers from vendors including HP, IBM and Sun, and in entry-level storage arrays. It may be a few years before SAS-based disk drives appear in high-end, enterprise-class storage systems, but entry-level and midrange disk arrays are prime candidates for SAS disk drives, as are disk-based backup and virtual tape libraries. One advantage of SAS is that SATA disk drives can coexist on the same backplane interconnect; SAS also offers general connectivity improvements over parallel SCSI.
  • iSCSI, like InfiniBand, went through a massive hype cycle, then a relatively quiet period, and is now increasingly being adopted. iSCSI is being deployed in primary and secondary storage environments, particularly cost-sensitive ones where good performance is good enough. While debate about iSCSI vs. Fibre Channel continues, the real debate should be iSCSI vs. NAS and which, if not both, applies to your environment.

If you’re running or considering multiple Linux images on zSeries processors, you’ll want to be aware of N_Port ID Virtualization (NPIV). NPIV virtualizes physical Fibre Channel adapter ports, presenting a virtual N_Port to each image sharing an adapter. Without NPIV, each Linux image would need its own physical adapter to have a unique N_Port ID and World Wide Port Name (WWPN). That’s important because NPIV enables each virtual N_Port to have its own unique WWPN that storage-based Logical Unit Number (LUN) and volume masking and mapping features can use while the images share a physical adapter.

Figure 1 shows a zSeries mainframe with four Logical Partitions (LPARs), one supporting z/OS and three supporting Linux. The z/OS image has two physical channel adapters configured for FICON for redundancy, while Linux Images A and B each have a single adapter (no redundancy) and Linux Image C has two adapters for Fibre Channel FCP. The dashed lines indicate the primary data paths, with the solid lines (from the switch to the mainframe and storage devices) indicating the redundant paths. A disadvantage of this configuration is that channel adapters must be dedicated to the Linux LPARs unless a shared adapter is configured. The downside of sharing an adapter across the Linux LPARs without NPIV is the inability to guarantee unique access from a specific Linux image to a specific LUN mapped to the shared physical port.

[Figure 1: Channel adapter configuration without NPIV]

The solution is to use NPIV, as seen in Figure 2, where a shared adapter presents a unique virtual N_Port for each image, enabling a LUN to be mapped to a specific Linux image for security and data integrity.
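To illustrate why per-image WWPNs matter, here is a minimal Python sketch of a storage array’s LUN masking table keyed on WWPN. The WWPNs, image names and LUN numbers are invented for illustration; this models the concept, not any particular array’s interface.

    # Hypothetical WWPNs and LUN numbers, for illustration only.
    PHYSICAL_WWPN = "50:05:07:64:01:00:00:01"   # the shared adapter's physical port
    VIRTUAL_WWPNS = {                           # one virtual N_Port per image (NPIV)
        "linux_a": "c0:50:76:00:00:00:00:01",
        "linux_b": "c0:50:76:00:00:00:00:02",
        "linux_c": "c0:50:76:00:00:00:00:03",
    }

    # The array's masking table: which LUNs each WWPN is allowed to see.
    lun_masking = {
        "c0:50:76:00:00:00:00:01": {0},      # Linux Image A sees only LUN 0
        "c0:50:76:00:00:00:00:02": {1},      # Linux Image B sees only LUN 1
        "c0:50:76:00:00:00:00:03": {2, 3},   # Linux Image C sees LUNs 2 and 3
    }

    def visible_luns(wwpn):
        return lun_masking.get(wwpn, set())

    # Without NPIV, every image logs in with the same physical WWPN, so the
    # array cannot tell the images apart; masking is all-or-nothing per port.
    print(visible_luns(PHYSICAL_WWPN))       # set() -- no per-image entry exists

    # With NPIV, each image's virtual WWPN gets its own masking entry.
    for image, wwpn in VIRTUAL_WWPNS.items():
        print(image, "->", sorted(visible_luns(wwpn)))

The point is simply that masking granularity follows WWPN granularity: one WWPN per image means one masking entry per image.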

InfiniBand, which had been written off as dead technology, started regaining attention in 2005. InfiniBand has found its niche as a high-performance, low-latency, server-to-server interconnect for server and compute clusters, also known as compute grids. InfiniBand is also being used for high-bandwidth access to storage from vendors such as Engenio. As an interconnect, InfiniBand supports multiple upper-level protocols and application interfaces, including iSER, Remote Direct Memory Access (RDMA), TCP/IP, xDAPL, and SCSI RDMA Protocol (SRP), among others.

So does this signal it’s time to abandon Fibre Channel as a storage interface in favor of InfiniBand? For most environments, probably not. However, for environments and applications that need or want to leverage InfiniBand, it provides an interesting alternative to Fibre Channel and Ethernet-based iSCSI. It’s still unclear what, if any, advantage InfiniBand may have for a pure IBM mainframe environment, but for those with mixed server environments, InfiniBand is a technology to watch.

The storage market can still be characterized as a buyer’s market if you’re a prudent buyer. You can expect another active year in the storage industry, with continued development and adoption of previously announced products and technologies along with new ones. Understanding these different technologies and techniques, and where they might fit in your environment, enables you to make more effective decisions. Strive to develop a storage strategy that supports the goals and objectives of your overall IT strategy and complements your server and networking strategies.
