Storage

Consider the evolution of open systems toward virtualized storage, which lagged that of the mainframe by well over a decade. For open systems, storage virtualization capabilities started in the operating system, where most of the control points were located for UNIX, Windows, and later Linux. Storage Area Networks (SANs) and Network-Attached Storage (NAS) have been a major initiative for more than a decade and ultimately moved much open systems storage virtualization off the server.

By 2002, the virtual tape market for open systems was gaining momentum as a backup solution. By 2005, virtual tape combined with de-duplication had become a popular storage initiative for many open systems storage environments. Open systems businesses have countless, non-integrated storage management products to choose from, and storage software vendors are working to reduce that complexity and narrow the number of choices.

Organizations that have completely avoided mixed-platform storage virtualization will increasingly pay a price in reduced efficiency, greater complexity, and less IT flexibility. While staying the course may be lower risk and less disruptive in the near term, CIOs should carefully weigh that risk against cost, complexity, and staff availability, and should consider implementing cross-platform storage virtualization solutions as they mature. Cross-platform data storage management solutions are available that:

• Analyze reports

• Project trends (a minimal trend-projection sketch follows this list)

• Schedule or automate storage resources across open systems and mainframe environments.
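
As a rough illustration of the trend-projection capability, the sketch below fits a straight line to historical capacity samples for each storage pool and estimates when the pool would fill. The pool names, usage history, and capacities are invented for this sketch; real cross-platform tools use far richer models and data sources.

```python
# Hypothetical illustration of capacity trend projection across storage pools.
# Pool names, usage history, and capacities below are invented for this sketch.

def project_full_month(used_tb_by_month, capacity_tb):
    """Fit a simple least-squares line to (month, used TB) samples and
    estimate the month index at which usage would reach pool capacity."""
    n = len(used_tb_by_month)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_tb_by_month) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_tb_by_month)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None                      # flat or shrinking usage: no projected exhaustion
    intercept = mean_y - slope * mean_x
    return (capacity_tb - intercept) / slope

pools = {
    "mainframe_dfsms_pool":  ([40, 42, 45, 47, 50], 80),      # (used TB per month, capacity TB)
    "open_systems_san_pool": ([120, 130, 145, 150, 165], 300),
}

for name, (history, capacity) in pools.items():
    month = project_full_month(history, capacity)
    if month is None:
        print(f"{name}: no growth trend detected")
    else:
        print(f"{name}: projected to fill around month {month:.1f}")
```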

Virtualization Improves Storage Utilization

Mainframes have consistently delivered much higher levels of both storage and server utilization than open systems. Analyses of open systems disk capacity utilization show that, at best, about 45 percent of available disk space is allocated, and many systems run lower. That means roughly 55 percent of installed open systems disk space sits unallocated and unused. Such significant underutilization is obviously a poor economic strategy, especially with terabyte-plus capacity disk drives now common.
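
To make the economics concrete, here is a back-of-the-envelope calculation. Only the 45 percent allocation rate comes from the figures above; the installed capacity and per-terabyte cost are purely illustrative assumptions.

```python
# Illustrative arithmetic only: installed capacity and cost per TB are assumptions;
# the 45 percent allocation rate is the "at best" figure cited above.
installed_tb = 100          # hypothetical installed open systems disk capacity
allocation_rate = 0.45      # at best, ~45% of available space is allocated
cost_per_tb = 500           # hypothetical fully loaded cost per TB (USD)

allocated_tb = installed_tb * allocation_rate
idle_tb = installed_tb - allocated_tb
print(f"Allocated: {allocated_tb:.0f} TB, idle: {idle_tb:.0f} TB "
      f"(~${idle_tb * cost_per_tb:,.0f} of purchased capacity doing no work)")
```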

Thin provisioning, essentially a virtualization technique that allocates physical space only when data is actually written rather than reserving it up front, addresses this problem and is now available on many open systems disk arrays. Not surprisingly, this capability first appeared on the mainframe in 1965 with OS/360, which enabled allocated but unused disk space to be released. Further improvements in mainframe disk storage utilization arrived with the initial implementation of Data Facility Storage Management System (DFSMS) in 1988, which became an effective policy engine for managing storage resources. With DFSMS or similar tools, mainframe disk utilization typically reaches 80 percent or better, nearly twice that of open systems disks, significantly improving storage economics.
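
The sketch below illustrates the basic idea behind thin provisioning as a toy model, not any vendor's implementation: a volume advertises a large logical size to the host, while physical blocks are drawn from a shared pool only when data is actually written.

```python
# Toy model of thin provisioning: logical capacity is promised up front,
# physical blocks are consumed from a shared pool only on first write.

class ThinPool:
    def __init__(self, physical_blocks):
        self.free = physical_blocks           # physical blocks actually installed

    def take_block(self):
        if self.free == 0:
            raise RuntimeError("pool exhausted: add physical capacity")
        self.free -= 1

class ThinVolume:
    def __init__(self, pool, logical_blocks):
        self.pool = pool
        self.logical_blocks = logical_blocks  # size advertised to the host
        self.mapped = {}                      # logical block -> data, mapped on demand

    def write(self, block_no, data):
        if block_no >= self.logical_blocks:
            raise IndexError("write beyond advertised volume size")
        if block_no not in self.mapped:
            self.pool.take_block()            # physical space is allocated only now
        self.mapped[block_no] = data

    def read(self, block_no):
        return self.mapped.get(block_no, b"\x00")  # unwritten blocks read back as zeros

# Two volumes each advertise 1,000 blocks but are backed by only 800 physical blocks;
# as long as actual writes stay within the physical pool, both hosts are satisfied.
pool = ThinPool(physical_blocks=800)
vol_a = ThinVolume(pool, logical_blocks=1000)
vol_b = ThinVolume(pool, logical_blocks=1000)
vol_a.write(0, b"data")
print("physical blocks still free:", pool.free)   # 799
```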

Because tape capacities are large and rapidly growing, now up to 1 TB for native mainframe cartridges, the amount of data written to a tape is often well below its native capacity. With an integrated VTL, the ability to stack multiple logical volumes on a single tape cartridge greatly improves cartridge utilization while reducing the number of cartridges required. On average, tape storage benefits from a 2:1 compression ratio, essentially doubling the native cartridge capacity. Mainframe tape users with integrated tape library architectures consistently attain tape cartridge utilization levels above 80 percent.
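
A quick sketch of the arithmetic behind stacking logical volumes on cartridges follows. The logical volume sizes and the first-fit placement are invented for illustration; only the 1 TB native capacity and the 2:1 average compression ratio come from the text above.

```python
# Illustrative only: logical volume sizes and placement policy are hypothetical;
# the 1 TB native cartridge capacity and 2:1 compression ratio come from the text.
native_capacity_gb = 1000        # 1 TB native mainframe cartridge
compression_ratio = 2.0          # average 2:1 compression
effective_capacity_gb = native_capacity_gb * compression_ratio

logical_volumes_gb = [40, 25, 60, 10, 35] * 10   # 50 small logical volumes

# Without stacking: one logical volume per physical cartridge, most capacity wasted.
cartridges_unstacked = len(logical_volumes_gb)
util_unstacked = sum(logical_volumes_gb) / (cartridges_unstacked * effective_capacity_gb)

# With a VTL stacking logical volumes onto cartridges (simple first-fit fill).
cartridge_fill = []
for vol in logical_volumes_gb:
    for i, used in enumerate(cartridge_fill):
        if used + vol <= effective_capacity_gb:
            cartridge_fill[i] += vol
            break
    else:
        cartridge_fill.append(vol)
util_stacked = sum(logical_volumes_gb) / (len(cartridge_fill) * effective_capacity_gb)

print(f"unstacked: {cartridges_unstacked} cartridges at {util_unstacked:.0%} utilization")
print(f"stacked:   {len(cartridge_fill)} cartridge(s) at {util_stacked:.0%} utilization")
```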
