Storage

The delivery of IT services is more difficult today than ever due to several factors. First is the constant growth of data volumes, which not only stresses primary storage but also makes backup and recovery extraordinarily difficult. This is exacerbated by users’ much higher expectations for data availability, which in turn severely restrict backup windows and restore times. In addition, the expanding use of virtualization and cloud-based services gives users a taste of immediacy and location independence they quickly come to expect: virtual machines can be created in minutes, and data can be accessed via the cloud rapidly from any location, on multiple devices. Mainframe systems aren’t immune to these developments and are increasingly being asked to provide unprecedented services and service levels, particularly in terms of data availability and protection. As the business becomes accustomed to secure, scalable, efficient backup procedures and instant recovery of applications, mainframe systems must adjust.

A key challenge for mainframe deployments in delivering these kinds of service levels rests with the extensive use of tape for backup and recovery, batch processing, Hierarchical Storage Management (HSM), fixed-content archiving, and more. While tape does an excellent job of providing cost-effective, long-term storage and supporting high throughput rates, working with it can be slow because of the setup required. If you want to recover last year’s payroll data from offsite tape (a fairly common scenario even today), you must first locate the right tapes, transport them to your data center, mount them, and read them back sequentially. This can mean days of delay before the actual recovery can even begin. If you need more tapes mounted than you have drives available, additional delays occur.

Tape can also be unreliable given its many mechanical parts; tape libraries, tape media, robotic arms, and the like offer multiple opportunities for error or failure. The fact that the majority of tape issues these days stem from human handling or software errors doesn’t mitigate the point. Tapes aren’t RAID-protected and can easily be lost, corrupted, or stolen, and tape-based Disaster Recovery (DR) is an expensive, time-consuming, laborious process that usually involves a third party. Tape volumes can be replicated remotely, but only with upgraded products and expensive channel extensions; IP-based replication is impractical for many organizations because of the massive (and costly) bandwidth it requires, not to mention additional software. In addition, the aforementioned data growth slows all of these processes and exacerbates the problems.

Disk-to-disk-to-tape solutions can minimize the mechanical processes, but they are faster than standard tape only when the data resides in cache. In addition, they’re often proprietary, require significant management and floor space, and can’t handle certain applications. In the end, to accommodate the many use cases of mainframe tape, organizations often find themselves obligated to buy and operate two or three different solutions in parallel, increasing both equipment and operational costs.

Single System, Multiple Capabilities

Disk-based virtual tape libraries offer an excellent solution to these challenges; when data growth won’t let up, they may be the only way to speed backup and DR as well as batch processing, HSM, and fixed-content applications. Deduplication appliances can be added to the infrastructure to reduce data volumes for backup and DR, but they become another device to be managed separately. Of even more value would be a solution that includes standard primary storage and deduplication storage in a single cabinet. While virtual tape controllers would handle tape emulation and mainframe connectivity, internal storage could support all application needs. Tape files would be directed either to standard disk for unique data types such as DFHSM migration, keeping the data available for fast recall, or to deduplication storage for applications such as backup.
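To make the routing idea concrete, here is a minimal sketch of such a tiering policy in Python. The workload names, tier labels, and the two-tier split are illustrative assumptions, not any specific product’s configuration syntax:

# Illustrative tiering policy for a combined virtual tape system.
# Workload names and tier labels here are hypothetical examples.
STORAGE_POLICY = {
    "hsm_migration": "standard_disk",      # unique data; keep on disk for fast recall
    "backup": "dedup_pool",                # highly redundant; deduplicates well
    "batch": "standard_disk",
    "fixed_content_archive": "dedup_pool",
}

def route_tape_volume(workload: str) -> str:
    """Pick the internal storage tier for an emulated tape volume."""
    # Unknown workloads default to standard disk rather than paying
    # dedup overhead on data that may contain few duplicates.
    return STORAGE_POLICY.get(workload, "standard_disk")

print(route_tape_volume("backup"))         # dedup_pool
print(route_tape_volume("hsm_migration"))  # standard_disk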

Any data volume with duplicate blocks can benefit from deduplication, but backup, with its multiple redundant files, is a prime target. When files are backed up, the same data is copied over and over again, consuming storage space and clogging network bandwidth. By deduplicating those volumes, companies can save on storage purchases and free up bandwidth for replication to a DR site. Deduplication can shrink data volumes by vast amounts (30 times isn’t uncommon) and reduce bandwidth needs by up to 99 percent. These are huge savings, enabling companies not only to keep backup data onsite (and quickly available) longer, but also to replicate many data sets remotely instead of just the most critical ones.
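As a rough illustration of why backup streams deduplicate so well, the following sketch stores each unique block once and keeps only references for repeats. It assumes fixed 4KB blocks and SHA-256 fingerprints; production systems typically use variable-size (content-defined) chunking and far more compact indexes:

import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks, chosen here for simplicity

def deduplicate(data: bytes):
    """Store each unique block once; return (unique store, rebuild recipe)."""
    store = {}    # fingerprint -> block payload, kept only once
    recipe = []   # ordered fingerprints needed to reconstruct the stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:
            store[fp] = block   # first occurrence: keep the bytes
        recipe.append(fp)       # repeats cost only a reference
    return store, recipe

# Ten nightly "backups" of the same 4KB payload: stored once, referenced ten times.
stream = (b"payroll-records-" * 256) * 10
store, recipe = deduplicate(stream)
ratio = len(stream) / sum(len(b) for b in store.values())
print(f"{len(recipe)} blocks referenced, {len(store)} stored, {ratio:.0f}x reduction")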

This combined system would be able to handle all mainframe tape use cases in a single solution, all managed from the same location. It would enable companies to deploy advanced tape replacement and leverage storage tiers while enjoying the benefits of unified management. Other benefits include:

• Faster, less costly backup and restore. Deduplication of backup data dramatically shrinks the amount of data to move, making backup much faster, saving on both storage and transmission costs, getting data to offsite locations faster, and enabling faster recovery.

• Recovery objectives defined by business need. By using disk and deduplicated storage, companies would no longer have to restrict their Recovery Point Objectives and Recovery Time Objectives (RPOs/RTOs) to tape’s limitations; instead, they could be defined according to business need. The time required to write backups to sequential tape has a direct impact on RPO, since it forces a significant period of time to elapse between backups. If it takes 24 hours to back up to tape, then a failure or outage will result in at least that much data loss (see the sketch after this list). Similarly, RTO depends on how long the tape recovery process takes, regardless of what the business needs.

• Improved Total Cost of Ownership (TCO). By combining these tasks, companies would be able to buy and manage a single system instead of buying two or three systems and running them in parallel, while still providing the most efficient environment for each task. Backup could go to deduplication storage while highly interactive data could be directed to high-performance storage. This would save on management costs as well as data center floor space, power, and cooling. Other key savings would come from reducing the costs of tape vaulting, tape purchases, and software licensing for multiple solutions.
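To put numbers on the RPO point above, here is a small illustrative calculation. The function name and the assumption of periodic full backups are mine, not drawn from any particular product, and treating a backup as capturing data as of its start time is a simplification:

def rpo_bounds_hours(interval_h: float, duration_h: float):
    """(best, worst) case staleness of the newest restorable backup.

    Assumes periodic full backups that start every interval_h hours
    and take duration_h hours to finish writing.
    """
    best = duration_h                # failure right as a backup completes
    worst = interval_h + duration_h  # failure just before the next one completes
    return best, worst

# A daily tape backup cycle that takes a full 24 hours to write:
print(rpo_bounds_hours(24, 24))  # (24, 48): at least a day of potential data loss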
