Virtual tape systems can provide significantly more functionality for tape data sets than physical tape systems. Built on disk arrays, virtual tape delivers levels of scalability, performance and flexibility for mainframe tape operations that are simply unavailable with physical tape. In many cases, a single virtual tape array can take the place of multiple tape subsystems, meeting functionality requirements for batch processing, data protection, Hierarchical Storage Management (HSM) migration and archiving. In addition, virtual tape can extend Disaster Recovery (DR) capability to tape workloads through array-based replication.

New Demands for Protection

Many organizations send their tape data off to a third-party records management company for protection; should a recovery be needed, it may take more than a day just to retrieve the tapes before the recovery can even begin. Virtual tape gets data back to the recovery site much faster. Many mainframe users such as banks and other financial institutions require extremely high levels of availability and protection, with strict Service Level Agreements (SLAs) for uptime and no data loss. They may run two data centers in the same location for instant protection, as well as another data center outside the region to protect against a more widespread disaster. Data protection is paramount, and these organizations define aggressive Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) to ensure minimal data loss and the fastest possible recovery in the event of a disaster.

Data Consistency

One area that can be particularly problematic for mainframe users is keeping data consistent throughout the data protection process. Data must be replicated to protect it from corruption or deletion, and administrators must be confident that it’s available, accurate and recoverable. In addition, they must know the relationship of the replicated data with current transactions and changes. To put it another way, the data itself is important, but equally important is the administrator’s comfort level that the recovery data can be matched up with current activities. Re-syncing disk and tape data sets is a difficult job.

With different functions operating on mainframe disk and tape or virtual tape, it’s up to the administrator to configure replication processes to accommodate their specific needs. Disk and tape replication processes are separate, and when disk and tape are out of sync, there can be significant consequences. In fact, some mainframe users are required to declare a disaster if tape data sets fall behind disk by even a few minutes.

Imagine the following scenario: a main data center (DC1) houses both disk and tape data sets, and a remote data center (DC2) receives offsite replication. Direct Access Storage Device (DASD) volumes replicate synchronously while tape replicates asynchronously:

Disk ahead of tape: Disk is replicated from DC1 to DC2 on every I/O, while tape is replicated from DC1 to DC2 every 15 minutes. As a result, disk data is ahead of tape data. The disk catalogs are current, while the most recent replicated tape data may be up to 15 minutes old.

State of this environment: If DC1 is lost, your catalogs contain entries pointing to tapes that were never replicated to DC2, and your recently migrated Migration Level 2 (ML2) data sets are gone with them. In this scenario, you have both data loss and data integrity problems.
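The gap described above can be sketched in a few lines of Python. This is a hypothetical model, not a real product API: the volume serials, the 15-minute cycle and the replication functions are all illustrative assumptions, used only to show how a synchronously mirrored catalog ends up pointing at tapes the remote site never received.

```python
# Illustrative model (hypothetical names): disk replicates on every write,
# tape ships only at each 15-minute asynchronous cycle.
tape_dc1 = {}        # tape volumes at the primary site
tape_dc2 = {}        # asynchronous tape copy: updated only at each cycle
catalog_dc2 = {}     # catalog lives on disk, so it mirrors synchronously

def write_tape(volser, data, minute):
    """A tape write at DC1: the catalog entry mirrors instantly, the tape data does not."""
    tape_dc1[volser] = data
    catalog_dc2[volser] = f"created at minute {minute}"  # disk is synchronous

def tape_replication_cycle():
    """Every 15 minutes, ship all DC1 tape volumes to DC2."""
    tape_dc2.update(tape_dc1)

write_tape("VT0001", "ML2 migration data", minute=0)
tape_replication_cycle()                        # VT0001 is now safe at DC2
write_tape("VT0002", "batch output", minute=7)  # written mid-cycle
# Minute 12: DC1 is lost before the next cycle runs.
orphans = [v for v in catalog_dc2 if v not in tape_dc2]
print(orphans)  # ['VT0002'] -- catalog entries for tapes DC2 never received
```

Every volume in `orphans` is exactly the problem the scenario describes: a current catalog entry with no recoverable tape behind it.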

Applications exist that keep disk volumes in sync; what would be of great value is a consistency capability that encompasses both disk and tape. This would make all data universally consistent, and enable disk and tape to be managed in the same consistency group, greatly improving recoverability. A key benefit would be its deterministic character, providing assurance to the administrator that all data is in sync.

The method for achieving this would require a synchronous replication solution to a highly available remote site. With data sets in the same consistency group, an automated replication function would send the writes to both disk and tape targets. For additional protection, asynchronous replication to a third site would be helpful, enabling organizations to implement the “star” replication configuration for an out-of-region data center. Having a single replication method and combined consistency grouping for both disk and tape would improve data availability and simplify recovery operations.
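The combined consistency group described above can be sketched as follows. This is a minimal illustration of the concept, assuming a simple in-memory model; the class and target names are invented for the example and do not correspond to any product interface.

```python
# Hypothetical sketch of a combined disk-and-tape consistency group: a write
# is acknowledged only after every target in the group has applied it, so
# disk and tape replicas can never diverge from one another.
class ConsistencyGroup:
    def __init__(self):
        self.targets = {}   # target name -> ordered list of applied writes
        self.sequence = 0   # group-wide write ordering

    def add_target(self, name):
        self.targets[name] = []

    def write(self, payload):
        """Synchronous replication: apply to every target before acknowledging."""
        self.sequence += 1
        record = (self.sequence, payload)
        for log in self.targets.values():
            # In a real system a failure here would suspend the whole group
            # rather than half-apply the write.
            log.append(record)
        return self.sequence

    def consistent(self):
        """Deterministic check: every target holds the identical write history."""
        logs = list(self.targets.values())
        return all(log == logs[0] for log in logs)

group = ConsistencyGroup()
group.add_target("disk@DC2")
group.add_target("tape@DC2")
group.write("DBMS log update")
group.write("ML2 migration to virtual tape")
print(group.consistent())  # True: disk and tape share one point in time
```

The deterministic check is the point: because both targets sit in one group, the administrator never has to reconcile a disk point-in-time against a separate tape point-in-time. A third, asynchronous target for the out-of-region "star" site would consume the same ordered write stream.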

To enhance ease of use, an automated failover application could be applied to the universally consistent disk and tape data, pulling all the pieces together to get business back in operation quickly. With this consistent approach to data recovery, mainframe administrators could be assured that data such as ML2 data sets and DBMS logs would not be lost, while ensuring the fastest possible business resumption.

Knowing that replicated disk and tape data sets are consistent would take a big weight off the administrator’s mind. If a disaster occurred, it would be much simpler to find the correct tape volumes and begin restore, and administrators would have peace of mind knowing they could recover faster because of the universal data consistency. In addition, it would provide assurance of recovery for regulatory compliance, lifting another burden.

Single Solution Platform Eases Management

A virtual tape library that offered universal data consistency would deliver simpler management, faster recovery and regulatory compliance. This type of system could be configured with highly available storage and built-in applications for synchronous and asynchronous replication as well as automated recovery. 

As you evaluate disk-based mainframe virtual tape solutions, be sure to examine other attributes as well. For example, will the solution work seamlessly and transparently with your existing applications and business processes, or does it require code changes or alterations to production operations or Job Control Language (JCL)? If it interrupts current applications and processes, the disruption may not be worth the effort. Also, a virtual tape platform with multiple storage options can handle various tape processes while easing management. Performance is always important, along with scalability, security and encryption.