Oct 24 ’13
DB2: What’s Tape Got to Do With It?
Many people in the computer field are surprised when they learn that tape was available before disk. Think about your favorite old TV shows or movies; when the director wanted to show data being processed, you would see flickering lights on part of the CPU or console area. Often, you would see the old-fashioned tape reel being written to or read. Tape was one of the earliest means of storing and conveying large amounts of data and information. IBM introduced the 726 tape unit in 1952, while the first IBM disk, the 350, was introduced in 1956.
When it comes to data, DB2 professionals generally think about disk, but most sites store four to 15 times more data on tape than on disk. Today, a variety of tape types is available, and some tapes aren’t even tapes at all but rather disk. Broadly, IBM sells three categories: manual, automated and virtual tape. Cartridges and disks have generally replaced the old tape reels, allowing for better reliability, better performance and reduced floor space. Gone are the days when a computer operator had to frequently clean the tape drives, splice broken tape or try to unkink a reel when it bunched up.
Manual tapes still exist in the form of cartridges. Backing up a 3390 mod 3 no longer consumes five tape reels. Modern cartridges can hold 4 TB of information, and it isn’t uncommon to place more than 10 full-volume disk backups on one cartridge. An Automated Tape Library (ATL) uses robotic components to mount cartridges. The beauty of the ATL is that tape management is fast; it doesn’t rely on an operator to mount or move tapes around, as it’s all handled by robotics. Neither manual nor ATL tapes use disk, so there’s no virtualized layer. One key drawback to consider with cartridges that hold up to 4 TB is what happens when a tape snaps; we’ll examine that later.
Some tape data sets never make it to tape; rather, they’re redirected to disk with products such as IBM Tape Mount Management (TMM) and Virtual Tape Facility for Mainframe (VTFM). Tape virtualization is more commonly done with the IBM TS7720 (tapeless tape) or the TS7740 (Hydra). There are other tape technologies, such as deduplication, that won’t be discussed here. Tapeless tape is fast becoming a favorite at many sites: You see tape operations such as mount, unload and keep, but no tapes exist; it’s all RAID 6 disk. The TS7740, by contrast, combines the two technologies; it’s front-ended by RAID 5 disk but back-ended by physical tape.
Tape virtualization has many similarities to disk virtualization. One of the commonalities is that as I/O devices, they have several powerful processors (generally Power7) to deal with all the virtualization and other requirements.
From an MVS perspective, virtualized tape operates as a real tape; for example, tape mount commands are real. MVS issues these commands, but no tapes are involved. Mounting a scratch tape for a DB2 archive log or image copy data set shows you the total tape flow: mount the tape, display which tapes are in use and rewind the tape. Although MVS shows you all these operations, none are actually occurring.
One key drawback to these virtualized tapes is that MVS tells DB2 the data sets residing on them are on real tape and therefore can’t be shared; they must be used serially. No parallel operations are possible, even when the data actually resides on the disk portion of the device. Consider, for example, a DB2 archive log data set that resides on the disk portion of virtual tape and now has 10 recovery jobs wanting to use it in parallel. Unfortunately, the 10 jobs must execute serially; no parallelism is possible unless the archive log data set is brought back down to real disk so DB2 knows it exists on disk. An alternative is to have Hierarchical Storage Management (HSM) migrate the archive log data sets from disk to tape, in which case they will be recalled to disk when required. The one exception to the parallelism issue is VTFM, which has the look of tape and resides on disk but allows parallel requests through Parallel Access Tape (PAT).
As a point of interest for DB2 professionals, the repository that manages tapes in IBM Virtual Tape Systems (VTSes) resides on DB2 for LUW (in this case, UNIX in the virtualized hardware). You can’t connect to that DB2 to read the repository, as it’s totally segregated.
For virtualized tape, the TS7720 is a disk-only system, while the TS7740 is front-ended by disk and back-ended by real tape. The disk portion of both the TS7720 and TS7740 is called the Tape Volume Cache (TVC). Tape allocations occur to the TVC, meaning that when DB2 allocates your archive log or image copy tape data sets, although it looks like the allocation went to a tape, it resides on disk; in this case, the TVC. For the TS7740, DB2 doesn’t deal with the back-end tape; we only know and care about the logical tape volume written to the TVC. One logical or physical tape can hold many data sets. If the archive log data set was written to tape volser 123456, that volser is what matters to us and DB2, not the real physical tape on which it resides.
Not all allocations to the TVC of the TS7740 should be treated the same: Although the TVC has a large amount of space, in the big scheme of total space required, it’s limited. DB2 professionals should tell the storage administrator how long to keep data on the TVC vs. the physical tape. What are the chances you will need your archive log or image copy data sets in the near future? If the chances are low, the storage administrator can send your specific data sets off to tape more quickly, leaving the TVC with more space available for data sets that require faster access. TVC residency time is influenced by the SMS Storage Class value for Initial Access Response Time (IART), which sets the volume’s preference level.
Logical tapes that are moved to physical tapes in the TS7740 are stacked, similar to the way HSM stacks data sets on HSM-owned tapes when using the MOD approach. This avoids tape waste. If you took a 4 TB manual tape and used 20 MB and didn’t mod onto it, then that’s all that’s used of the 4 TB, which is an enormous waste. If five data sets reside on one logical tape, and only data set three is required, the entire logical tape is brought back into the TVC. Reading directly from a manual, ATL or TS7720 tape may be much faster than from the TS7740 when a data set resides on physical tape because the logical volume must first be read from the physical tape that’s recalled into the TVC and then finally read. This is similar to the way we use disk—data goes through the cache on reads and writes.
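The waste that stacking avoids is easy to quantify. As a back-of-the-envelope sketch of the article’s example (a single 20 MB data set on a 4 TB cartridge that’s never MODded onto; the figures are illustrative, not measurements):

```python
# Utilization of a 4 TB cartridge holding one 20 MB data set
# that nothing else is ever stacked (MODded) onto.
TAPE_CAPACITY_MB = 4 * 1024 * 1024   # 4 TB expressed in MB
DATA_SET_MB = 20

used_pct = 100 * DATA_SET_MB / TAPE_CAPACITY_MB
print(f"Cartridge utilization: {used_pct:.5f}%")  # well under 0.001%
```

Stacking logical volumes onto physical tape, as the TS7740 does, keeps that figure from ever applying to real media.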
With virtual tape, there’s a virtual volume reconciliation and reclamation when data on logical volumes is modified. In concept, this is similar to HSM’s reconciliation and reclamation.
Virtual tape allows for up to 2 million logical volumes. At first glance, that seems incredible. You might think you will never run out of tapes, but what about a DB2 customer that backs up 100,000 table spaces and indexes daily and retains the copies for 10 days? If nothing special is done and every data set is allocated to its own scratch tape, you would have exhausted half the allowed logical volumes at the end of 10 days. This problem can be avoided by using the same approach you use today with data sets residing on manual or ATL tapes: Use JCL backward pointers (VOL=REF=) to the previous data set allocation, which will use the MOD approach for the logical tape.
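The exhaustion math above can be sketched quickly. This is the hypothetical scenario from the text (100,000 objects copied daily, one scratch logical volume each, 10-day retention) checked against the 2 million logical-volume limit:

```python
# Steady-state logical-volume consumption when every backup
# gets its own scratch tape and copies are kept 10 days.
OBJECTS_PER_DAY = 100_000
RETENTION_DAYS = 10
LOGICAL_VOLUME_LIMIT = 2_000_000

volumes_in_use = OBJECTS_PER_DAY * RETENTION_DAYS
print(volumes_in_use)                              # 1,000,000
print(volumes_in_use / LOGICAL_VOLUME_LIMIT)       # 0.5 -> half the limit
```

Stacking with VOL=REF= collapses many of those data sets onto one logical volume, which is why the MOD approach defuses the problem.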
One advantage of virtual tape is that it lets you set up a grid or peer-to-peer implementation, a popular option among customers. Suppose your shop has two sites, each with its own TS7740. When you create DSN1 on tape volser 123456, it’s duplicated to the secondary site’s TS7740. In the event the primary site’s TS7740 fails or is unavailable, the data set will automatically be read from the secondary site. This approach also negates or reduces the need to create software duplexed tapes or data sets, although there are various reasons why you may still want to keep two software duplexed copies of the archive log data sets.
Special attention is required for data sets such as the archive log, image copy or HSM data sets you want to software duplex in the TS7740. ARCHLOG1 and ARCHLOG2 can wind up on the same physical tape, which defeats the purpose of duplexing the data sets. What happens if the physical tape snaps as mentioned earlier? Both copies are now lost unless a grid or peer-to-peer implementation is in place. The storage administrator can set up different physical volume pools to ensure these types of data sets reside on separate physical volumes.
Virtual tapes emulate 3490E devices. The SMS Data Class sets each virtual tape to 400, 800, 1000, 2000, 4000 or 6000 MB, with the default generally 800 MB. The value is important to DB2 customers, especially when it comes to archive log data sets. The DB2 Boot Strap Data Set (BSDS) has a slot for each archive log data set volume. ZPARM CATALOG in macro DSN6ARVP determines whether an archive log data set will be cataloged in the ICF catalog; the default is NO. Disk archive log data sets are required to be cataloged; therefore, only one slot, for the first volser, is required. Tape archive log data sets, on the other hand, aren’t required to be cataloged; each non-cataloged tape volume takes up one slot in the BSDS, while a cataloged tape data set takes up only one slot, similar to disk. Let’s consider a 4 GB active log data set that’s being archived to tape, not cataloged, with the Data Class using the default of 800 MB. In this scenario, five BSDS slots are used to accommodate the archive log data set.
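The slot arithmetic is a straightforward ceiling division. A minimal sketch, treating the 4 GB active log as roughly 4,000 MB (which is what the article’s count of five slots implies; with a strict 4,096 MB and no tape compression, a sixth volume would be needed):

```python
import math

VIRTUAL_TAPE_MB = 800  # Data Class default volume size

def bsds_slots(archive_mb: int, cataloged: bool) -> int:
    """One slot per tape volume if not cataloged; one slot total if cataloged."""
    if cataloged:
        return 1
    return math.ceil(archive_mb / VIRTUAL_TAPE_MB)

print(bsds_slots(4000, cataloged=False))  # 5 slots, as in the article
print(bsds_slots(4000, cataloged=True))   # 1 slot
```

Cataloging the tape archive logs, or choosing a larger Data Class volume size, is the obvious lever for cutting slot consumption.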
ZPARM parameter MAXARCH controls the maximum number of slots the BSDS can hold, while ARCRETN controls the number of days an archive log data set is retained. Let’s assume MAXARCH=1000, ARCRETN=21 and, on average, 100 archive slots used in the BSDS daily. The intention is to keep 21 days’ worth of archive log data sets without exceeding 1,000 slots. In this scenario, all the slots are exhausted in 10 days, so the goal of keeping 21 days’ worth of archive log data sets can’t be met. Specific recoveries may fail if an archive log data set more than 10 days old is required (there’s a complicated solution to this problem, but it isn’t covered here).
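The mismatch between the two parameters is easy to check up front. A sketch of the retention math in the example above:

```python
# Does MAXARCH cover the intended ARCRETN retention period?
MAXARCH = 1000        # maximum BSDS archive slots
ARCRETN_DAYS = 21     # intended retention in days
SLOTS_PER_DAY = 100   # average archive slots consumed daily

days_until_wrap = MAXARCH // SLOTS_PER_DAY
print(days_until_wrap)                    # 10
print(days_until_wrap >= ARCRETN_DAYS)    # False: oldest entries age out early
```

Sizing MAXARCH at roughly SLOTS_PER_DAY × ARCRETN_DAYS (2,100 here) would keep the BSDS from wrapping before the retention period ends.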
Although you have a choice of 400, 800, 1000, 2000, 4000 or 6000 MB, only the space actually used occupies the TVC and/or tape. If you request 4000 MB but the archive log data set contains only 30 MB (because, say, the ARCHIVE LOG command truncated the active log early), only 30 MB is occupied on the TVC and/or tape. On the other hand, if a data set hits logical tape end of volume, a new logical tape with the same capacity is requested. Keep in mind that a data set on tape can reside on up to 255 volumes, while a data set on disk can reside on only 59 volumes.
The following ZPARM parameters are used only when writing to disk and are therefore ignored for tape, even when writing to the TVC:
• PRIQTY specifies the amount of primary space to be allocated for an archive log disk data set.
• SECQTY specifies the amount of secondary space to be allocated for an archive log disk data set.
• ALCUNIT controls the units (blocks, tracks or cylinders) in which primary and secondary space allocations are obtained for an archive log disk data set.
MAXRTU is another common ZPARM parameter often set incorrectly when dealing with virtual tape. MAXRTU specifies the maximum number of dedicated tape units that can be allocated to concurrently read archive log tape volumes. The default is two, which was a good value for manual tape, but virtualized tape allows for up to 256 virtual drives per cluster, so consider increasing MAXRTU when dealing with virtual tape. Along with MAXRTU, DEALLCT determines how long an archive read tape unit may remain unused before it’s deallocated. DEALLCT should be set high enough to keep archive logs mounted for operations such as mass recoveries, but low enough not to lock tape units for too long a period. Don’t set this value to 1440 or NOLIMIT unless you will follow it with a SET ARCHIVE command; otherwise, the tape and the unit won’t deallocate until DB2 shuts down. In a data sharing environment, the archive tape isn’t available to other members of the group until the deallocation period expires. For data sharing environments where recoveries are run from multiple members, set DEALLCT=0. If all recoveries are executed from one member, DEALLCT can be higher.
ZPARM BLKSIZE determines the block size to be used for the archive log data set. Most customers set this to 24576 or 28672. Although 28672 is optimal for tape (for tape, generally the larger the block size the better), if you bring the archive log data set back to disk to allow for parallel access, you will lose a considerable amount of space: Only one 28672-byte block fits on each track, while 24576 allows two blocks per track. If you’re going to bring the archive log data sets back to disk, specify 24576. If you specified 28672, don’t re-create the data set on disk as 24576, as this will cause a BSAM read error when reading a block.
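The disk-space penalty comes from 3390 track geometry. A sketch of why 24576 beats 28672 for archive logs that come back to disk, using the standard 3390 figures (a track holds 56,664 bytes as a single block, but two blocks fit only up to the half-track size of 27,998 bytes; the helper below only models the two block sizes the article discusses):

```python
# 3390 track geometry for the two common archive log block sizes.
TRACK_CAPACITY = 56_664     # bytes on a 3390 track as one block
HALF_TRACK_BLOCK = 27_998   # largest block size permitting 2 blocks/track

def blocks_per_track(blksize: int) -> int:
    """Blocks per 3390 track for the block sizes discussed (24576, 28672)."""
    return 2 if blksize <= HALF_TRACK_BLOCK else 1

for blksize in (24_576, 28_672):
    per_track = blocks_per_track(blksize)
    print(blksize, per_track, per_track * blksize)  # bytes stored per track
```

At 28672, each track carries 28,672 bytes instead of 49,152, so the "larger is better" rule for tape roughly halves track utilization once the data set lands on disk.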
Image copy block size can exceed typical allocations when written to tape by using the Large Block Interface (LBI), which allows customers to specify a value of up to 256 KB. This doesn’t apply to archive log data sets, which have a maximum block size of 28 KB. Some customers with large amounts of data, and especially large page sizes, have seen as much as a 50 percent reduction in run time, and some CPU reduction as well, by implementing LBI and using larger block sizes.
Key Issues to Consider Before Using Tape
Tape is an excellent medium for a variety of DB2-related data sets, such as archive logs, image copies, very large sort work data sets and other assorted data sets. Data sets such as VSAM, PDS and PDSE must all reside on disk with a further restriction that ICF Catalogs, PDS and PDSE can’t exceed one disk volume. Some key issues to consider before using tape:
• No concurrency or sharing is allowed within a data set or volume; therefore, parallelism doesn’t exist (except when using VTFM).
• When mass recoveries of hundreds or thousands of objects are required, data sets such as archive logs and image copies may queue for longer periods, waiting for the serialization of a physical tape volume (even in the TS7740), causing elongated recovery times.
• Decide when compression should occur. Starting with DB2 9 New Function Mode (NFM), you can compress your archive log data sets on disk and generally create smaller data sets. Avoid compressing objects on disk and then hardware compressing them on tape, as this will generally cause a reduction in tape compression efficiency.
• Tapes perform best using pure sequential access. Other access types may result in some performance degradation.
• Data sets residing on tape can’t be striped.
Tape is an important part of every DB2 environment. Choosing the right technology for the right data sets is essential and can mean the difference between success and failure. Part of this consideration is ROI. You may find that placing some of your data sets on a tape environment greatly reduces the overall cost. Choose wisely between cost and function.