Storage

One of the most frightening topics for a newcomer to z/OS is disk storage management. With more than 50 years of history, beginning with the introduction of the IBM System/360 in 1964, mainframe disk systems have had a long time to mature and develop, and in the process, have achieved an extraordinary level of both reliability and complexity. Despite this high level of sophistication, the main architecture used in z/OS disk administration—System Managed Storage (SMS)—is really quite simple to understand and use.

A Bit of History

IBM introduced SMS in 1989 as a means of dealing with the astonishingly rapid growth of DASD (disk) storage and the out-of-control problems that attended it. Prior to SMS, DASD management was mostly ad hoc—disk volumes were “owned” by various user groups and might contain all sorts of different types of files; data sets were backed up inconsistently and placed on volumes haphazardly; and there was little centralized control of disk space utilization. As the amount of disk storage in large enterprise systems began to grow in the ’80s from dozens to hundreds to thousands of gigabytes, the lack of consistent rules for managing data became a serious problem. Some of the most disruptive types of problems were due simply to explicitly specified volume serial numbers in JCL. As new DASD volumes were added to the system or old ones removed, jobs would often fail when the desired data set wasn’t on the volume or the volume no longer existed. Data set placement and out-of-space errors were frequent, and storage administrators spent much of their time responding to emergencies by changing JCL and shifting data sets from volume to volume. By the late ’80s, these problems were critical.
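To make the exposure concrete, the following sketch shows the kind of pre-SMS DD statement that caused these failures (the data set name and volume serial are hypothetical):

//* Pre-SMS allocation: the JCL hard-codes a device type and a specific
//* volume serial. If volume PROD01 is removed, or the data set is moved
//* elsewhere, the job fails until someone changes the JCL.
//NEWFILE  DD DSN=ABC.PAYROLL.DATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=3390,VOL=SER=PROD01,
//            SPACE=(CYL,(10,5))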

The answer to the situation lay in the concept of automation via System Managed Storage: the idea that, instead of relying on human intuition and intervention to determine how data sets and volumes should be used, the system could manage itself based on a set of clearly defined policies and definitions. In 1989, IBM introduced major changes to what was then the MVS operating system, especially in the area of data set creation (i.e., allocation, handled by the Data Facility Product [DFP] component of the operating system). IBM's version of SMS was branded DFSMS, with “DF” standing for “Data Facility” and “SMS” referring to the Storage Management Subsystem. Because the concept of SMS extended beyond just allocation (DFP), the “DF” prefix was applied to a number of other DASD-related components, including DFHSM and DFDSS. This renaming was later extended, so the various SMS components are now named DFSMSdfp, DFSMShsm, DFSMSdss, DFSMSrmm, and so on.

System Management of Storage

DFSMS introduced the concept that the management of disk storage should be automated. But what do we mean by “automated”? Exactly how does the system manage itself? To answer this question, we must first ask what goals we’re trying to achieve. Some of these goals might be:

1. Ensure data sets are allocated appropriately; i.e., placed on a volume that meets the application's requirements for performance, availability (e.g., mirroring) and so on.
2. Balance the allocation load across volumes and across the system so disk space is used evenly and out-of-space errors are minimized.
3. Eliminate the explicit links between data sets and DASD volumes so data sets can be placed on the most suitable volume, and can move from place to place, without requiring changes to JCL and control statements (see the JCL sketch following this list).
4. Ensure data sets and volumes are backed up appropriately and that backups are retained for the proper amount of time.
5. Move data sets, as they age, to less expensive media, such as tape, and retain them as necessary until they ultimately reach the end of their usefulness and are deleted.
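Goal 3 is visible directly in the JCL. In contrast to the pre-SMS sketch shown earlier, an SMS-managed allocation of the same (hypothetical) data set names no volume at all; SMS chooses an eligible volume from the appropriate storage group:

//* SMS-managed allocation: no UNIT or VOL parameter. The system places
//* the data set on a suitable volume, and the data set can later move
//* without any change to this JCL.
//NEWFILE  DD DSN=ABC.PAYROLL.DATA,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(10,5))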

If these goals, or similar ones, can be achieved automatically, with a minimum of human intervention as the system grows and changes, then we have a self-managed system. Since its introduction, DFSMS has provided this automation, leaving the storage administration staff free to concentrate on the more important and interesting tasks of planning for the future.

Policies and Definitions

To achieve the goals defined here, SMS requires a set of policies it can implement. Some policies govern the “front end” of storage management (data set creation and placement), such as “all data sets belonging to application ABC must reside on mirrored volumes that may be no more than 80 percent full,” or “TSO user data sets must reside on DASD volumes named ‘TS*’ and should include a high-level qualifier equal to the TSO userid.” Other policies govern the “back end” of storage management (the backup, migration and deletion requirements that apply to a data set for the remainder of its lifecycle after allocation); for example, “work data sets with a DSN beginning with ‘XYZ’ will be deleted after they’ve been unused for three days.” SMS allows policies such as these to be defined and then enforces them automatically, providing a degree of control over an installation's data that was impossible in the days of manual storage management.
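In practice, policies like these are implemented in Automatic Class Selection (ACS) routines. The following sketch suggests how the “XYZ” work-data-set rule might be expressed in the ACS language; the class and filter names are hypothetical, and the retention rule itself (delete after three days unused) would live in the management class definition, with the routine merely assigning that class:

PROC MGMTCLAS                        /* management class ACS routine  */
  FILTLIST WORKDSN INCLUDE(XYZ*.**)  /* DSNs beginning with 'XYZ'     */
  SELECT
    WHEN (&DSN = &WORKDSN)           /* a work data set...            */
      SET &MGMTCLAS = 'DEL3DAY'      /* ...gets the (hypothetical)    */
                                     /* delete-after-3-days class     */
    OTHERWISE
      SET &MGMTCLAS = 'STANDARD'     /* hypothetical default class    */
  END
END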

Classifying Data Sets and Volumes

The policies defined for SMS must have something to work on. That is, if a policy specifies an action, such as “place test data sets whose names begin with ‘ABC’ on volumes WORK01-WORK99,” then there must be a means of classifying data sets and volumes so data sets or volumes with similar characteristics can be dealt with as a group. This classification is at the heart of DFSMS. The following four SMS constructs are used to classify data sets and volumes:

• Data class
• Storage class
• Management class
• Storage group

The first three apply to data sets; each data set can be assigned a data class, storage class and management class at the time it’s created. The last construct—storage group—applies to DASD volumes, not data sets, and allows volumes to be grouped together into pools, or what DFSMS calls storage groups.
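The first three constructs can be requested explicitly when the data set is created, as in the following JCL sketch (the class names are hypothetical, and the installation's ACS routines always have the final say and may override what the user requests):

//* Requesting SMS classes at allocation time.
//MYDATA   DD DSN=ABC.TEST.DATA,DISP=(NEW,CATLG,DELETE),
//            DATACLAS=DCSEQ,STORCLAS=SCFAST,MGMTCLAS=MCWORK,
//            SPACE=(CYL,(5,5))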
