
IT Sense: Enterprise Storage: Back to DASD?


There are those who would argue that the history of distributed computing is a classic illustration of the old saw, “What goes around comes around.” If the 1970s witnessed the movement of computing out from under the protected shelter of the centralized data center and into the decentralized wilds of the cubicle farms and equipment closets of the corporate workplace, then the early 2000s are presenting a logical countertrend as distributed computing takes a course that leads it back into the glass house once more.

Indeed, all kinds of colorful metaphors are being pressed into service to describe the phenomenon, from simple physics — for every action there is an equal and opposite reaction — to philosophy — Yin and Yang — to classical metaphysics — the One and the Many. Popular adjectives used by analysts include “cyclical,” “inevitable” and “dialectic.”

Nearly everyone agrees that computing is “coming home” to the data center. However, for the most part, these people are wrong.

In fact, computing never left the data center. Before Y2K, it was commonly understood that 70-odd percent of mission-critical applications continued to reside on Big Iron. That was a full 30 years after the so-called distributed computing revolution supposedly gutted the centralized mainframe computing model. Post-Y2K, the percentage of mainframe-hosted apps fell — but not by much. For all its warts, the central data center model still provided the only trustworthy environment for use in supporting important business processes.

The reason was simple: control. The mainframe data center provided control: access controls to keep out the riff-raff, environmental controls to keep out the moisture and the heat, power controls to keep out the spikes and surges, and, most important, operating system controls to provide disciplined IT staff with the tools they needed to keep out the bugs, keep down the costs, and keep up the performance.

The 1970s distributed computing revolution was an experiment in sharing control. Advocates proposed the use of simpler operating systems that required little administration, running on simpler hardware that required no special environmental provisions; in other words, organizations began sharing control with vendor hardware and software engineers. Advocates further proposed that users operate, administer and manage their own shrink-wrapped application software; in this way, organizations also shared control with vendor application software developers.

One thing that the distributed computing model lacked, however, was a strategy for sharing management responsibility. It was soon discovered that 100 or more of anything was inherently unmanageable. Common tasks such as data backup, provisioning, and security — provided as a function of the operating system in the mainframe world — were not accounted for in the distributed computing paradigm. The management “baby” had been thrown out with the proverbial bathwater, and the last 20 years have been about trying to add these capabilities back into the distributed environment.

Shared Storage Underscores the Problem

I was reminded of this recently when I talked with a vendor of a storage virtualization software product in Northern California. For those who do not understand storage virtualization (I get confused from time to time when a vendor’s marketing department gets ahold of the term), it refers to a capability to aggregate physical storage devices and array storage partitions, usually represented by Logical Unit Numbers (LUNs), into larger “virtual volumes.” These virtual volumes can then be presented to operating systems and used just like physical disk drives.

The rationale for virtualization is to reduce the number of devices that need to be managed. The virtualization software is supposed to insulate distributed server administrators, as well as operating systems and application software, from the complexities of the storage environment.
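
To make the aggregation idea concrete, here is a minimal sketch of how a virtualization layer might concatenate several LUNs into one logical block address space and translate each logical block back to a physical device. The names (Lun, VirtualVolume) and block sizes are purely illustrative assumptions, not any vendor’s implementation.

```python
# Minimal sketch of the aggregation behind storage virtualization.
# Class names and sizes are hypothetical, not a real product's API.

class Lun:
    """A physical LUN exposed by an array, sized in blocks."""
    def __init__(self, name: str, size_blocks: int):
        self.name = name
        self.size_blocks = size_blocks

class VirtualVolume:
    """Concatenates several LUNs into one larger logical volume.

    The host sees a single block address space; the virtualization
    layer translates each logical block to (LUN, physical block).
    """
    def __init__(self, luns: list[Lun]):
        self.luns = luns
        self.size_blocks = sum(l.size_blocks for l in luns)

    def translate(self, logical_block: int) -> tuple[str, int]:
        if not 0 <= logical_block < self.size_blocks:
            raise ValueError("block address out of range")
        offset = logical_block
        for lun in self.luns:
            if offset < lun.size_blocks:
                return lun.name, offset
            offset -= lun.size_blocks
        raise AssertionError("unreachable")

# Three modest LUNs presented to the host as one larger "disk."
volume = VirtualVolume([Lun("array1:lun0", 2048),
                        Lun("array1:lun1", 2048),
                        Lun("array2:lun0", 2048)])
print(volume.size_blocks)        # 6144 logical blocks
print(volume.translate(5000))    # ('array2:lun0', 904)
```

The host simply writes to one 6,144-block volume; only the virtualization layer knows that block 5,000 actually lands on a LUN belonging to a different array.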
