IT Management

Over the last couple of months, several announcements have hit the press from the big players in the distributed computing world. These announcements, in effect, establish proprietary technology stovepipes: “stacks” of specific application, hypervisor, and operating system software bound to branded server, network, and storage hardware. Some of these stacks have been built from products acquired by the “alpha vendor,” while others represent alliances between technology partners.

In the first category, Oracle has purchased Virtual Iron (a hypervisor play), Sun Microsystems (an operating system, file system, middleware, and server/storage hardware OEM), and announced the upcoming release of an Oracle database-optimized computing environment. Meanwhile, Microsoft has joined forces with HP and QLogic to push out a Microsoft-optimized platform, while VMware, Cisco Systems, and NetApp are building a VMware hosting environment with lots of “unified management” capabilities designed to support VMware’s dynamic application re-hosting.

In short, these vendors are building “Mini-Me” mainframes. Reading between the lines of their announcements, they seem to be conceding that open systems computing hasn’t quite delivered on its value proposition.

Anyone around in the early ’80s will recall the mantra of the distributed systems folks. They declared that breaking with the proprietary mainframe platform would usher in a veritable Age of Aquarius. Users were going to run their own IT, rather than wait for changes and fixes from those laggards in the big iron data center. Moreover, they argued, open de jure standards for connectivity and management would be inherently superior to the de facto standards of the mainframe world: they would open up the computing paradigm and propel the development of competing products, reducing cost, enhancing choice, and offering firms an entirely new set of features and functions too often stymied by the myopic and self-serving interests of IBM.

Lured by the appeal of cheap, easily deployed and managed, user-driven technology that could turn on a dime in response to corporate needs, distributed computing took hold in a big way. But some of the promises didn’t quite pan out. Try as they might to educate users, it turned out that Joe the sales guy didn’t want to add “programmer” to his job description, and Susan the accountant had little interest in becoming a network guru. In the final analysis, the products were too complicated, and additional IT staff specializing in distributed or networked computing wares had to be hired. Unfortunately, the new cadre of IT workers lacked the well-defined hierarchical ordering and procedural discipline of their glass-house peers, so companies needed more of them.

De jure standards didn’t work out as anticipated, either. Low-level hardware standards worked fairly well after a long and tedious effort to develop plug-and-play architectures, but the very fact of competition limited the willingness of vendors to make their wares commonly manageable. Today, for example, there are tens of thousands of pages of Fibre Channel standards, developed mainly by vendors working through standards groups such as the American National Standards Institute (ANSI). Those pages ensure that each vendor can develop a storage switch that conforms to the letter of the standards, yet remain absolutely certain that its product won’t interoperate with a competitor’s equally standards-compliant product. Absent common management, capacity, bandwidth, and processor cycles are wasted. Pretty soon, the cost of a lot of “cheap” gear grows into something quite embarrassing. Plus, you need to hire more monks.

Customer push-back on these points helped propel server virtualization into the forefront of IT thought a couple of years ago. But even in this area, vendor competition has produced incompatible hypervisors. That hypervisors are viewed as the future operating systems of “clouds” suggests that these incompatibilities will shortly be baked into the mainframe Mini-Me hardware/software technology stacks. The open systems folks are becoming the very thing they beheld and decried: mainframes.

Mainframes still have the edge, of course. Almost half a century of learning has taught big iron some tricks about resource optimization, management, and virtualization that the Mini-Mes must still master to become competitive. Plus, the discipline of the mainframe operational model has grown up concurrently with the technology itself and represents a huge learning curve for newcomers.

Imitation is the sincerest form of flattery, but I recall something I was told years ago in a training class at IBM: You can admire a teacher and even try to imitate his or her style in your own classroom. Ultimately, however, that makes you an imitator, in many ways inferior to the original whose style you’re emulating. Imitating a mainframe isn’t the same as being one.