IT Management

Mainframe computing is suddenly sexy again. Not in a hype-filled marketing sense, but in a very real and practical context. And it has been a long time coming, at least as time is measured in the Internet Era.

To hear knowledgeable folks recount the story, the mainframe, with its homogeneous infrastructure and proprietary operating system, had its foibles from the beginning. But these were offset by many pluses.

Mainframes cost a lot of money, both to acquire and to upgrade over time. And software licenses weren’t cheap, either. But companies (including those for which I worked) endured the costs with minimal grumbling as a price for doing business.

Year-over-year growth in software licensing fees was tolerated, in the main, because there were usually incremental functional improvements with each new release, demonstrating that IBM and the independent software vendors were doing something to legitimize their price increases.

This changed with Y2K, when the mainframe-related cost curve suddenly spiked. The reason was obvious: A lot of code, both in home-grown apps and vendor utilities, needed to be fixed.

Doubtless, some vendors took advantage of the situation to gouge the consumer. In some cases, fuel was added to the fire (consumer exasperation, that is) by hype around “open systems” and “end-user computing.” A lot of new technology was sold to “fix the problems of the data center” with a sort of do-it-yourself approach.

Ironically, the very distributed computing market that was created to upend the monolithic mainframe has now subdivided itself into an enterprise class and a small to medium business (SMB) class. The do-it-yourselfers seem to congregate at the lowest rung these days, building infrastructure out of white boxes and value-added software, while the folks at the highest rungs seem to want to build a new monolithic mainframe out of Tinkertoy servers and virtualization software.

Despite all this change, the mainframe has crept along, a fixture in many shops that analysts and others often treat as a dinosaur of computing. This is in spite of the fact that the workloads assigned to mainframes keep climbing steeply. Today’s mainframe workloads, measured in millions of instructions per second (MIPS), have grown from 3.5 million MIPS in 2005 to 12 million MIPS today. IBM, which for a time saw its market share in computing diminish, is now back to pre-2000 levels, thanks in no small part to the mainframe.
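As a rough check on what that growth implies, here is a back-of-the-envelope calculation in Python; the assumption of roughly four years elapsed between the two figures is mine, since “today” is not dated precisely in the numbers above:

# Back-of-the-envelope check on the implied MIPS growth rate.
# The elapsed-time figure is an assumption, not a published number.
mips_2005 = 3.5e6
mips_today = 12e6
years_elapsed = 4  # assumed interval between the two figures

cagr = (mips_today / mips_2005) ** (1 / years_elapsed) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # roughly 36% per year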

In future columns this year, I plan to revisit the economics of mainframes. It occurs to me that mainframes do a better job of handling core computing services than distributed systems from both a CAPEX and an OPEX perspective. You have the tools in the mainframe space to get far more work accomplished with far fewer bodies than would be possible in distributed computing. This idea is resonating with the front office, which is confronting a recessionary economy and wants to constrain OPEX growth in the IT department.
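To make that OPEX point concrete, here is a minimal staffing sketch in Python. Every figure in it (the administrator-to-image ratios and the loaded cost per administrator) is an illustrative assumption of mine, not survey data:

# Illustrative staffing arithmetic for the OPEX argument.
# All figures are assumptions made for the sake of the sketch.
workload_images = 400                # OS images to operate
distributed_images_per_admin = 30    # assumed span of control, distributed servers
mainframe_images_per_admin = 200     # assumed span of control, mainframe LPARs
loaded_cost_per_admin = 120_000      # assumed fully loaded annual cost, in dollars

def annual_labor_cost(images, images_per_admin, cost_per_admin):
    # Administrators needed (rounded up) times loaded annual cost.
    admins = -(-images // images_per_admin)  # ceiling division
    return admins, admins * cost_per_admin

for label, span in [("distributed", distributed_images_per_admin),
                    ("mainframe", mainframe_images_per_admin)]:
    admins, cost = annual_labor_cost(workload_images, span, loaded_cost_per_admin)
    print(f"{label}: {admins} administrators, ${cost:,.0f} per year")

Run with the assumptions above, the sketch works out to 14 administrators versus two; the point is not the particular numbers but the shape of the comparison.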

Moreover, with mainframes, you might just have a more compelling consolidation story to tell than you have in the distributed server world. Some companies I’m visiting are less than convinced that the road to green nirvana runs through virtualizing server sprawl and imploding several hundred servers into several tens of servers, with or without VMware. In fact, many are finding that the result of virtualization is a combination of unstable server operations and renewed server sprawl. Why not use mainframe LPARs instead, the murmuring goes, to create “bulletproof” tenant operating system environments that can be insulated from each other by technology that has been developed and proved over decades?
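For readers who have not worked with LPARs, a toy illustration may help. In PR/SM, relative weights determine each partition’s guaranteed share of the shared processor pool; the partition names, weights, and processor count below are hypothetical:

# Toy illustration of LPAR weights translating into guaranteed shares
# of a shared processor pool. Names and numbers are hypothetical.
lpar_weights = {
    "PRODDB": 500,
    "PRODAPP": 300,
    "TEST": 150,
    "DEV": 50,
}

shared_cps = 16  # assumed number of shared central processors
total_weight = sum(lpar_weights.values())

for lpar, weight in lpar_weights.items():
    share = weight / total_weight
    print(f"{lpar}: {share:.0%} of the pool (about {share * shared_cps:.1f} CPs)")

Each partition keeps its guaranteed slice no matter what its neighbors do, which is exactly the isolation property the murmuring is about.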

Then, there are the resiliency concerns that seem to be favoring mainframes again. Distributed servers can and do fail because a disk drive manufacturer somewhere wanted to save a few pennies on a vibration sensor, or because some PCI bus firmware wasn’t quite up to snuff, or a device driver failed to properly load. These are problems plaguing little iron, not big iron.

Finally, there’s the green thing: What compute platform consumes more energy—a z/OS mainframe or 800 servers to support 400 failover strategies?
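To put rough numbers behind that rhetorical question, here is one more sketch; the wattage figures are assumptions for illustration, not measurements, and cooling overhead is left out entirely:

# Rough annual energy comparison for the question above.
# Wattage figures are illustrative assumptions; cooling is excluded.
HOURS_PER_YEAR = 24 * 365

distributed_servers = 800     # 400 workloads plus 400 failover partners
watts_per_server = 400        # assumed average draw per distributed server
mainframe_watts = 15_000      # assumed draw for a single large z footprint

distributed_kwh = distributed_servers * watts_per_server * HOURS_PER_YEAR / 1000
mainframe_kwh = mainframe_watts * HOURS_PER_YEAR / 1000

print(f"Distributed farm: {distributed_kwh:,.0f} kWh per year")
print(f"Single mainframe: {mainframe_kwh:,.0f} kWh per year")
print(f"Ratio: {distributed_kwh / mainframe_kwh:.0f} to 1")

Under these assumptions the server farm draws on the order of 20 times the energy, and the gap only widens once cooling is counted.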

It’s a new year. Maybe common sense will prevail as we turn our attention to business value in the way we crunch our IT budget numbers. Your opinion is welcome at jtoigo@toigopartners.com. Z