For the last three years now, we’ve done primary research in the mainframe market to understand where things are headed for the IBM System z platform. This has taken the form of a survey that touches approximately 1,000 respondents from some of the largest, most recognizable mainframe users around the world.
Each year, we pay particular attention to this question: “For every dollar spent on your mainframe operations, how much of it is allocated to software, hardware, labor, and other (e.g., facilities, energy, etc.)?” Our respondents report that software is always the largest line item and also one of the largest growth areas (in contrast with distributed environments, where labor is consistently reported as the biggest cost). With software costs large and visible, they become the focal point for the inevitable cost-cutting hammers that finance guys (with their spiffy spreadsheets and limited understanding of the underlying story) like to wield.
This raises the question, “Is mainframe software too expensive?” We’d argue that no, it isn’t. Moreover, shifting more of the budget into software from the hardware and labor categories is likely to result in even better overall Total Cost of Ownership (TCO) for large, compute-intensive data centers.
Before we argue about the budget mix, however, we must first dispose of the argument that “the mainframe itself is too expensive.” We must argue that, for a given workload, the mainframe is a better alternative than solutions based on other platforms.
Let’s assume the value of the workload is constant and the business has justified deploying the application. The only issue in question is whether the application portfolio is most cost-effectively hosted on a mainframe or in a distributed systems environment.
To open our assault on blatant falsehoods concerning the “unreasonable expense” of the mainframe platform, here’s a quote from Gerry Shacter in PC Week from 1992:
“Mainframes are big for one reason. It was too hard to manage a lot of small ones. PCs will get bigger, too, until there are no more because the one that’s left will be so big, it will be able to handle all the work. It will be called a mainframe, and these upstart programmers will think they invented it.
“One day, the management of the Fortune 500 will wake up and look upon its thousands of PCs and wonder what they are all doing, finally realizing that it would be much easier to replace them all with one mainframe and then wonder what it’s doing.”
Within these rather amusing predictions, we find two truths. First, for large, resource-intensive workloads, economies of scale apply. Second, the labor required to manage many little things attempting to work together is far higher than the labor required to manage one big, centrally managed thing doing a lot more work.
While it’s unreasonable to assert that all low-end systems will eventually morph into one giant mainframe (despite attempts by folks such as Amazon to create a monolithic “cloud”), we’re on pretty safe ground in saying that the case is quite clear for data centers running massive databases with high transaction rates, mixed workloads, and extreme security and availability requirements (banks, major corporations, and the like). The annual mainframe survey referenced earlier shows these are exactly the data centers that are growing mainframe capacity and adding new workloads, thus driving larger and larger configurations.