IT Management

Though separated by only an innocuous letter “a,” the difference between “idle” and “ideal” when it comes to data center iron is profound, especially when you’re talking about mainframes. Thus, as a dyed-in-the-blue mainframer, I was taken aback to see a recent statement that unequivocally asserted that “Mainframes are idle 40 percent of the time”! Like many of you, I, too, would have taken this comment with the obligatory grain of salt had it been made by the likes of Sun, Microsoft, Unisys, or Intel. What made it truly tantalizing, however, was that it was being actively promoted by the company with the largest vested interest in mainframes: IBM.
 

This thought-provoking claim appeared in an IBM tutorial on grid computing, titled “New to Grid Computing,” in the developerWorks section of IBM’s website. I printed a copy of the tutorial for the record, just to be on the safe side. I had visions that somebody high up in IBM’s mainframe group might feel this wasn’t exactly the best publicity for the zSeries, though, you have to admit, it sure makes you think very seriously about IBM’s new capacity on-demand options and Workload License Charges (WLC). In reality, IBM’s pSeries marketing folks may also be scratching their heads, since the very next statement in the IBM piece said: “Unix servers are actually ‘serving’ something less than 10 percent of the time.”


These are truly dynamite statements, and I’m puzzled as to why IBM even bothered to develop the muscle-bound POWER5 processor if Unix systems are so underutilized. Luck was on my side: A few days later, I had a chance to meet with IBM’s vice president for Grid Computing, Ken King, at the GlobusWorld show in Boston, and I asked him to explain the significance of these statements.

To set the stage, I started by showing him my by-now very topical “Deep Blue” column from the February/March issue, where I talked about “PCgate.” Then I asked him whether these claims of gross server underutilization were an indictment of IBM or the fault of IT professionals like us. Well, I’ll let you guess who got the blame. Ken, however, was quick to point out that this was probably the best that could have been achieved with the technology we’ve had for the last 20 years, and that this is what IBM intends to fix with its global on-demand initiative, not to mention grid computing. Well, this got me thinking even more about IBM and grid computing, since some of you may have noticed that we’ve yet to hear anything significant about grid computing and key IBM infrastructure products such as CICS and IMS. Yes, I talked to Ken about that, too, but that’s another story for a later date. At this juncture, just in case you missed it, it’s worth noting that Sun, just ahead of that GlobusWorld show (whose focus was grid computing), stole a march on IBM by announcing its $1 per CPU-hour “computing as a utility” service.

It’s time for an analogy: the use of private vehicles over public transportation to get to work, especially in locales with relatively good public transport. Within this analogy, grid computing becomes carpooling. It sure mitigates the problem, but is, at best, still only a partial solution. And I’m beginning to think the same will be true of grid computing vis-à-vis the data center. As I said in my last column, and as Ken confirmed, server-side technologies, in particular virtualization, now give us powerful tools for redistributing workloads. But I worry that alone isn’t enough. We need to rethink and rework our entire enterprise computing model around shared resources. That, however, isn’t going to happen very quickly, despite its promise of enormous cost savings. It’s back to that analogy and issues such as control, convenience, inertia, and even security.

There are some potential short-term fixes, and IBM can certainly help us here, independent of all this grid stuff. For starters, if mainframes really are idle 40 percent of the time, it’s totally incumbent on IBM to give us better granularity (and pricing) on the “On/Off Capacity on Demand” feature. The 24-hour window, in my opinion, is no longer appropriate. I think a major part of the problem is that, with the Web, our usage patterns have changed. We are doing more online transaction processing and less batch processing. Thus, we are provisioning our servers to handle the interactive load, which, despite globalization, is still heaviest when North America is at work. Moving Linux workloads to the mainframe, though it reduces complexity, won’t, in my opinion, dramatically improve server utilization, especially if Unix/Linux servers are only active 10 percent of the time.
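To make the granularity argument concrete, here is a minimal back-of-the-envelope sketch in Python. The 12-hour busy window and the per-hour rate are purely hypothetical numbers chosen for illustration; they are not IBM’s actual On/Off Capacity on Demand or WLC pricing.

    # Illustrative comparison of billing granularity for temporary capacity.
    # All figures below are hypothetical, not actual IBM pricing.

    HOURS_PER_DAY = 24
    busy_hours = 12              # assumed daily peak ("North America at work") window
    rate_per_engine_hour = 1.0   # notional cost per hour of temporarily enabled capacity

    # Coarse granularity: a 24-hour minimum increment means the extra capacity
    # is billed for the full day, idle hours included.
    cost_24h_window = HOURS_PER_DAY * rate_per_engine_hour

    # Fine granularity: billing tracks only the hours the capacity actually works.
    cost_hourly = busy_hours * rate_per_engine_hour

    savings = 1 - cost_hourly / cost_24h_window
    print(f"24-hour window: {cost_24h_window:.2f}  hourly: {cost_hourly:.2f}  "
          f"savings: {savings:.0%}")

Even with these toy numbers, letting the billing follow the workload rather than the calendar cuts the cost of that temporary capacity in half, which is exactly the kind of saving finer granularity could unlock.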

The bottom line is that we have a major problem on our hands, and IBM has decided to take it well beyond what I had originally referred to as PCgate. Grid computing is just part of the solution. But first we need to appreciate the magnitude of the overall underutilization of computing resources. Then we can start to see what can be done, one step at a time, to solve this problem.