It’s a little odd to start a column (a brand-new one, no less) with a discussion of previous work, but it’s worth observing that much of what’s happening in the IBM community around the latest “green” initiative is, well, not new. Those who’ve heard my rant about prior art and the lack of basic library research in the current practice of IT (if you haven’t, it’s worth the trip to the next HillGang VM users group meeting; to learn more, email firstname.lastname@example.org) know the basic premise of data center design: optimize the utilization of space, environmentals, and location to maximize the benefit of IT to the business it supports. The current buzz over “green” data center design is a strong hint that somewhere along the way we lost track of that goal and allowed it to be buried in the onslaught of server sprawl encouraged by the rise of the discrete machine culture.

The concepts and ideas behind “green” design aren’t new at all; they’ve been an effective part of good design patterns for decades. We’ve optimized airflow for decades, and the “revolutionary” temperature gradient management techniques? Yawn. We’ve seen water and silicone coolant techniques before, and I still have my plumber’s key from the last 3081 I had the privilege to work with. What we’re seeing is a new packaging of old techniques, plus some new enablers that hand those techniques over to programmatic control.
An interesting intersection with the Linux and virtualization community is how heavily IBM is relying on Linux virtual machines to control and manage the environmental elements of this phenomenon. The heavily virtualized System z server is an excellent starting point for fixed-consumption unit planning in the data center: the ability to predict a flat power and environmental consumption figure, whether you’re running one virtual server or thousands, is very desirable when implementing a conservation pattern. Many of the “new” power management and environmental monitoring techniques are already conceptualized in Linux implementations, which is great for integration and for data center-wide control infrastructure, but there are some startling gaps, and some decisions that seem (to me) to miss part of the point.
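The fixed-consumption planning point can be shown with a toy calculation. This is only a sketch of the planning arithmetic; all wattage figures and function names below are my own illustrative assumptions, not vendor numbers. Discrete servers draw power roughly linearly in the server count, while a virtualized host presents one flat envelope no matter how many guests it carries:

```python
# Illustrative capacity-planning sketch. The wattage constants are
# assumptions chosen for the example, not measured or published figures.

DISCRETE_WATTS_PER_SERVER = 400   # assumed draw of one discrete server
VIRTUALIZED_HOST_WATTS = 8_000    # assumed flat envelope of one large host

def discrete_power(n_servers: int) -> int:
    """Power grows linearly with server sprawl."""
    return n_servers * DISCRETE_WATTS_PER_SERVER

def virtualized_power(n_guests: int) -> int:
    """Power is a fixed envelope, whether one guest runs or a thousand."""
    return VIRTUALIZED_HOST_WATTS  # independent of n_guests

for n in (10, 100, 1_000):
    print(f"{n:>5} servers: discrete {discrete_power(n):>7} W, "
          f"virtualized {virtualized_power(n):>5} W")
```

The planner’s advantage is the constant on the right-hand side: with a flat envelope, the power and cooling budget can be committed once, before anyone knows how many guests the host will eventually carry.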
What confuses me is the continued dependence on z/OS-based infrastructure to control some critical elements. It’s been clear for decades that IBM isn’t interested in delivering the ability to control partition weights and IRD-style operations outside the z/OS world, but with the increasing use of z/VM and Linux, it’s not a question of whether the VM/Linux systems will need control of these knobs without the imposition of a z/OS guest or LPAR, but when. The two-stage nature of System z virtualization (LPAR at a gross level, z/VM at a micro level) is desirable, but without access to the management controls, the requirement for z/OS will continue to render the solution unattractive.
In any case, it’s a topic that bears watching. Time will tell whether IBM succeeds in remarketing this “new” idea, even if it is recycled.