I’m just back from WAVV, and a grand time was had by all. More than 150 customers, vendors, and IBMers spent four days in Covington, KY (just outside Cincinnati), talking about z/VM, z/VSE, and Linux on System z. Although WAVV may not hold its conventions in the hot spots of the convention universe, it’s an all-volunteer organization that delivers a great bargain for System z education. You can also use it as a vehicle to tell IBM—and other vendors—what features and functions you’d like to see in upgrades and future products. WAVV has a very slick requirements process; to learn more, visit www.wavv.org. (Be sure to see Pete Clark’s column in this issue, as he discusses some recent changes to the requirements process.)
Another highlight of WAVV was hearing Dr. Karl-Heinz Strassemeyer (the IBM Fellow who shepherded the Linux development project inside IBM) talk about the future of System z computing. After about 25 minutes of hard-core, semiconductor physics goodies, he laid out several interesting visions of what happens next with large systems. The idea of specialty accelerators seems to be in the plan; the System/360 concentrated on producing a general-purpose computer by trying to equally balance CPU performance, memory bandwidth, and I/O capacity, but there will always be special problems that need different trade-offs. Adding specialized processing units for things such as business intelligence analysis or trend-spotting engages the whole idea of cooperative processing with Intel and Power systems to do the “right tool, right job” thing. Without spoiling Dr. Strassemeyer’s fun, I recommend you ask your IBM rep to have an IBMer come talk about the future of System z—it’s a doozie.
Two other items popped up on the radar that sounded interesting to share. First, the initial Red Hat Enterprise Linux version 6 betas are available from Red Hat’s website at www.redhat.com. The latest version of RHEL, which includes several new and interesting open source packages and a lot of upgraded packages, seems to be catching up with the Novell crowd on integrating the latest and greatest from IBM and others. We’ve just started testing it, and so far it looks like a fairly significant improvement over RHEL 5. We’ll know more in a couple of weeks when we finish the test runs.
Second, we’ve been playing with a very interesting tool for managing server provisioning named xCAT (the Extreme Cloud Administration Toolkit); this tool was recently open-sourced by IBM and is available at http://xcat.sourceforge.net. xCAT provides a set of tools to set up management and provisioning of servers, create and deploy physical or virtual machine system images, and remotely manage those systems in a straightforward way. It includes lots of hooks for local code and provides a nifty configuration database of systems that have been built, deployed, and destroyed over the lifetime of the tool. IBM originally developed xCAT to deploy large clusters and grid computing complexes, but it’s proved to be a neat way to build and deploy virtual servers on System z as well. IBM is still working on official System z support, but we’ve been trying out a number of features and it’s pretty cool. It’s definitely worth reading the docs to get a sense of the kinds of things you need to know and do to manage a large Linux infrastructure.
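To give a flavor of what working with xCAT looks like, here’s a rough sketch of a typical provisioning session using its standard commands (mkdef, lsdef, nodeset, rpower). The node and group names are invented for illustration, and the exact attributes and options vary by release and platform, so treat this as a taste rather than a recipe and check the xCAT docs before trying it:

```shell
# Define a new node in xCAT's configuration database.
# (lnx01, the groups, and the IP address are made-up examples.)
mkdef -t node lnx01 groups=linux,all ip=10.0.0.101

# Inspect what the database now knows about the node.
lsdef lnx01

# Mark the node for OS installation on its next boot.
nodeset lnx01 install

# Remotely power-cycle the node so it network-boots and installs.
rpower lnx01 boot

# Later, query power state across the whole "linux" group at once.
rpower linux stat
```

The nice part is that everything flows through that central database, so the same handful of commands scale from one guest to a whole group of them.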
Next issue, I’ll talk a bit more about our test application of xCAT on Linux on z and some late news on the Sun/Oracle Linux front that’s developing apace.