The almost-deafening noise around “utility computing” is inversely proportional to the availability of any real technology to support such a concept. Advocates are full of sound and fury, to paraphrase the Bard, but they have no fist in their glove.
There, I said it.
I’ve been an avid watcher of grid computing and “superclustering”— architectures that should provide some sort of basis for utility computing—for the past several years. Like anyone with a nerdy bent, I’m amazed at how much mileage all those poorly funded government research laboratories and collegiate technology institutes have managed to get out of a little commodity off-the-shelf gear and a lot of chutzpah. The ability to build supercomputers out of Linux boxes is truly amazing, and getting all the little servers “singing on the same sheet of music” to parse out the logical complexities required to visualize everything from a warhead detonation to a weather pattern is a remarkable achievement in anyone’s book.
However, while Deep Blue-style, math-intensive applications may be well served by such platforms, the truth is that most accounting and human resources systems deployed today aren’t really designed to take advantage of them. I’m not aware of any development plans at Oracle, SAP, Microsoft, or anywhere else aimed at porting business applications to massively parallel architectures.
Even if mainstream applications could be hosted on utility architecture, the truth is that the management software stack required wouldn’t be there. Most of our current management technology is not “application facing,” but rather “resource facing.” Comparatively little work has been done to characterize applications based on the combination of CPU, network, and storage resources that will enable their optimal performance. If it had been, we would be rolling around on the floor, laughing at PowerPoint slides depicting utility infrastructure as “plumbing.” It doesn’t take a degree from MIT to realize that not every application is content with a simple drink of water; some want Gatorade or a frou-frou drink with a little umbrella, or a tall, cool beer.
And while we’re on the subject of beer, consider this fact: Even lousy, low-end, grocery store beer has a “stale by” date, but chances are very good that your mission-critical data doesn’t. That’s because even less work has been done to characterize the data produced by applications—their access, retention, confidentiality, and security requirements, and even their “stale by” dates, remain a mystery.
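To make the point concrete, here is a minimal sketch of what an “application-facing” profile might look like if anyone bothered to build one. Every field, name, and value below is hypothetical—no such standard schema exists, which is exactly the problem:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class AppProfile:
    """Hypothetical resource characterization of one business application."""
    name: str
    cpu_cores: int   # sustained compute demand
    net_mbps: int    # network bandwidth needed for acceptable latency
    iops: int        # storage I/O operations per second

@dataclass
class DataProfile:
    """Hypothetical characterization of the data an application produces."""
    access_pattern: str     # e.g., "random-read-heavy"
    retention: timedelta    # how long the data must be kept
    confidential: bool      # does it need access controls and encryption?
    stale_by: timedelta     # after this, the data is no longer current

# Not every application is content with a simple drink of water:
erp = AppProfile("ERP", cpu_cores=8, net_mbps=100, iops=5000)
erp_data = DataProfile("random-read-heavy",
                       retention=timedelta(days=2555),   # ~7 years
                       confidential=True,
                       stale_by=timedelta(days=30))
```

Until management software can reason about records like these—rather than about raw CPUs, ports, and spindles—“one-size-fits-all” utility plumbing has nothing to provision against.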
This lack of application- or data-facing management technology is the bugaboo of utility computing. It flies in the face of the “one-size-fits-all” assumptions underlying most arguments advanced by utility computing advocates. And it exposes them for what most of them actually are: thinly veiled efforts to lock customers into a proprietary vendor architecture, or lynchpins of a poorly formed business case for outsourcing technology to a third-party service provider.
Like it or not, our current application set in business involves a diverse combination of software products and architectures, each making different demands on our infrastructure. Just as most “backup everything” or “SAN everything” or “network everything” or “database everything” strategies have failed to deliver on their business value proposition, so, too, are most utility computing strategies beset with half-baked vision and value.
Is there any place for the utility computing constructs in development today—outside of DoD, the National Weather Service, or the Search for Extra-Terrestrial Intelligence, I mean? Sure there is.
Parallelism is one ingredient for reducing latency. Parallel engines inside storage virtualization appliances can enable faster I/O throughput. Parallel load-balancers can alleviate bandwidth chokepoints. Parallel caches can establish a single global instance of data, alleviating the need for replication and wasted storage capacity.
Here’s an idea: Let’s think about using the output of the High Performance Computing Center at the University of New Mexico and other hotbeds of utility architecture to advantage today—not as a panacea for business application hosting, but as another enabling layer of infrastructure technology.
As Sigmund Freud once quipped, “Sometimes a banana is just a banana.”