Given the hype in the trade press, you would think that server virtualization is another “2.0” trend— like Web 2.0 or Infrastructure 2.0. Yet, the facts tell me a different story.
At the high end of analyst estimates, only about 27 percent of companies are pursuing a server virtualization strategy—not the majority as you might assume from the hype, but a fairly compelling minority. Of these, only about 37 percent are leveraging VMware to virtualize servers on x86 platforms.
For the most part, the servers being virtualized aren’t hosting apps with high-performance requirements. Unscientific polls and use cases reported in the trades suggest that virtualization remains a consolidation play for low-performance, low-utilization file and print servers and low-traffic Web hosts. VMware would probably disagree, but there’s no big push by companies to replatform database servers, large email systems, or high-traffic Web hosts into its virtual machine environment: x86, even on its best day, doesn’t parcel out resources with sufficient alacrity to support high-performance apps.
Another reason I’m reluctant to jump on the virtualization bandwagon is that the winning technology—what will become the de facto standard—has yet to appear. The current crop of software products seems to lack what marketing folks call “sustainable differentiation.” With competitors to the redoubtable VMware offering appearing weekly, some proffering significant cost and performance advantages over the EMC-controlled server virtualization vendor, the ultimate winner of the “hypervisor” war remains to be determined.
Then there’s the price tag. Being generous and accepting the claim that up to 20 physical servers can be virtualized on a high-end x86 platform, we’re looking at roughly $15K to $25K for the multiprocessor physical host, plus VMware licenses at $3K to $6K per processor, depending on software options. Virtualizing servers isn’t cheap: at the high end, you’re looking at nearly $3,000 per virtual machine. Add in the training requirements for support staff, and the fact that, in most cases, decommissioned servers whose apps have been virtualized tend to be “re-purposed” (that is, placed back into use by the company for hosting other stuff), and any purported cost savings seem ethereal to me.
By contrast, IBM’s latest entry into the mainframe market, the z10 Enterprise Class (EC), provides the capability to virtualize up to 1,500 x86 servers using logical partitioning technology (LPARs) at a price of roughly $600 per server. With claims such as these topping its z10 press releases, IBM seems to want to position the mainframe as the superior virtualization solution. But I’m a bit skeptical of this claim, too.
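The back-of-envelope arithmetic behind these two per-VM figures can be sketched as follows. Note that the four-socket host and the choice of high-end prices are my assumptions, not figures from the vendors, so the x86 result is illustrative only; the $600-per-server z10 number is IBM’s claim, quoted directly rather than derived.

```python
# Rough cost-per-VM comparison using the figures cited in the column.
# Assumed (not from the column): a 4-socket x86 host at the high end
# of each quoted price range.

def cost_per_vm(platform_cost, license_cost, vms):
    """Total acquisition cost spread across the virtual machines hosted."""
    return (platform_cost + license_cost) / vms

# x86: $25K high-end host, VMware at $6K per processor across 4 sockets,
# and the generous claim of 20 virtual servers per physical host.
x86_per_vm = cost_per_vm(25_000, 4 * 6_000, 20)

# z10 EC: IBM's claim of up to 1,500 virtualized x86 servers works out
# to roughly $600 per server, per its press releases.
z10_per_vm = 600

print(f"x86 hypervisor: ~${x86_per_vm:,.0f} per VM")
print(f"z10 EC (LPAR):  ~${z10_per_vm:,.0f} per VM")
```

With these assumptions the x86 figure lands in the $2,000–$3,000 range, consistent with the "nearly $3,000 per virtual machine" cited above; different socket counts or license options move it up or down.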
Yes, on its face the mainframe provides a superior solution to x86 hypervisors. LPAR technology is seasoned and bulletproof. Sysplex ensures you won’t lose all your LPARs if a processor fails—another resource picks up the load. Score one for the mainframe.
Mainframe operations personnel are costly, but for companies that already have an investment in technology-savvy staff, this cost is a wash. You don’t need to establish new capabilities with a hard-to-come-by operating expenditure (OPEX) budget. Score two for the mainframe.
The real issue comes down to one of objectives. Since the preponderance of servers being consolidated on x86 these days are file servers and low-traffic Web hosts, you need to ask yourself whether the mainframe really is the platform you want to use to host these apps and their data. Conversely, if you want to virtualize performance-hungry apps—databases, high-performance transaction processing apps, high-traffic Web servers—maybe the mainframe is the preferable platform for reasons of its superior resiliency and resource management.
Somewhere along the way, data criticality must factor into your choice of a virtualization platform. Let’s agree that mainframe resources are ideal for hosting mission-critical applications and data. By this logic, using LPARs to virtualize a bunch of file servers whose data may be less frequently accessed and less mission-critical makes sense to me only if you are pursuing some other objective—such as wrangling data into a better-managed storage tiering scheme (using DFHSM, for example, to free up disk space and to leverage mainframe-grade tape resources more efficiently). While mainframe DASD is more expensive than certain “open systems” storage platforms, that acquisition cost difference is eroded by the higher utilization efficiency of mainframe storage, a product of the de facto DASD standards imposed by the operating system and of tools such as DFSMS that have no peer in the open systems world.
Again, from a simple analysis of cost, resiliency characteristics, and data management efficiency, mainframe-centric virtualization seems to make sense. Your views are welcome.