
Jan 20 ’12

One of IBM’s best-kept secrets is that information technology buyers can save more than a million dollars by deploying workloads with heavy Input/Output (I/O) on a mainframe rather than on a group of x86-based, multi-core blade servers. Benchmark data shows that a large-scale architecture such as a mainframe doesn’t require as much headroom, or spare capacity, as smaller systems to execute heavy I/O workloads. As a result, a mainframe can run 240 virtual machines, compared with about 10 virtual machines per blade on a typical 8-core Intel system. (In this comparison, both systems run the same workload at the same service level.)
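To make that consolidation ratio concrete, here is a minimal sizing sketch in Python using only the figures quoted in this article (240 virtual machines, roughly 10 per blade, 8 cores per blade); the variable names and the sketch itself are ours, not part of IBM's benchmark.

```python
import math

# Figures quoted in the article.
total_vms = 240        # virtual machines consolidated on one mainframe
vms_per_blade = 10     # roughly 10 VMs per blade for the same heavy-I/O workload
cores_per_blade = 8    # typical 8-core Intel blade

blades_needed = math.ceil(total_vms / vms_per_blade)  # 24 blades
total_cores = blades_needed * cores_per_blade         # 192 cores

print(f"x86 footprint for the same workload: {blades_needed} blades, {total_cores} cores")
```

Those 24 blades and 192 cores are exactly the x86 environment costed out below.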

The price of running a heavy I/O workload on 240 virtual machines on a mainframe (at 70 percent CPU utilization with a high-reliability service-level profile) should be approximately $3.3 million. The cost of running the same environment (240 virtual machines on 24 blades with 192 cores) on a group of Nehalem EP-based Intel Xeon blade servers is approximately $4.8 million (see Figure 1). The IBM hardware costs significantly more than the Intel hardware, but when software licenses (usually charged per CPU core) are rolled in, the numbers change radically to favor Linux on the mainframe. Accordingly, choosing an IBM System z as a Linux/cloud consolidation server has the potential to save IT buyers more than a million dollars!
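The licensing effect is easy to model. The sketch below is ours, and every dollar figure in it is a hypothetical placeholder rather than an IBM or Intel price; the point is structural: software charged per core scales with 192 x86 cores, so a cheaper hardware bill can still produce a larger total.

```python
# Illustrative cost model. All dollar amounts are invented placeholders,
# used only to show how per-core licensing across 192 cores can outweigh
# a lower hardware acquisition price.

def three_year_cost(hardware, cores, license_per_core_per_year, years=3):
    """Hardware acquisition plus per-core software subscription over `years`."""
    return hardware + cores * license_per_core_per_year * years

# Hypothetical inputs for the two environments compared in Figure 1.
mainframe = three_year_cost(hardware=1_500_000, cores=12,
                            license_per_core_per_year=30_000)
x86_blades = three_year_cost(hardware=400_000, cores=192,
                             license_per_core_per_year=7_000)

print(f"mainframe : ${mainframe:,.0f}")
print(f"x86 blades: ${x86_blades:,.0f}")
# Even with far cheaper hardware, the 192-core license bill dominates the total.
```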

How Is This Possible?

The whole design point of large-scale mainframe architecture is based on sharing resources. The mainframe is known as a “shared everything” architecture. Mainframes share memory, a large internal communications bus, central processing units, disk, and more. In a scale-up, shared environment, all of these resources can be made available to a common pool—all within a single chassis, or self-contained mainframe architecture.

Demand for resources in this pool is constantly fluctuating (usage peaks and valleys), just as in a distributed environment; however, the peaks and valleys tend to balance out better in a large-scale mainframe resource pool.
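Why do peaks and valleys balance out in a shared pool? A quick Monte Carlo sketch (ours, not part of IBM's benchmark, with invented demand numbers) illustrates the effect: the peak of the combined demand is far smaller than the sum of the individual peaks, so a shared pool needs proportionally less spare capacity.

```python
import random

random.seed(42)
N_SERVERS = 24      # independent workloads, one per blade in the example above
SAMPLES = 10_000    # simulated time intervals

def demand():
    """Hypothetical bursty demand for one workload, as a fraction of a core."""
    # 95% of the time: modest load around 30%; 5% of the time: a spike to 100%.
    return max(0.0, random.gauss(0.30, 0.15)) if random.random() > 0.05 else 1.0

per_server = [[demand() for _ in range(SAMPLES)] for _ in range(N_SERVERS)]

# Capacity needed if every server must carry its own worst-case headroom...
isolated_capacity = sum(max(series) for series in per_server)
# ...versus capacity needed if one shared pool only covers the combined peak.
pooled_capacity = max(sum(series[t] for series in per_server) for t in range(SAMPLES))

print(f"capacity with per-server headroom: {isolated_capacity:.1f} cores")
print(f"capacity with one shared pool:     {pooled_capacity:.1f} cores")
```

Because the workloads rarely peak at the same moment, the pooled figure comes out at roughly half the isolated one in this toy run, which is the intuition behind the mainframe's lower headroom requirement.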

Smaller-scale servers such as Intel Xeon multi-core blades can’t pool headroom this way. Resources such as CPU, memory, and I/O are bound within each server, so sharing resources means hopping across a network. Keeping track of where resources are is difficult enough, but when you add network congestion and latency problems, it’s easy to see why headroom issues occur. So, smaller-scale servers must be over-provisioned: more headroom has to be allocated on each server to handle its own usage peaks and valleys and to deal with network issues. This makes smaller servers (such as blades) less efficient.
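Put in utilization terms: if each blade has to be held at, say, 35 percent average CPU utilization to absorb its own peaks, while the shared mainframe pool can run safely at the 70 percent quoted earlier, the blade environment ends up provisioning roughly twice the raw capacity for the same work. The 35 percent figure below is an illustrative assumption; only the 70 percent target comes from the article.

```python
# Same steady-state demand, different safe utilization targets.
work_units = 100.0            # arbitrary units of delivered work

mainframe_target = 0.70       # utilization figure quoted earlier in the article
blade_target = 0.35           # illustrative: blades held lower to absorb local peaks

mainframe_capacity = work_units / mainframe_target   # ~143 units provisioned
blade_capacity = work_units / blade_target           # ~286 units provisioned

print(f"provisioned capacity, mainframe pool: {mainframe_capacity:.0f} units")
print(f"provisioned capacity, blade farm:     {blade_capacity:.0f} units")
```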

Where’s the Proof?

A benchmark study conducted by IBM’s Software Group Project Office (accessible at ftp://public.dhe.ibm.com/common/ssi/ecm/en/zsw03125usen/ZSW03125USEN.PDF) reveals the advantages of the mainframe. With regard to the report’s credibility, consider that:

  • IBM sells many x86 Xeon multi-core systems. It doesn’t help IBM to disparage x86 server platforms.
  • This study was done in 2009, before Nehalem EP (Intel’s first real Xeon multi-core server architecture) was released. So, you could argue that the report compares a mainframe to older Xeon architecture. However, to remedy this, we’ve supplied an updated graph (see Figure 2) based on more current Xeon architecture.
  • IT executives who use both architectures can and do verify the core principle of the report—that scale-up mainframe architecture manages headroom and capacity better than x86 servers.

It’s also important to understand the spirit of this benchmark. IBM engineers were looking for a way to explain why mainframes can host more virtual machines than smaller x86 multi-core environments. So they constructed a model that:
