High-Performance Computing (HPC) is suddenly hot, driven by the explosion of interest in Big Data and the need for analytic compute power to handle it. IBM dominates the supercomputing market, holding 52 percent share in the most recent year.
The star of IBM HPC is Blue Gene. It enables petaflop-scale performance while remaining efficient in terms of power, cooling, and floor space by using specially developed processors based on Power technology. The current Blue Gene doubles processor cores (IBM PowerPC) to four per node and adds four-way Symmetric Multiprocessing (SMP) functionality, hardware Direct Memory Access (DMA), 10GbE, and aggressive power management. Its design allows for the dense packing of processors, memory, and interconnects in packages ranging from one to 256 racks. Yet, the machine provides a standard programming environment that supports a wide range of IBM and open source software libraries and middleware.
IBM, however, is redefining what it means by HPC. In an effort to get away from the heavily scientific connotations of HPC, the new name at IBM is Technical Computing. In terms of platforms, Technical Computing at IBM spans almost the entire platform gamut from Blue Gene to POWER7 to System x and iDataPlex, including systems from its recent Platform Computing acquisition. At a recent briefing, IBM stepped back from talking about megaflops, the traditional measure of raw supercomputing power (millions of floating-point operations per second), and, more recently, petaflops.
On its new Technical Computing roadmap, IBM labels the very highest end exascale, which sits orders of magnitude beyond petaflops. Most new HPC users are aiming for something less extreme: business-oriented analytic computing driven mainly by Big Data. Where supercomputing in the past was compute-intensive, the new Technical Computing will also be data-intensive.
The middle of the supercomputing spectrum is where POWER7 and Power Systems come into play, as do x86-based systems running Intel's Sandy Bridge and Ivy Bridge processors. At the far end of that middle ground, IBM has placed POWER8 and POWER9. POWER8 is expected in 2013.
POWER7 is IBM's most powerful commercial CPU for Technical Computing. On the spec sheet, the new POWER7 p family looks impressive: up to eight cores, 32 threads, dynamic memory expansion, automated optimization, impressive SPECint benchmarks, massive software parallelization, support for AIX, Linux, and IBM i (formerly OS/400), and a good balance among cores, memory, bandwidth, cache, and more.
HPC action clearly is shifting to the analytics side, especially operational analytics. Operational analytics often entails real-time analysis; for example, while the customer is still on the phone with the call center or in the midst of an online transaction. Here the biggest challenge revolves around unstructured data, notes Bob Friske of IBM's Power Systems Analytics.
Here the organization must deal with high volumes of data pouring in from different sources, all of which must be absorbed in real time. This takes massive parallelism, which was designed into POWER. A single POWER processor has four cores, and with virtualization can present more. It also, adds Friske, has such capabilities as intelligent threads, which can be balanced based on the workload, and both cache and memory affinity, which can be locked to parts of the processor to boost performance. TurboCore mode allows you to shut off half the cores and divert all the Level 3 cache to the remaining cores for performance; MaxCore mode does exactly the opposite.
Intelligent threading is done by POWER dynamically as it recognizes the workload. TurboCore and MaxCore modes have to be set by IT, while memory and cache affinity must be written into the application.
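To make the affinity point concrete: on a Linux-based POWER (or any Linux) system, an application can pin its worker processes to specific logical CPUs so that cache and memory accesses stay local to a core. The sketch below is a minimal, hypothetical illustration using Python's standard `os.sched_setaffinity` and `multiprocessing`; the worker function and data split are illustrative stand-ins, not IBM code, and POWER-specific settings such as TurboCore/MaxCore are configured by IT outside the application, not here.

```python
# Minimal sketch: application-level CPU affinity plus data parallelism
# on Linux. Hypothetical example; real analytic work would replace sum().
import os
import multiprocessing as mp

def pinned_worker(cpu, chunk):
    # Pin this worker process to one logical CPU so its working set
    # tends to stay in that core's caches and local memory.
    os.sched_setaffinity(0, {cpu})
    return sum(chunk)  # stand-in for the real per-chunk analysis

if __name__ == "__main__":
    cpus = sorted(os.sched_getaffinity(0))   # logical CPUs we may use
    data = list(range(1000))
    # Stride-split the incoming data across the available CPUs.
    chunks = [data[i::len(cpus)] for i in range(len(cpus))]
    with mp.Pool(len(cpus)) as pool:
        partials = pool.starmap(pinned_worker, zip(cpus, chunks))
    print(sum(partials))  # 499500
```

The same idea can be applied from outside the application with tools such as `numactl` or `taskset`, but Friske's point stands: to get per-thread affinity right, the placement logic has to live in the application itself.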
The POWER server lineup offers an integrated approach to design, development, and testing. With Capacity on Demand, Hot-Node Add, and Active Memory Expansion, the servers ensure you can keep the most important applications available even while adding capacity to handle new business demands.
Big Data Analytics
China Telecom struggled to gain insight into its customers’ needs because its information was scattered across the business, making it difficult to identify trends. The company wanted to gather and analyze data to help it respond more quickly to marketplace changes and better communicate with customers—a typical Big Data business challenge.