
The power of High Performance Computing (HPC) is well-established, whether it’s applied to weather forecasting and climate analysis, genome modeling and analyses of neuron function, or material modeling for new generations of semiconductors. But when we think of HPC, we think of supercomputers, colleges, universities and research institutes. IBM has always been a major player in this space, so it came as no surprise when Sequoia, the IBM Blue Gene/Q system installed for the Department of Energy at Lawrence Livermore National Laboratory in Livermore, CA, was ranked the number one supercomputer in the world.

Of more immediate interest to enterprises, however, is the fact that IBM, with the finalization of its 2012 acquisition of Platform Computing, has moved aggressively to create an HPC solution capable of scaling to the needs and budgets of enterprises. Earlier in 2012, Helene Armitage, general manager of IBM Systems Software, said: “The acquisition of Platform Computing will help accelerate IBM's growth in smarter computing, a key initiative in IBM's Smarter Planet strategy, by extending the reach of our HPC offerings into the high growth segment of technical computing. ... Our intent is to enable clients to uncover insights from growing volumes of data so they can take actions that optimize business results.”

Just what are the needs for HPC presence in enterprise data centers today? IDC reports that the volume of data under management in enterprises is doubling every 18 months. Gartner cited data growth as a major data center challenge more than two years ago, and nothing has happened since to change that. On top of this, enterprise IT is being asked to deliver business analytics capable of breaking down and evaluating this data. In some cases, it’s also being asked to deliver new types of applications that allow the business to statistically model various financial risk scenarios or to virtually model product concepts before they take on physical form. Today, many more mainstream end-user applications need a high-performance infrastructure.

Traditional transaction servers aren’t designed for these workloads, and traditional HPC with supercomputers is too costly and specialized. Positioned between these two options is IBM Platform Computing, which offers pricing and resource scalability that fit more comfortably within enterprise data center dynamics and budgets.

Platform Computing vs. Traditional HPC

Platform Computing solutions use technologies (e.g., x86 hardware, Linux operating systems) that are already employed in enterprise IT infrastructures and that provide the scalability and adaptability needed for the non-transactional parallel computing required by HPC. They do this without the supercomputer-level investment in dollars and resources that characterizes large university and research institute installations, and they can be “rightsized” for the enterprise HPC applications that will benefit the business.

Today, these business needs are being felt in data centers in several ways:

• Compute demand continues to rise, fueled in part by x86-class server expansion.
• Compute and data in both transactional and Big Data applications need to be supported within the framework of a common IT infrastructure, with the ability to optimize mixed workloads for cost, performance and strategic benefit.
• IT needs an affordable, scalable way to implement HPC with a short time to market and little or no “wait time” while it learns HPC management skills.
• The HPC in the data center needs to be malleable as it scales so it can be delivered in cluster, grid or cloud frameworks, depending on the best way to deliver service to the business.

The Platform Computing products acquired by IBM come with a base of more than 2,000 global customers, including 23 of the 30 largest worldwide enterprises. Today, four Platform Computing software solution sets address varying enterprise HPC scenarios and needs:

- Platform LSF (Load Sharing Facility) Family: Scalable workload management software for demanding, mission-critical, heterogeneous computing environments. It includes workload policy automation and can scale out to thousands of concurrent users and jobs sharing a virtual pool of IT resources, whether bare metal or virtual, Linux or Windows (a brief job-submission sketch follows this list)
- Platform HPC: Purpose-built HPC management software for small to medium businesses that’s optimized for ease of purchase, deployment, management and use and is bundled with hardware systems
- Platform Symphony Family: High-throughput, low-latency compute and data-intensive workload management software delivering high performance and shared services for heterogeneous analytics and Big Data (MapReduce) applications
- Platform Cluster Manager: Provisioning and management of HPC clusters, including self-service creation and optimization of heterogeneous HPC clusters by multiple user groups.
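
To make the workload-management idea concrete, here is a minimal sketch, assuming a Linux cluster where the LSF command-line tools are installed, of submitting a job from Python by shelling out to LSF’s standard bsub command. The helper function, queue name and simulation binary are illustrative assumptions, not part of any IBM Platform Computing API.

import subprocess

def submit_lsf_job(command_args, queue="normal", slots=1, job_name="demo",
                   output_file="job_%J.out"):
    # Hypothetical helper: build a bsub command line and run it.
    bsub_cmd = [
        "bsub",
        "-q", queue,          # target queue
        "-n", str(slots),     # number of job slots (cores) requested
        "-J", job_name,       # job name, as later shown by bjobs
        "-o", output_file,    # output file; %J expands to the LSF job ID
    ] + list(command_args)    # the actual workload to run
    result = subprocess.run(bsub_cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()  # bsub's confirmation message

if __name__ == "__main__":
    # Example: request eight slots in a hypothetical "priority" queue for a
    # hypothetical simulation binary.
    print(submit_lsf_job(["./run_simulation", "--input", "model.dat"],
                         queue="priority", slots=8))

Shelling out to bsub is only one of several ways to drive LSF; the point is simply that jobs, queues and slot requests are the working vocabulary of this class of workload manager.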

What an enterprise ultimately chooses for its high-performance infrastructure approach depends on the nature of its workloads. Platform Computing offers the broadest set of integrated capabilities, so companies don’t need to patch together disparate commercial and/or open source point offerings. To cite a few examples:

• Red Bull Racing wanted to more fully exploit an expensive IT infrastructure for its worldwide racing competitions. With Platform LSF, it was able to intelligently schedule workloads and dynamically allocate hardware and software, based on business policies. Engineering simulations were easily accomplished with user-defined workflows. Overall system performance and throughput increased 20 percent.
• A small manufacturer wanted to run larger, more complex design simulations while also reducing time to results; it adopted purpose-built HPC management software to automate cluster management. The product design simulation lifecycle went from 28 days to two days.
• A large financial services provider with 14 separate business units sharing a global infrastructure wanted to make on-demand computing for more than 200 investment banking and analytics applications available to all of its business units. The organization implemented the Platform Symphony Family in a multitenant, shared grid capable of managing heterogeneous data- and compute-intensive applications.

Adding HPC to IT Infrastructure

IDC reports that worldwide factory revenue for the HPC technical server market increased 3.1 percent in the first quarter of 2012 to reach $2.4 billion, up from $2.3 billion in the same period of 2011. The research firm also forecast that HPC technical server revenue would grow at a Compound Annual Growth Rate (CAGR) of 7.3 percent to reach $14 billion by 2016.
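
As a quick arithmetic check using only the figures quoted above, the first-quarter numbers are self-consistent, and the 2016 forecast follows the standard compound-growth relation; the base-year revenue $R_0$ is left symbolic because the article does not state it:

\$2.3\,\text{B} \times (1 + 0.031) \approx \$2.37\,\text{B} \approx \$2.4\,\text{B}

R_{2016} = R_0 \,(1 + 0.073)^{n} \quad \text{after } n \text{ years of growth at a 7.3 percent CAGR}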
