Dec 18 ’08

IBM’s New Enterprise Data Center in Practice: Sparkassen Informatik

by Joe Clabby in Mainframe Executive

Due to poor architectural design practices, data centers all around the world have become highly inefficient and fragmented. Distributed systems architectures proliferate network access points, creating tens, hundreds, or even thousands of ports that need to be protected from security intrusions. Distributed servers are notoriously inefficient from a resource utilization perspective, often running at less than 20 percent of capacity. These underutilized servers waste energy through inefficient power supplies, a proliferation of network interface cards, and an overabundance of supporting network switches, hubs, and routers. Further, distributed systems architectures are rife with program-to-program interoperability issues, such as communicating between Microsoft .NET, Java 2 Enterprise Edition (J2EE), and legacy programming models, and these interoperability issues also hamper the efficiency of business processes.

To address these issues, last May IBM introduced a new model for data center design, called the New Enterprise Data Center (NEDC), that focuses on controlling data center costs through more efficient information systems and data center designs, as well as better resource use, improved program-to-program communications, and more advanced service and process flow management practices. The NEDC model calls for:

• Increased systems/storage consolidation
• Heavy use of virtual computing to exploit underutilized resources (Virtual computing is the logical pooling of physical resources so that unused capacity can be easily found and exploited; see the sketch after this list.)
• Implementation of a Service-Oriented Architecture (SOA)
• Deployment of automated systems and business process flow “service” management software.
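
To make the virtual computing bullet above concrete, the short Python sketch below models a logical pool over several physical hosts and places work wherever spare capacity exists. It is purely illustrative; the host names, capacity units, and first-fit placement policy are assumptions, not a description of any IBM virtualization product.

    # Illustrative sketch: "virtual computing" as a logical pool of physical hosts,
    # from which spare capacity can be found and exploited. All names and numbers
    # here are assumptions made for this example.
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        capacity: int   # assumed capacity units (e.g., CPU cores)
        used: int = 0

        @property
        def free(self) -> int:
            return self.capacity - self.used

    class ResourcePool:
        """Logical pool over physical hosts; placement ignores which box is which."""
        def __init__(self, hosts):
            self.hosts = hosts

        def place(self, demand: int) -> str:
            # Simple placement: put the workload on the host with the most spare capacity.
            for host in sorted(self.hosts, key=lambda h: h.free, reverse=True):
                if host.free >= demand:
                    host.used += demand
                    return host.name
            raise RuntimeError("no spare capacity in the pool")

    pool = ResourcePool([Host("box-a", 16), Host("box-b", 16), Host("box-c", 16)])
    print(pool.place(6))    # workload lands wherever free capacity exists
    print(pool.place(10))

The point of the sketch is the NEDC premise itself: once physical resources are treated as one logical pool, idle capacity anywhere in the pool becomes usable, rather than being stranded on individual boxes.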

IBM believes that, by redesigning data centers using the NEDC model, IT executives can collectively save billions of dollars annually. One IBM customer in Germany’s retail banking industry, Sparkassen Informatik (SI), is already following IBM’s NEDC advice and is saving millions of dollars annually in hardware, software, power, management, and services costs. This article describes SI’s business and information system environment in greater detail and shows how SI is saving millions and using the NEDC model for competitive advantage.

SI Background

Owned by the German Savings Bank Organization, SI is a highly successful provider of IT services to Germany’s retail banking industry. Through mergers and acquisitions, strong customer service, and attractive pricing, SI has grown into an organization that holds more than 30 percent of Germany’s retail banking services market and now supports more than 290 savings banks, 30 million banking customers, and 175,000 banking employees.

The senior executive in charge of SI’s production data centers and network environments is Uwe Katzenburg, Deputy Chairman of the Management Board (stellvertretender Vorsitzender der Geschäftsführung). SI has chosen IBM as its premier information systems and services supplier, and Katzenburg said he believes that choice has led to operational efficiency, especially in IT, which is key to SI’s profitability.

SI’s enterprise information systems are tuned to deliver maximum computing power without wasting computing cycles or energy. To deliver maximum computing performance, SI:

• Buys dense systems architectures (primarily large, scale-up systems and blades) that house dozens or hundreds of servers in compact systems enclosures or chassis. (By consolidating computing power into dense packages, the management of thousands of servers is greatly simplified, software licensing costs are reduced, and the need to install thousands of redundant failover servers on a one-to-one ratio is scaled back.)
• Virtualizes (logically pools physical computing resources) its enterprise-class servers to increase usage rates and reduce acquisition costs (though to date little virtual computing with x86 resources has occurred)
• Deploys advanced systems management software to automate systems/storage/network management, helping to reduce management labor costs
• Modernizes its data centers, adding new power management systems to feed its dense architectures, updating its Uninterruptible Power Supplies (UPSs) to deliver the exact amount of backup power required in case of failure, and adding new cooling facilities to efficiently dissipate the heat generated by its dense systems architectures, all while significantly scaling back the number of data centers it operates. (In an age when many enterprises are building more data centers, SI has downsized from nine to six consolidated data centers over the past five years.)

The Road to Operational Efficiency

As an IT service provider, SI recognizes that any money it can save in its own IT operations gets directly passed to the company’s bottom line as increased profitability. So one of the most important questions SI’s executive board (which consists of six IT and business executives) needed to address from the outset of this company’s formation was, “How can we improve overall information systems efficiency, while reducing waste?”

Some answers were obvious. Systems, storage, and network management can be labor-intensive and expensive. Further, management of distributed information systems can be extremely complex (finding and exploiting unused resources in a distributed environment can be a real challenge, as can securing all the access points in distributed systems architectures). SI’s executive board recognized that the company should automate the management of systems resources whenever possible to contain management costs.

Other opportunities to improve enterprise computing efficiency were, however, less obvious. For instance, is it more energy efficient to deploy hundreds of smaller servers, or dozens of large servers? SI realized that by consolidating many servers into fewer large servers running at higher usage rates, it consumes far less energy than with smaller servers running at 10 percent or so of capacity. (As proof of this concept, IBM recently announced its mainframe “gas gauge,” a measurement tool that has been used to show that certain mainframe configurations can process the workload of 250 x86-based Linux servers using only 10 to 12 percent of the energy consumed by those Linux servers.)
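
A quick back-of-the-envelope calculation shows the scale implied by that claim. The per-server wattage below is an assumed figure chosen only for illustration; the 250-server count and the 10 to 12 percent range are the numbers cited above.

    # Rough arithmetic behind the consolidation claim above.
    # The 300 W per distributed server figure is an assumption for illustration;
    # the 250-server and 10-12 percent figures are the ones cited in the article.
    servers = 250
    watts_per_server = 300                        # assumed average draw per x86 Linux server
    distributed_kw = servers * watts_per_server / 1000
    consolidated_kw_low = distributed_kw * 0.10   # 10 percent of the distributed energy
    consolidated_kw_high = distributed_kw * 0.12  # 12 percent of the distributed energy

    print(f"Distributed farm: {distributed_kw:.1f} kW")
    print(f"Consolidated equivalent: {consolidated_kw_low:.1f}-{consolidated_kw_high:.1f} kW")

Under those assumptions, a 75 kW server farm collapses to roughly 7.5 to 9 kW of consolidated load, which is the kind of difference that shows up directly in SI’s power and cooling bills.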

SI’s board decided to centralize management functions across fewer systems (as opposed to managing multiple, distributed, frequently underutilized servers). “Systems consolidation,” according to Katzenburg, “has become a key focal point for SI—and a matter of policy.”

SI’s Data Center Systems Environment

SI long ago realized distinct operational cost advantages by moving to a centralized, dense model of computing. To achieve these advantages, SI uses several different classes of dense servers, including System z mainframes, high-end and midrange IBM Power systems, Sun UltraSPARC-based servers, and occasionally scale-up x86 servers (such as IBM’s X4 System x servers). SI also deploys HP and Fujitsu blade servers to handle Windows serving.

Each type of dense server runs different types of workloads:

• The IBM System z is charged with running highly secure, transaction-intensive, COBOL-based workloads. (CICS transaction environments remain a more efficient way to process transactions in a tightly coupled, dense mainframe environment than using a myriad of distributed servers and databases.)
• IBM Power and Sun UltraSPARC servers run UNIX application workloads, most importantly SI’s OS Plus Portal application environment. SI could standardize on a single platform in the UNIX space, but has chosen to split its UNIX business to create a “healthy competitive environment” between Sun and IBM and to gain leverage on acquisition costs between the two vendors.
• Windows x86 servers run select custom branch applications and a full suite of client applications, and provide terminal services.

Why Mainframes? The Sysplex Anomaly

SI’s data centers mix distributed servers with blade servers and mainframes. SI will soon have 30 IBM System z (mainframe) footprints installed in 11 Sysplexes (large, clustered environments). This deployment of mainframes is anomalous; IT service providers usually base their compute offerings on 64-bit UNIX architectures, such as IBM Power and Sun UltraSPARC platforms, or on x86-based architectures. This anomaly deserves closer scrutiny.

Why did SI choose mainframes as a central architecture in its information systems environment? One factor is resource consolidation. Mainframes pack a lot of computing power into small real estate footprints as opposed to expansive, space-hogging, distributed server farms. By capitalizing on this dense systems packaging, SI has increased its processing capacity while reducing the number of large data centers it needs to run.

Another big driver for mainframe usage is resource virtualization. By condensing many servers into fewer servers, using the mainframe as a consolidation platform, and deploying virtual computing server environments on the mainframe, SI lowered its server count while increasing its server usage rate to the 90 percent-plus range. One of the reasons mainframes perform at such high utilization rates relates directly to the mainframe’s strong support for virtual computing. System z architecture has supported virtual computing for almost 40 years, and features several advanced management capabilities, including Logical Partitions (LPARs), advanced memory management, and the ability to support thousands of virtual machines per system. By contrast, x86-based servers offer comparatively basic support for virtual computing.
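
The arithmetic behind that utilization jump is worth sketching. In the illustrative Python below, every input is an assumption except the roughly 90 percent utilization target reported above; the point is simply that only a small fraction of a distributed farm’s installed capacity is doing real work, so far less consolidated capacity is needed to carry it.

    # Illustrative consolidation arithmetic only; the farm size, per-server capacity,
    # and 15 percent average utilization are assumptions. The ~90 percent target is
    # the utilization range SI reports for its consolidated systems.
    small_servers = 100
    small_capacity_units = 4          # assumed capacity units per distributed server
    small_utilization = 0.15          # assumed average utilization of the farm

    active_work = small_servers * small_capacity_units * small_utilization
    target_utilization = 0.90
    consolidated_capacity_needed = active_work / target_utilization

    print(f"Installed capacity in the farm: {small_servers * small_capacity_units} units")
    print(f"Real work being done: {active_work:.0f} units")
    print(f"Consolidated capacity needed at 90% utilization: {consolidated_capacity_needed:.0f} units")

Under those assumptions, 400 units of installed distributed capacity carry only 60 units of real work, which a consolidated platform of roughly 67 units could absorb while running in the 90 percent range.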

Still another driver is mainframe management. Centralized management across consolidated systems was an important focal point in SI’s efforts to control costs. Mainframe architecture has been built around the concept of centralized management and offers the industry’s richest centralized systems management and centralized virtual systems management environments.

A core tenet of SI’s business is reliable delivery of computing capacity. Mainframes still provide the highest Mean Time Between Failures (MTBF) in the industry while offering almost limitless expansion capacity. From a system design perspective, System z mainframes are ideal for meeting SI’s reliability requirements and capacity needs. SI’s System z mainframes regularly operate at close to 100 percent capacity while handling huge transaction volumes in a consistent, reliable, and secure manner. Some of SI’s UNIX servers operate at 85 percent utilization rates, while most of SI’s x86-based servers operate at lower utilization rates.
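
MTBF translates into delivered availability in a simple way. The steady-state estimate below uses hypothetical MTBF and repair-time values chosen only to show the arithmetic; they are not figures reported by SI or IBM.

    # Why MTBF matters to a service provider: steady-state availability estimate.
    # Both input values are hypothetical, chosen only to demonstrate the formula.
    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Steady-state availability = MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # e.g., a 50,000-hour MTBF with a 4-hour mean time to repair
    print(f"{availability(mtbf_hours=50_000, mttr_hours=4):.5%}")   # about 99.992%

The longer the interval between failures, and the faster a failure can be repaired, the closer delivered availability gets to 100 percent, which is exactly what a provider selling reliable capacity is paid for.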

The combination of all these drivers has had a positive effect on overall computing efficiency, but it has also put some stress on data center designs, because denser systems generate more heat that must be dissipated. To deal with heat dissipation, SI has modernized some of the devices in its data centers, such as its chillers; its Heating, Ventilating, and Air Conditioning (HVAC) systems; and its UPSs, but it hasn’t yet reached the point where it needs to water cool its data centers. (Note: Water cooling has a 3,000:1 cooling advantage over air and may become a desirable cooling alternative for SI at some future date.)
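
As a rough guide to what “more heat that must be dissipated” means for the cooling plant, the calculation below converts IT load into cooling demand. The 50 kW load figure is an assumption for illustration; the watts-to-BTU/hr conversion and the 12,000 BTU/hr per ton of refrigeration are standard engineering constants, not SI-specific numbers.

    # Rough sizing of the cooling load created by dense systems.
    # The 50 kW figure for a row of dense racks is an assumption; essentially all
    # electrical power drawn by IT equipment ends up as heat to be removed.
    it_load_kw = 50
    btu_per_hour = it_load_kw * 1000 * 3.412     # 1 W of load ~= 3.412 BTU/hr of heat
    tons_of_cooling = btu_per_hour / 12_000      # 1 ton of refrigeration = 12,000 BTU/hr

    print(f"{it_load_kw} kW of IT load -> {btu_per_hour:,.0f} BTU/hr "
          f"(about {tons_of_cooling:.1f} tons of cooling)")

Consolidation concentrates that heat into fewer, denser footprints, which is why SI’s chiller, HVAC, and UPS upgrades go hand in hand with its dense-systems strategy.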

Summary

Simply stated, SI’s business strategy involves providing reliable computing services at a price point that substantially undercuts a retail bank’s internal computing costs as well as the computing costs offered by SI’s competitors. To do this, SI uses technologies that let it deliver highly reliable services, specifically powerful, highly scalable servers with strong Reliability, Availability, and Serviceability (RAS) characteristics. Further, SI relies on technologies such as automated management software that help reduce systems, storage, and network management costs.

By consolidating its systems, using virtual computing, and redesigning its data centers, SI has become a leading pioneer in the implementation of the NEDC model. The company’s aggressive adoption of consolidation and virtual computing, its standardization on SOA infrastructure, and its implementation of efficient data center cooling and energy use have improved its competitive position while also enabling it to serve its customers in an extremely cost-effective manner. SI’s competitors have been slower to react to these changes in data center design, and SI has consistently gained market share.

SI’s use of System z mainframes is seemingly an anomaly, but when comparing System z to other systems architectures, it’s easy to understand why SI has made such a huge commitment to the IBM System z architecture. System z represents an ideal design point for SI: its mainframes can operate at 100 percent capacity for extended periods with little risk of failure, enabling SI to achieve its main operational reliability and efficiency goals. Further, System z’s small footprint is important because data center real estate is limited. Finally, System z’s efficient power consumption makes it an even more attractive offering given rising energy costs.

SI recognized early on the cost and asset utilization advantages it could realize by consolidating its server environments and employing a virtual computing approach with its servers and storage devices. SI also recognized that SOA could simplify the integration of the hardware and software assets resulting from its many acquisitions. By consolidating and virtualizing its information systems architecture, while also implementing SOA, SI has created a distinct competitive advantage over its systems integrator competitors.

SI’s achievements are replicable across data centers of all sizes in multiple industries. The deployment and optimization of dense systems architectures will necessitate some changes in data center design, but SI has proved that running optimized, energy-efficient information systems has a clear, profound, positive impact on competitiveness and profitability.