Operating Systems

Imagine you’re the CIO of a large bank. Your IT department has developed a new service for the bank’s ATMs that promises additional revenue. Unfortunately, once you deploy the service, it creates a larger-than-expected workload increase on the mainframe and on some distributed components. You discover that the service won’t generate enough revenue to justify the cost of adding capacity to support it. Had you known this before deployment, you could have avoided the problem.

Today’s business services involve the complex interaction of mainframe and distributed components. Transactions typically begin at the endpoint, pass through a Web server to an application server, and finally reach the mainframe for processing. Assessing the overall capacity footprint of services is difficult and involves the manual correlation of disparate data from multiple sources, both mainframe and distributed.

What if you could see, in a single enterprisewide view, the capacity consumed by a transaction across all layers of the infrastructure—from the endpoint to the mainframe? What if this view showed the cost implications of that consumption? The resulting visibility would reveal where you’re spending money on capacity and enable you to optimize that spending.

Here are three strategic requirements for making this enterprisewide approach to capacity optimization a reality:

Achieve end-to-end visibility: Monitors gather reams of operational data from IT infrastructure components across the enterprise. The problem is that the data is scattered across multiple sources. To transform it into actionable information, capacity planners must assemble, correlate, and analyze the data manually, a costly and time-consuming effort.

What they need are tools that analyze and correlate this data to track the capacity footprint of transactions end-to-end, across all distributed and mainframe components, and present this information visually in a consolidated, easy-to-read view. This view should annotate capacity information with Key Performance Indicators (KPIs) that are meaningful to both mainframe and distributed systems capacity planners.
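To make the idea concrete, here is a minimal sketch of that kind of correlation, assuming each monitoring source tags its measurements with a shared transaction ID. The record formats, tier names, and CPU figures are hypothetical, not any particular tool’s data model:

```python
from collections import defaultdict

# Hypothetical per-transaction CPU samples (seconds) from three tiers.
web_samples = [("txn-001", 0.04), ("txn-002", 0.05)]
app_samples = [("txn-001", 0.31), ("txn-002", 0.27)]
mainframe_samples = [("txn-001", 0.12), ("txn-002", 0.15)]

def correlate(*sources):
    """Merge per-transaction CPU usage from every tier into one view."""
    footprint = defaultdict(dict)
    for tier, samples in sources:
        for txn_id, cpu_seconds in samples:
            footprint[txn_id][tier] = cpu_seconds
    return footprint

view = correlate(("web", web_samples),
                 ("app", app_samples),
                 ("mainframe", mainframe_samples))

for txn_id, tiers in sorted(view.items()):
    total = sum(tiers.values())
    print(f"{txn_id}: total {total:.2f}s across {tiers}")
```

Real monitoring data is far messier than this, but the principle is the same: one key that follows the transaction across every tier turns scattered measurements into a single end-to-end footprint.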

The tools should also apply predictive analytics to the data to uncover potential capacity bottlenecks, enabling the IT staff to take action to avoid service disruptions. For example, through trend analysis, the tools could detect the impending capacity saturation of a particular physical server and alert the IT staff. Predictive analytics also enable planners to assess the capacity impact of business services before deploying them.
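As an illustration of the trend-analysis idea, here is a minimal sketch that fits a line to recent utilization samples and estimates when a server will cross an alert threshold. The samples, the 85 percent threshold, and the assumption of linear growth are all illustrative:

```python
from statistics import linear_regression  # Python 3.10+

# Hypothetical daily CPU utilization (percent busy) for one server.
days = [0, 1, 2, 3, 4, 5, 6]
cpu_util = [62.0, 63.5, 64.8, 66.4, 67.9, 69.1, 70.6]

slope, intercept = linear_regression(days, cpu_util)

SATURATION = 85.0  # alert threshold, percent busy
if slope > 0:
    days_left = (SATURATION - cpu_util[-1]) / slope
    print(f"Growing ~{slope:.2f} points/day; "
          f"~{days_left:.0f} days until {SATURATION}% busy")
else:
    print("No upward trend detected")
```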

Take a business-oriented approach: It’s important to understand capacity consumption from the business perspective. That requires visibility into how IT infrastructure components are being used by the business services they support. To achieve this, the end-to-end view should characterize each workload with respect to its business relevance. Planners can then drill down for more detailed capacity consumption data. For example, a mainframe planner can drill down to determine how a service is using various mainframe components. The mainframe provides especially valuable data that gives insight into the business implications of capacity consumption.
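As a rough sketch of such drill-down, assume capacity records have already been tagged with the business service they support; the services, components, and MSU figures below are hypothetical:

```python
records = [
    {"service": "ATM cash withdrawal", "component": "CICS region A", "msu": 14},
    {"service": "ATM cash withdrawal", "component": "Db2 subsystem",  "msu": 9},
    {"service": "Online bill pay",     "component": "CICS region B",  "msu": 6},
]

def by_service(recs):
    """Top-level view: total consumption per business service."""
    totals = {}
    for r in recs:
        totals[r["service"]] = totals.get(r["service"], 0) + r["msu"]
    return totals

def drill_down(recs, service):
    """Detail view: how one service spreads across mainframe components."""
    return {r["component"]: r["msu"] for r in recs if r["service"] == service}

print(by_service(records))                       # the business-level view
print(drill_down(records, "ATM cash withdrawal"))  # one service, by component
```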

Stay cost-focused: Traditionally, capacity planning has been divided between mainframe and distributed system specialists. The problem is that the two groups don’t typically use common metrics or terminology. To work together effectively, mainframe and distributed systems planners need a view that employs metrics meaningful to both.

A common metric is cost because, after all, the major purpose of capacity optimization is to maximize cost-effectiveness. Consequently, the enterprise view should include the cost implications of capacity consumption so planners can see where money is being spent on capacity and optimize spending. For example, they may see an opportunity to move portions of a mainframe workload onto lower-cost distributed resources.
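A back-of-the-envelope version of that comparison might look like the following sketch; the cost rates, MSU consumption, and server count are hypothetical placeholders, not benchmarks:

```python
# Hypothetical monthly rates for expressing both platforms in dollars.
MAINFRAME_COST_PER_MSU_MONTH = 1500.0
DISTRIBUTED_COST_PER_SERVER_MONTH = 900.0

workload_msu = 40    # mainframe capacity the workload consumes
servers_needed = 12  # distributed servers to host the same workload

mainframe_cost = workload_msu * MAINFRAME_COST_PER_MSU_MONTH
distributed_cost = servers_needed * DISTRIBUTED_COST_PER_SERVER_MONTH

print(f"Mainframe:   ${mainframe_cost:,.0f}/month")
print(f"Distributed: ${distributed_cost:,.0f}/month")
if distributed_cost < mainframe_cost:
    print(f"Potential saving: ${mainframe_cost - distributed_cost:,.0f}/month")
```

A shared dollar metric like this is what lets mainframe and distributed planners weigh the same trade-off in the same terms.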

What About the Cloud?

The rapid adoption of cloud computing is making capacity optimization even more challenging. Business services often rely on both private and public cloud components, some of which may be running on the mainframe. The strategies for achieving enterprisewide capacity optimization can be extended to the cloud as well, making capacity optimization achievable across a truly hybrid IT environment that incorporates mainframe, distributed, and cloud resources.