
The Drive to Optimize Costs

According to Howard Rubin, founder and CEO of Rubin Worldwide and a pioneer of technology economics, if IT were a country, it would have the fourth-largest Gross Domestic Product (GDP) in the world, an estimated $4.5 trillion, behind the U.S., China, and Japan. The average company is estimated to have spent 3.5 percent of its revenue and 4.3 percent of its operating expenses on information technology in 2011. Fifty-seven percent of IT expenditures were for costs associated with infrastructure (source: Rubin International, “Technology Economics: The Economics of Computing—The Internal Combustion Mainframe,” Oct. 26, 2011). So, as necessary as IT is to the business, it’s also quite expensive.

Even though a large mainframe may run many applications and be shared across many parts of the business, it often carries the largest individual line items of expense. Those big-ticket expenses make the mainframe a target for cost-cutting. IT must make every platform, including the mainframe, more cost-effective, which means constantly juggling the competing priorities of increasing availability and performance while lowering expenses.

The Risks of Haphazard Mainframe Management

Some organizations approach mainframe management haphazardly, and that creates risk. Performance thresholds are one of the basic building blocks of mainframe management, and managing them improperly undermines availability and performance practices. If you don’t stay on top of how performance thresholds are set, and how many alerts your current availability thresholds are firing, it becomes tempting to ignore alerts that carry no meaning. Meaningless alerts just add traffic to the system and make significant alerts harder to identify.

A flood of unnecessary alerts may also mean real, underlying problems aren’t being caught: a user or business-critical application could be suffering performance or availability issues that go undetected.

The business must constantly revisit thresholds, but some may have been set by a guru who is no longer there. That’s why it’s important to answer these questions: Do you know why each threshold was set? Do you have a history of each metric, showing whether it aligned with a performance problem, why it was necessary, and whether it should be changed? It’s difficult to keep thresholds dynamic, attuned to business needs, if doing so relies on institutional memory or guesswork.
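One way to escape reliance on institutional memory is to record the rationale alongside each threshold. The following is a minimal sketch in Python of what such a record might hold; the field names, metric identifier, and values are hypothetical illustrations, not any product’s actual schema.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ThresholdRecord:
    """Captures why a threshold exists, so the knowledge outlives any one guru."""
    metric: str                  # hypothetical metric identifier
    warning_level: float         # value at which a warning alert fires
    critical_level: float        # value at which a critical alert fires
    rationale: str               # why this threshold was set
    set_by: str                  # who set it
    set_on: date                 # when it was set
    linked_incidents: List[str] = field(default_factory=list)  # problems it actually caught
    review_by: Optional[date] = None                           # when it should be revisited

# Hypothetical example entry: an auditable answer to "why was this threshold set?"
record = ThresholdRecord(
    metric="DB2.bufferpool.hit_ratio",
    warning_level=90.0,
    critical_level=80.0,
    rationale="Hit ratios below 80% correlated with batch overruns in Q3.",
    set_by="jsmith",
    set_on=date(2011, 10, 1),
    linked_incidents=["INC-1042"],
    review_by=date(2012, 4, 1),
)

With a record like this for every threshold, “why was this set?” becomes a lookup rather than an archaeology project.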

Or perhaps you know some thresholds need adjusting to be more meaningful, but researching them keeps getting postponed. The right methodology, supported by the right technology, can help. DB2 alone may expose more than 10,000 metrics on which thresholds could be set. So how do you minimize the IT resources spent while optimizing application availability? With 10,000 metrics in play, how do you know which ones are worthwhile?

An effective way is to organize these metrics into groupings that let you view data from a business perspective. Setting 10,000 thresholds individually isn’t practical; the attempt virtually guarantees that some metrics get missed and others get overweighted. What’s required is the capacity to pre-select metrics based on Key Performance Indicators (KPIs) and business needs, as the sketch below illustrates.
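Here is a minimal Python sketch of the idea; the KPI names and metric identifiers are invented for illustration, and a real monitoring product would draw these groupings from its own catalog.

# Hypothetical mapping from business-facing KPIs to the metrics that drive them.
KPI_GROUPS = {
    "order_entry_response_time": [
        "DB2.thread.wait_time",
        "DB2.lock.suspensions",
        "CICS.transaction.response_time",
    ],
    "batch_window_completion": [
        "DB2.bufferpool.hit_ratio",
        "zOS.cpu.utilization",
    ],
}

def metrics_for_kpis(selected_kpis):
    """Return the deduplicated set of metrics worth monitoring for the chosen KPIs."""
    metrics = set()
    for kpi in selected_kpis:
        metrics.update(KPI_GROUPS.get(kpi, []))
    return sorted(metrics)

# Only metrics tied to business-critical KPIs get thresholds and alerts.
print(metrics_for_kpis(["order_entry_response_time"]))

The point of the grouping is selection: instead of reasoning about 10,000 metrics one at a time, you reason about a handful of KPIs and let the groupings pull in the metrics that matter.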

False alarms are a nuisance, and if enough go off, someone will start clamoring to get them fixed. Likewise, if an alarm should have fired on a business-critical issue but didn’t, your organization will eventually be driven to zero in on that problem and fix it. But waiting for something to break isn’t advisable. Instead, initiate more proactive, systematic approaches to setting performance thresholds.
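One such systematic approach is to derive a threshold from a metric’s own recent history rather than from hand-picked static values. The Python sketch below uses a mean-plus-k-standard-deviations rule; the rule, the sample data, and the function name are illustrative assumptions, not a prescribed method.

import statistics

def dynamic_threshold(history, k=3.0):
    """Derive a threshold from recent history: mean plus k standard deviations.

    The metric alerts only when it deviates meaningfully from its own baseline,
    instead of tripping a static value someone set years ago.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return mean + k * stdev

# Hypothetical sample: the last 24 hourly readings of a response-time metric (ms).
samples = [112, 118, 109, 121, 115, 117, 110, 119, 116, 114,
           120, 113, 111, 118, 122, 115, 117, 109, 116, 114,
           119, 112, 118, 115]

limit = dynamic_threshold(samples)
latest = 162
if latest > limit:
    print(f"ALERT: {latest} ms exceeds baseline-derived threshold {limit:.1f} ms")

Recomputing the baseline on a rolling window keeps the threshold attuned to current behavior, so alerts stay meaningful without anyone waiting for something to break.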
