Jan 8 ’13

How to Reduce System z Software Costs

by Nick Pachnos in Enterprise Tech Journal

Keeping a lid on IT costs remains the top priority for IT organizations worldwide. That’s the opinion of nearly 70 percent of participants in BMC Software’s seventh annual survey of mainframe users. This represents a significant increase since the 2011 survey, when 60 percent identified cost containment as a major focus.

As consumers and business people demand anywhere, anytime access to information and applications, IT is increasingly turning to the mainframe to exploit its superior availability, security, centralized data serving, and performance. Ninety percent of the nearly 1,300 survey respondents consider the mainframe a long-term solution, with 50 percent expecting it to attract new workloads.

These new workloads are likely to require additional processing capacity. MIPS aren’t cheap, which means IT must find a way to balance support for increasing demand on the mainframe with the mandate to cut costs.

Although the mainframe is quite cost-effective, it’s under intense scrutiny. As a shared processing environment, it has large, highly visible hardware- and software-related line items on the expense ledger. That has always been true, but there are two big differences now:

• The need to stay competitive in this economy is forcing companies to take drastic cost-cutting measures.
• Top managers recognize that the mainframe houses strategic data and performs business-critical processing. However, they don’t always understand how it generates revenue.

So, how do you reduce mainframe costs while ensuring top performance? This article describes three ways: reducing peak utilization, optimizing application performance, and improving business availability.

Reducing Peak Utilization

This is a good place to start because it can yield significant savings. Some background on the mainframe cost structure helps illustrate the point.

Most System z customers pay for their systems-related software using a workload-based pricing model. The approach is similar to the way people pay their energy bills, and it’s based on a monthly license charge metric: users pay for their system and subsystem software based on the peak four-hour rolling average (4HRA) utilization in a particular month.

Customers that can reduce peak utilization can realize significant savings in monthly software charges. Before doing this, however, you need a fundamental understanding of the peak. What makes up the peak? When does the peak occur? And what is running at peak times and on which machines?

Mainframe management software can enhance visibility into your peak usage. The software should show where peaks occur and which workloads contribute to the peak on which machine. You may be running subsystems and system software, batch software, and application software. Everything is counted in the peak.

Any action you take that reduces the peak, even if it involves just one component of that peak, will reduce all workload-based systems software charges that run during the peak. For example, if your peak four-hour rolling average is 1,000 MSUs and you can move 100 MSUs out of the peak period, you can save on every piece of workload-based systems software running on that machine.
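
To make that arithmetic concrete, here is a simplified sketch in Python of how a four-hour rolling average peak might be computed from interval utilization data, which workloads contribute to it, and what happens when batch work is deferred out of the peak window. The workload names, hourly figures, and single-machine scope are hypothetical, and real sub-capacity reporting follows IBM’s SCRT rules, which are considerably more involved than this toy model.

    # Simplified 4HRA illustration. Sample data and workload names are
    # hypothetical; real sub-capacity reporting (SCRT) is more involved.
    hourly_msus = {
        "CICS/DB2 online":   [400] * 8 + [800] * 10 + [400] * 6,
        "Batch maintenance": [500] * 4 + [0] * 14 + [250] * 6,
        "Monitors, other":   [100] * 24,
    }

    def peak_4hra(hourly_totals):
        """Return (peak value, starting hour) of the 4-hour rolling average."""
        best, best_hour = 0.0, 0
        for h in range(len(hourly_totals) - 3):
            avg = sum(hourly_totals[h:h + 4]) / 4
            if avg > best:
                best, best_hour = avg, h
        return best, best_hour

    totals = [sum(w[h] for w in hourly_msus.values()) for h in range(24)]
    peak, hour = peak_4hra(totals)
    print(f"Peak 4HRA: {peak:.0f} MSUs, starting at hour {hour}")

    # Which workloads make up that peak window?
    for name, series in hourly_msus.items():
        share = sum(series[hour:hour + 4]) / 4
        print(f"  {name}: {share:.0f} MSUs of the peak")

    # Effect of moving 100 MSUs of batch work out of the peak window:
    deferred = [t - 100 if hour <= h < hour + 4 else t
                for h, t in enumerate(totals)]
    print(f"Peak 4HRA after deferral: {peak_4hra(deferred)[0]:.0f} MSUs")

In this made-up example, the peak drops from 1,000 MSUs to 900, and that reduction flows through to every piece of workload-based systems software billed on that machine’s peak.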

The Role of Management Software

Capacity management software can give you a historical view of your utilization on a particular physical machine. If you wish to lower your peak by deferring, moving, or consolidating workloads, capacity management software will show you the effect on the response time and performance of those applications. Database maintenance on databases supporting business workloads generally runs during non-peak times. However, one large retailer looked at a usage report and saw that a particular machine actually peaked at 4 a.m. because of substantial batch database maintenance work. That was the peak this retailer was paying for. In this case, more efficient DB2 management solutions could have lowered the 4 a.m. peak and saved money on that particular machine.

For almost every user, monitors consume more resources during peak processing. More is going on, so there are more activities to monitor. That means more efficient monitoring can also reduce the peak.

Both CICS and DB2 tend to do more work during peak processing periods. So, in general, anything you do to make CICS and DB2 more efficient around the clock also makes them more efficient during the peak. Even small efficiency gains across these areas can add up to hundreds of thousands of dollars in savings each month.

Capping: Pro and Con

Many customers set a cap on peak usage as a cost-control measure. For example, if you decide you don’t want to pay for more than 1,000 MSUs, you can set a cap and the processor won’t let your system run above that level.

The drawback is obvious. If you set the cap too low, the system will throttle performance, and you run the risk of hampering your revenue-generating potential. Business-critical online processing always gets the highest priority, so you set weighting factors for those online processes higher than for less critical processes and batch. Even so, there is a point at which the cap starts constraining the business itself.

If you can see a performance problem coming, you can prevent it by raising the cap. However, you must do so before you reach the cap, because once you hit it, the system will immediately begin throttling performance, including the performance of business-critical services. This is another place where management software can help. If your management software can monitor the rolling four-hour average utilization in real time on specific machines, it can send you an alarm before you reach your cap.
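
As a minimal illustration of that kind of alert, here is a Python sketch that tracks the rolling four-hour average and warns when it crosses a margin below the cap. The 1,000-MSU cap, the 90 percent warning margin, and the stream of five-minute samples are all hypothetical; a real monitor would draw its data from interval records such as SMF and use the vendor’s own alerting facilities.

    from collections import deque

    CAP_MSUS = 1000              # defined capacity for this machine (hypothetical)
    WARN_AT = 0.90 * CAP_MSUS    # alert at 90 percent of the cap

    # Keep the last 4 hours of 5-minute samples: 48 readings.
    window = deque(maxlen=48)

    def on_new_sample(msus):
        """Call every 5 minutes with the latest MSU reading."""
        window.append(msus)
        four_hour_avg = sum(window) / len(window)
        if four_hour_avg >= WARN_AT:
            print(f"ALERT: 4HRA is {four_hour_avg:.0f} MSUs, "
                  f"{four_hour_avg / CAP_MSUS:.0%} of the {CAP_MSUS}-MSU cap")
        return four_hour_avg

    # Example: readings climbing toward the cap eventually trigger the alert.
    for sample in [850, 900, 950, 980, 1000]:
        on_new_sample(sample)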

There’s a growing challenge in managing the cap this way: How will you recognize the patterns that indicate you’re about to exceed your cap? In the old days of stable processing with known usage patterns, this was easier. In today’s environment of mobile-enabled business applications, those patterns are dynamic, driven by customer behavior that changes significantly and continuously. You will need to rely on systems management technology to learn the changing patterns automatically and determine the correct threshold for raising an alert.
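
One simple way to picture the idea, greatly simplified relative to what commercial systems management products actually do, is to learn a baseline from recent history for each hour of the day and flag usage that runs unusually far above its own baseline rather than relying on a single fixed percentage of the cap. The history values, the single 9 a.m. hour, and the two-standard-deviation margin below are hypothetical.

    from statistics import mean, stdev

    # Four weeks of 9 a.m. MSU readings, one per day. In practice this history
    # would come from SMF/RMF data; these numbers are made up.
    history_9am = [620, 640, 610, 650, 700, 690, 660,
                   630, 655, 645, 670, 710, 700, 680,
                   640, 660, 650, 690, 720, 715, 700,
                   655, 670, 665, 700, 730, 725, 710]

    baseline = mean(history_9am)
    threshold = baseline + 2 * stdev(history_9am)   # "unusually high" for 9 a.m.

    todays_9am = 905
    if todays_9am > threshold:
        print(f"9 a.m. usage of {todays_9am} MSUs is well above its learned "
              f"baseline of {baseline:.0f} MSUs (threshold {threshold:.0f}); "
              f"review the cap before the rolling average catches up")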

Of course, software that’s efficient and intelligent is especially helpful in lowering IT-related costs. For example, efficient system monitoring software uses fewer resources, and efficient database utilities use less CPU time. Effective management software can also help you more aggressively offload eligible work to specialty processors, such as the System z Integrated Information Processor (zIIP), whenever appropriate.

Intelligent, “advisor-like” software can help you lower costs by determining whether you even have to do maintenance on parts of the system. For example, many customers routinely reorganize hundreds of databases every weekend, whether or not it’s necessary. Using good management software, you can determine that, say, only 20 databases need to be reorganized this weekend. The most efficient way of processing is not to process at all.
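
To illustrate that kind of decision, here is a small Python sketch of conditional reorganization: only the objects whose statistics say they need attention make the weekend list. The object names, statistics, and threshold values are made up; real DB2 utilities and management tools base the same kind of decision on catalog and real-time statistics.

    # Reorganize only the objects whose statistics say they need it.
    # Object names, statistics, and limits below are hypothetical.
    # (name, % rows out of clustering sequence, % rows reached indirectly)
    object_stats = [
        ("ORDERS.TBL01",    2.1,  0.5),
        ("ORDERS.TBL02",   14.7,  3.2),
        ("INVENTORY.TBL01", 0.8,  0.1),
        ("CUSTOMER.TBL05", 22.3, 11.0),
    ]

    UNCLUSTERED_LIMIT = 10.0   # reorg if this share is out of sequence
    INDIRECT_LIMIT = 5.0       # or if this share needs an extra I/O to reach

    needs_reorg = [name for name, unclustered, indirect in object_stats
                   if unclustered > UNCLUSTERED_LIMIT or indirect > INDIRECT_LIMIT]

    print("Reorganize this weekend:", needs_reorg)   # 2 of 4 objects, not all 4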

Optimizing Application Performance

Another way to reduce mainframe costs is to tune your applications to reduce resource consumption and to prevent and remediate problems. Management software is particularly good at this. For example, poorly performing SQL can waste a great deal of CPU time. Even a few inefficient statements can cost dearly, especially if they query a back-end DB2 database hundreds of thousands of times per hour.

Technicians rarely have time to analyze the code, but management software can analyze it automatically and quickly identify where the bottlenecks are. That lets you write better-performing SQL, significantly lowering DB2 CPU consumption and the cost that comes with it.
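
The following Python sketch shows the shape of one common problem and its fix: issuing the same lookup once per item instead of once per set. The connection object, table, and column names are hypothetical stand-ins, and you would need a real database connection to run it; only the access pattern matters here.

    def account_balances_slow(conn, account_ids):
        """One DB2 statement execution (and round trip) per account."""
        cur = conn.cursor()
        balances = {}
        for acct in account_ids:
            cur.execute("SELECT BALANCE FROM ACCOUNTS WHERE ACCT_ID = ?", (acct,))
            balances[acct] = cur.fetchone()[0]
        return balances

    def account_balances_fast(conn, account_ids):
        """A single set-based statement retrieves all the rows at once."""
        cur = conn.cursor()
        placeholders = ", ".join("?" for _ in account_ids)
        cur.execute("SELECT ACCT_ID, BALANCE FROM ACCOUNTS "
                    "WHERE ACCT_ID IN (" + placeholders + ")",
                    tuple(account_ids))
        return dict(cur.fetchall())

Run hundreds of thousands of times per hour, the difference between those two shapes is exactly the kind of CPU waste that shows up in the peak.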

Some companies have reported they’ve reduced MIPS requirements for major business applications by as much as 35 percent. Imagine the cost benefit you would see if you reduced your peak needs by 35 percent!

Improving Business Availability

Minimizing planned and unplanned outages can substantially improve business availability. Remember, the concept of availability has changed. We’re more impatient now for several reasons, including our reliance on mobile devices. If you have an iPad, it’s likely you have no patience even for starting up a laptop.

With an iPad, it’s almost instantaneous to go online and do business with a vendor. If that vendor isn’t available, you can easily and rapidly go somewhere else. If you think of the culture and expectations of IT today, you can understand why the demand for full availability is more prevalent than ever.

For example, a large retailer used to take a four-hour outage every weekend, from 2 a.m. to 6 a.m., for DB2 maintenance. One day, someone asked an interesting question: Are any customers actually trying to buy something online during those outages? After some research, someone found the answer: Yes, about 60,000 customers per week.

The retailer was shocked to learn it had been losing substantial revenue every week without even suspecting it, though in hindsight it probably shouldn’t have been surprised. This is the new availability: people want things when they want them. The retailer used a database utility to reduce the maintenance window from four hours a week to one hour a month.

Three Points to Remember

First, before you can begin to lower your IBM software costs, you need to know the when, what, and where of your peak utilization. You need to know when it occurs, what it includes, and where it’s located (which machine). You also must understand the implications of manipulating workload out of the peak period.

Second, it’s not easy to know all that, so you need management tools that help you visualize what’s included in the peak period and analyze various scenarios for lowering it.

Third, remember that management software decreases outage time and improves application performance. Consequently, it enables more customer transactions, thereby driving increased income.

Intelligent use of currently available management tools can dramatically reduce System z mainframe costs. These tools can optimize mainframe costs around the clock and within peak utilization periods, while maintaining or improving performance.