Imagine looking up at a steep wall. You’re a fireman, responsible for the safety and wellbeing of everyone on both sides of that wall. Suddenly, you smell something burning. Looking up at the wall, you see smoke streaming over from the other side. You rush to grab a hose, but where do you aim it? You know the fire is somewhere on the other side, but where is its source?
This is the problem many mainframe administrators face daily when they begin to look at their system’s performance. They may know there’s a problem—something is devouring CPU capacity at an alarming rate—but isolating the problem and solving it are significantly more daunting (and usually more expensive) tasks. Just like our poor fireman, lack of visibility is the key culprit. It’s highly likely that in a mainframe environment, the undetected source of this fire will be an application. With traditional mainframe performance management, visibility is limited to the operating system and subsystems. However, automated Application Quality Management (AQM) allows for proactive monitoring and tuning on the application layer, the most likely location for performance-throttling problems. This helps you identify and correct problems before they flare up and become full-blown infernos.
Automated AQM tools are a preventive measure: they automatically detect discrepancies in the response times of objects on the application layer, identifying problems before they become costly. Using smart logic, they provide detailed evaluations of computing times, resource usage trends, and potential bottlenecks quickly and accurately, allowing for proactive correction and tuning.
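The core idea of that discrepancy detection can be sketched in a few lines. This is purely an illustration, not any vendor's AQM product: the function name, the transaction names, and the 25 percent tolerance are all assumptions made for the example.

```python
# Illustrative sketch of response-time drift detection (hypothetical names
# and threshold): flag transactions whose current average CPU time has
# drifted beyond a tolerance from their recorded baseline.

def flag_outliers(baseline, current, tolerance=0.25):
    """Return transactions whose CPU time grew more than `tolerance`
    (as a fractional increase) over their baseline."""
    flagged = []
    for name, base_time in baseline.items():
        now = current.get(name)
        if now is not None and base_time > 0 \
                and (now - base_time) / base_time > tolerance:
            flagged.append(name)
    return flagged

# Baseline vs. current average CPU seconds per call (made-up figures).
baseline = {"ORDER-ENTRY": 0.050, "INVOICE": 0.120, "LOOKUP": 0.010}
current  = {"ORDER-ENTRY": 0.085, "INVOICE": 0.125, "LOOKUP": 0.010}

print(flag_outliers(baseline, current))  # ORDER-ENTRY grew 70%, past 25%
```

A real tool would of course work from continuously collected measurement data rather than two snapshots, but the principle is the same: compare against a baseline and surface the drift before it shows up as a capacity problem.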
The benefits of automated AQM are many and can be measured on three specific levels:
- The system benefits. You can eliminate or postpone costly CPU upgrades.
- The human benefits. Your application developers and your system management and tuning experts are freed from constantly putting out fires.
- The business benefits to your application’s users.
The System Benefits
Several market research firms have confirmed the continued importance of the mainframe and predict a slight increase in the market over the next five or six years. These same analysts also predict significantly greater demands on existing machines. This combination of new, more demanding requirements on older machines has traditionally meant one thing: upgrades. Whether hardware or software, this is the equivalent of throwing money at the problem.
Under these circumstances, implementing automated AQM becomes an even more logical alternative to upgrading mainframe processing power. In contrast to the open systems world, mainframe hardware and software upgrades can be costly. In fact, the balance of money spent on upgrades has shifted from hardware to software, to a ratio of about 30:70.
The objective is to postpone any upgrade as long as possible. A cost/benefit analysis of the potential solutions for CPU bottlenecks usually favors measurement and tuning at the application level. An automated AQM solution, comprising proactive optimization at the application level and automated tools for analysis and tuning, can free up 30 to 90 percent of consumed mainframe resources.
Even if a minor hardware upgrade is still required, it’s a significant cost savings over a wholesale software and hardware upgrade.
To put this into practical terms, assume your company has a transaction running on the mainframe with an average CPU time of only 0.05 seconds per call. This single transaction may not register as a “problem,” so it doesn’t appear on your performance radar. However, if it is called 150,000 times per day, half of that during peak time, the result is more than 125 minutes of CPU time daily, or 31,250 minutes over 250 working days annually. At a rate of $12 per CPU minute, the annual cost is about $375,000.
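The arithmetic behind that figure is worth making explicit; the numbers below are the article's own, with the 250 working days stated as the assumption that links the daily and annual figures.

```python
# Worked version of the cost example above. All figures come from the
# article; working_days = 250 is the assumption implied by its annual total.
cpu_seconds_per_call = 0.05
calls_per_day = 150_000
working_days = 250
cost_per_cpu_minute = 12.0  # dollars

daily_minutes = cpu_seconds_per_call * calls_per_day / 60   # 125.0
annual_minutes = daily_minutes * working_days               # 31,250
annual_cost = annual_minutes * cost_per_cpu_minute          # 375,000

print(f"{daily_minutes:.0f} CPU min/day -> ${annual_cost:,.0f}/year")
```

Plugging in your own per-call CPU time, call volume, and chargeback rate shows quickly whether a seemingly harmless transaction is worth a tuning pass.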