If algorithm-juggling mathematicians are the rock stars of the 21st-century Web, then capacity planners may just be the rock stars of the 21st-century data center. Equipped with special tools and methods, the capacity planner helps keep the business running by ensuring that critical back-end systems (such as order entry) and customer-facing applications (such as websites) have the capacity they need to deliver good performance.
But modern capacity management is a constant race to keep up with business change. Senior management expects answers—and action—at a faster pace. No more collecting data, analyzing it, and coming back with a report months later. First-cut answers are now required in early business planning cycles, where go or no-go decisions are made. IT budgets are much tighter, and businesses can no longer rely on over-provisioning or over-staffing.
If a mainframe is part of your IT infrastructure, it’s typically a vital and costly IT asset, and the capacity planner plays a critical role in helping control associated costs. To keep up with the increased level of participation in the business cycle, the capacity planner needs to become more efficient and be ready with the right answers to meet the challenge.
Two Simple Goals
In today’s business climate, mainframe capacity management comes down to two simple goals: get the most out of what you have, and spend less time and money doing so.
For example, you can combine work to occupy a smaller footprint on the mainframe, including hardware, software, power, and space. Or you can rearrange work in such a way as to pay less for software licenses.
To meet these goals, concentrate on the work that’s most important to the business and on the most cost-effective ways to handle it. Focus first on managing what most affects the business rather than trying to manage everything.
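To make the "rearrange work to pay less for licenses" idea concrete: many sub-capacity mainframe software licenses are priced from the peak rolling average of CPU consumption, so shifting deferrable work away from the peak window can lower the bill without reducing the total work done. The sketch below illustrates this with hypothetical hourly usage figures (in arbitrary MSU-like units); the numbers and the four-hour window are assumptions for illustration, not from any real billing model.

```python
# Hypothetical illustration: shifting deferrable work off the peak lowers
# the peak rolling average that sub-capacity license charges are based on.
# All usage figures are made up for this example.

def peak_rolling_average(hourly_usage, window=4):
    """Return the highest average over any consecutive `window`-hour span."""
    averages = [
        sum(hourly_usage[i:i + window]) / window
        for i in range(len(hourly_usage) - window + 1)
    ]
    return max(averages)

# Same total work (4,090 units) in both schedules; the second spreads
# deferrable batch work into the quieter evening hours.
before = [300, 320, 700, 720, 710, 690, 350, 300]
after  = [300, 320, 560, 580, 570, 550, 610, 600]

print(peak_rolling_average(before))  # 705.0
print(peak_rolling_average(after))   # 582.5
```

In this hypothetical case, rescheduling the same workload cuts the peak four-hour rolling average by roughly 17 percent, which is the figure a usage-based license charge would typically key on.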
The Number-One Challenge
Managing the capacity of a mainframe differs from managing the capacity of distributed platforms. For example, underutilized hardware is common in the distributed IT world, where servers rarely come close to full capacity.
By comparison, it’s common for mainframes to run near 100 percent capacity utilization during peak periods. If you take latent demand into account (i.e., work that has already entered the system and is queued, waiting for available resources), mainframes can effectively run beyond 100 percent. That isn’t a bad situation, as long as the work is properly prioritized. The main challenge on the mainframe is prioritizing work and scheduling it to use existing resources most effectively; the things you need to monitor, manage, and resolve differ from those in the distributed world. Here’s an example:
The marketing department of an international cell phone manufacturer creates an aggressive new promotional campaign that, if successful, could increase usage of the order-entry system by 40 percent. The order-entry system typically uses a certain percentage of mainframe capacity. Increasing the application’s usage by 40 percent will push the mainframe beyond what’s believed to be necessary to ensure the all-important billing batch jobs keep meeting their Service Level Agreements (SLAs).
Timing of these batch jobs is critical because it affects revenue recognition. But the promotional campaign is also critical because it will bring in new revenue. The additional workload will cause the mainframe to surpass 95 percent—the point at which most companies would normally buy a new machine. But this company has neither the budget nor the room for another mainframe. IT must tell marketing it can’t run the promotion.
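The projection behind this scenario is simple arithmetic: total utilization grows by the order-entry application's share of capacity multiplied by its growth. The sketch below works through one hypothetical set of numbers (85 percent current utilization, order entry at 30 percent of capacity); neither figure comes from the article, and the 95 percent trigger is the one the scenario cites.

```python
# Back-of-the-envelope projection for the campaign scenario above.
# The current-utilization and application-share figures are assumptions
# chosen for illustration only.

def projected_utilization(current_util, app_share, app_growth):
    """Project total utilization when one application's usage grows.

    current_util -- current total utilization (0.85 = 85%)
    app_share    -- fraction of total capacity the application uses today
    app_growth   -- fractional growth in that application's usage (0.40 = 40%)
    """
    return current_util + app_share * app_growth

util = projected_utilization(0.85, 0.30, 0.40)
print(f"Projected peak utilization: {util:.0%}")  # 97%, past the 95% trigger
```

Under these assumed figures, the 40 percent campaign growth adds 12 points of utilization, pushing the machine from 85 percent to 97 percent and past the 95 percent buy-a-new-machine threshold.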