Extremely high performance is easy to achieve if you’re prepared to spend a lot of money. Unfortunately, few IT groups can do that. A driver can get extremely high performance by purchasing, say, a Maserati GranTurismo. However, most drivers can’t afford such a car and must generally settle for less performance. They want good performance at a good price; so do enterprise executives. IT is evolving toward a better balance between performance and price.
How Analytics Help
Analytics help lower the cost of delivering better performance by automating repetitive, labor-intensive monitoring tasks. Many years ago, performance analytics on the mainframe existed only in the mind of the systems programmer or administrator, who theoretically was constantly monitoring system performance. For example, a CICS technician would watch an application and keep pressing the ENTER key to gauge response time. The technician would observe how closely the response was approaching some threshold. Typically, the threshold would exist only in the technician’s mind, not in an actual Service Level Agreement (SLA).
A lot of interaction between IT personnel and the system was required to develop a mental trend line of what was happening in an application. Unfortunately, this method could rarely predict problems before they manifested. And no one had time to keep pressing the ENTER key all day.
The software industry ultimately produced tools that enabled a more efficient process: management by exception, which involved systems monitoring and automatic alerting when certain thresholds were breached. The alerts prompted the attention of someone who could then isolate the problem.
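The management-by-exception process described above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual product; the metric names and threshold values are invented for the example.

```python
# Minimal sketch of management by exception: metrics are sampled
# periodically, and an alert fires only when a threshold is breached.
# Metric names and thresholds are illustrative, not from any product.

THRESHOLDS = {
    "response_time_ms": 500,   # alert if average response time exceeds 500 ms
    "cpu_busy_pct": 90,        # alert if CPU utilization exceeds 90 percent
}

def check_exceptions(sample: dict) -> list:
    """Return an alert for every metric in the sample that breaches its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = sample.get(metric)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {metric}={value} exceeds threshold {limit}")
    return alerts

# A healthy sample produces no alerts; a breach surfaces an exception
# that prompts someone's attention.
print(check_exceptions({"response_time_ms": 320, "cpu_busy_pct": 75}))
print(check_exceptions({"response_time_ms": 650, "cpu_busy_pct": 75}))
```

The key property is that healthy samples generate no output at all; a human is interrupted only when something crosses a line.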
Analytics, a capability built on top of that process, gives IT the insight and knowledge needed to preempt a problem. It observes current situations, correlates events, predicts the future, and surfaces higher-order alerts or exceptions that identify and often solve problems before they fully manifest themselves.
Analytics in Action
Analytics allows you to correlate two or more variables. Suppose you’re responsible for providing a certain level of service in the form of transactions per second with a specified response time. You look at response time and transaction rates; both are good. They’re not too high and they’re not trending in the wrong direction.
But if you look at both measurements simultaneously, you may see that performance is outstanding. In fact, it might be too good, simply because the transaction rate is too low. So something’s going wrong, something you weren’t able to see without a side-by-side analysis of transaction rates and performance.
Some companies that use analytics can detect that a problem in an application actually exists on the distributed side of the house: they recognize that the rate of communication with the mainframe is lower than it should be, even though response time looks perfect.
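The scenario above can be sketched as a joint check on the two KPIs. The SLA limit and the expected transaction-rate band below are made up for illustration; the point is that neither metric alone reveals the problem.

```python
# Illustrative sketch of correlating two KPIs, as described above.
# The SLA limit and the expected transaction-rate band are invented.

SLA_RESPONSE_MS = 500           # response time must stay under this limit
EXPECTED_TPS_RANGE = (80, 120)  # normal transactions-per-second band

def assess(response_ms: float, tps: float) -> str:
    """Evaluate response time and transaction rate together."""
    low_tps, high_tps = EXPECTED_TPS_RANGE
    if response_ms > SLA_RESPONSE_MS:
        return "SLA breach: response time too high"
    if tps < low_tps:
        # Response time looks fine, but too few transactions are arriving:
        # the problem is likely upstream, e.g., on the distributed side.
        return "Warning: transaction rate below normal despite good response time"
    if tps > high_tps:
        return "Warning: transaction rate above normal"
    return "OK"

print(assess(response_ms=200, tps=100))  # both KPIs healthy
print(assess(response_ms=200, tps=40))   # the hidden problem surfaces
```

Checked in isolation, a response time of 200 ms passes every test; only the side-by-side view flags that the mainframe is being asked to do too little work.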
Two Real-World Examples
A large insurance company uses analytics to continuously maintain high performance in its mainframe applications. The company automatically monitors Key Performance Indicators (KPIs) on the mainframe, aggregates those indicators into workloads, and creates an application-level view of how the business is operating. Using analytics software, IT personnel can see application performance trends over time—at a high level and close to the business. They’re in a position to predict problems before they affect the business, drill down to causes, and prevent the problems.
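The prediction step in an arrangement like the one above can be sketched with a simple linear trend fit. The KPI history, threshold, and time units below are invented for illustration; real analytics products use far more sophisticated models.

```python
# A toy sketch of trend-based prediction, assuming a simple linear fit
# over (hour, value) samples. The history and threshold are invented.

def predict_breach(history, threshold):
    """Fit a straight line to (hour, value) samples and estimate the hour
    at which the KPI will cross the threshold, if the trend continues.
    Returns None when the trend is flat or improving."""
    n = len(history)
    xs = [h for h, _ in history]
    ys = [v for _, v in history]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no breach predicted on this trend
    return (threshold - intercept) / slope  # hour where the line crosses

# Response time climbing about 20 ms per hour; SLA threshold at 500 ms.
history = [(0, 300), (1, 320), (2, 340), (3, 360)]
print(predict_breach(history, threshold=500))  # 10.0 (hours from now)
```

A forecast like this is what lets IT act before the business notices: the alert arrives hours before the threshold is actually breached, rather than after.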
A large financial services organization in Europe actively uses analytics to ensure mainframe availability and performance. IT personnel deploy analytics tools to monitor application performance and perform analytics at the application and SQL levels. Then they can change individual SQL statements and create permanent application performance improvements.