In the past, organizations planned for known peak traffic periods, such as retailers preparing for Cyber Monday. Starting last year, the trend toward tablets and “couch commerce” began to shift this paradigm. Retailers began to see greater traffic spikes on Thanksgiving evening, as customers shopped from the comfort of their sofas after Thanksgiving dinner. The “anytime, anywhere” nature of mobile devices means traffic is more sporadic. At any given time, thousands of transactions could be touching the mainframe as users process credit card information, search for products or check account balances online. Organizations now need exceptional infrastructure performance, and continuous insight into how every application tier, including third-party services, affects the mainframe.

The Modularization of Software and Applications

The drive to bring Web and mobile applications to market faster and more cost-efficiently is prompting increased adoption of third-party services. Organizations often choose to leverage existing functionality from external third parties rather than developing it on their own. Some of these services (e.g., social media plug-ins, product tours, and ratings and reviews) sit on the front-end and serve to create a richer, more satisfying user experience. Others, such as credit card verification, provide critical functionality and reach all the way to the back-end of the data center and the mainframe to complete transactions. This modularization of software has many benefits for organizations, but the problem with plugging in someone else’s code is that it might not be optimized for your environment, in particular your mainframe.

Avoiding the Perfect Storm

The sharp uptick in mobile Web access, combined with increasing software modularization, can add up to a perfect storm from a performance perspective. Many organizations don’t find this out until it’s too late, when users are complaining on Facebook or Twitter, or the organization has lost significant business to the competition due to slow application response times.

To avert this threat, organizations need to be prepared by proactively monitoring the user experience and finding opportunities for optimization across the complete application delivery chain. Given the high volume of mainframe transactions, even slight mainframe performance optimizations can have a huge impact.
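As a minimal sketch of what such proactive monitoring involves, the Python snippet below periodically times a page load and raises an alert when response time drifts well above a rolling historical baseline. The URL, probe interval and alert threshold are hypothetical placeholders; commercial APM suites do far more, but the underlying idea is the same.

```python
import time
import statistics
import urllib.request

# Hypothetical page to probe; substitute a real endpoint.
URL = "https://www.example.com/banking/login"
BASELINE_SIZE = 50        # samples kept as the rolling historical baseline
ALERT_MULTIPLIER = 3.0    # alert when a load takes 3x the baseline mean

baseline = []             # recent response times, in seconds

def probe(url):
    """Time one full page fetch, end to end."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.monotonic() - start

while True:
    elapsed = probe(URL)
    if len(baseline) >= BASELINE_SIZE:
        mean = statistics.mean(baseline)
        if elapsed > ALERT_MULTIPLIER * mean:
            print(f"ALERT: {elapsed:.2f}s vs. baseline mean {mean:.3f}s")
    baseline.append(elapsed)
    baseline = baseline[-BASELINE_SIZE:]  # keep the window bounded
    time.sleep(60)  # probe once a minute
```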

The Performance Impact of the Mainframe

For example, a leading financial services institution in the U.S. has more than a million online visitors to its retail online banking Website on any given day. Performance monitoring tools detected a significant problem: The typical response time of a page load, normally between two and three seconds, had risen to an average of 19 seconds for all site visitors. Advanced diagnostics helped the firm isolate the source of the issue to a particular DB2 region on the mainframe. All the application servers calling this region were affected.

By comparing current performance levels to a historical baseline, the firm saw that average response time in DB2 had changed from 3 milliseconds to 5 milliseconds. While this may seem like a trivial increase, it caused the 3.32 million transactions conducted during this timeframe to slow down drastically at the users’ browsers; some sessions were even timing out. After identifying, isolating and addressing the source of the problem, database and mainframe administrators restored response times to normal levels. Given the sheer number of transactions affected, the impact on customer satisfaction and the overall business was enormous.
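The article doesn’t explain why two extra milliseconds per call ballooned into 19-second pages, but basic queueing arithmetic shows how it can happen: when a resource is already busy, a small increase in per-request service time can push it past saturation. The sketch below models the DB2 region as a simple M/M/1 queue; the arrival rate is a hypothetical figure chosen for illustration, not the institution’s actual workload.

```python
# Mean response time of an M/M/1 queue: W = 1 / (mu - lam), valid when lam < mu.
def mm1_response_time(service_time_s, arrivals_per_s):
    mu = 1.0 / service_time_s      # service rate in requests/second
    if arrivals_per_s >= mu:
        return float("inf")        # saturated: queue grows without bound
    return 1.0 / (mu - arrivals_per_s)

ARRIVAL_RATE = 300.0               # hypothetical requests/second for the DB2 region

for label, svc in [("baseline 3 ms", 0.003), ("degraded 5 ms", 0.005)]:
    w = mm1_response_time(svc, ARRIVAL_RATE)
    if w == float("inf"):
        print(f"{label}: saturated -> unbounded queueing, timeouts")
    else:
        print(f"{label}: mean response {w * 1000:.1f} ms")
```

Under these assumptions the region responds in about 30 milliseconds at a 3-millisecond service time, but at 5 milliseconds its capacity (200 requests per second) falls below the offered load, the queue grows without bound, and sessions time out, which is consistent with the behavior the firm observed.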

It used to be that the mainframe was a black box; organizations couldn’t see how distributed applications were impacting mainframe performance. A new generation of application performance management (APM) tools gives organizations granular visibility into the transactions running inside the mainframe, showing how an application spends its time and what may be taking too long, combined with deep analysis of application code and logic. Armed with this information, organizations can work more effectively with third-party service providers to ensure their services are tuned for the mainframe environment. Optimizing time-consuming transactions also helps organizations save on MIPS and mainframe resources, an important consideration given the high expense of mainframe processing time.
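No vendor tooling is reproduced here, but the core idea of attributing a transaction’s time to the tiers it crosses can be sketched in a few lines of Python. The tier names and sleep calls below are hypothetical stand-ins for real work:

```python
import time
from contextlib import contextmanager

timings = {}  # elapsed seconds attributed to each tier of one transaction

@contextmanager
def tier(name):
    """Attribute the wall-clock time of a code section to a named tier."""
    start = time.monotonic()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + (time.monotonic() - start)

# Hypothetical transaction: each sleep stands in for real work in that tier.
with tier("web"):
    time.sleep(0.02)
with tier("app-server"):
    time.sleep(0.05)
with tier("db2-mainframe"):
    time.sleep(0.15)

total = sum(timings.values())
for name, secs in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name:>15}: {secs * 1000:6.1f} ms ({secs / total:5.1%})")
```

A real APM agent captures this breakdown automatically for every transaction; the point is simply that once time is attributed per tier, the slowest hop, here the mainframe call, becomes obvious.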

Conclusion

Far from being the monolithic, isolated, back-end systems they’re often perceived as, today’s mainframes play a crucial role in supporting positive user experiences. Today’s customers are extremely fickle and will move on to the competition at the drop of a hat if Web application performance is subpar. To address this reality, organizations must deliver exceptional Web application experiences while keeping costs in line, and the mainframe presents tremendous opportunities for doing that.

But organizations must be able to find these opportunities, and that’s where a new generation of APM comes into play. By combining user experience monitoring with deep-dive diagnostics that span the full application delivery chain, organizations can uncover precious mainframe optimization opportunities that bolster overall application performance and improve the bottom line.
