March 25, 2013

Embracing a New Generation of APM Strategies

by Thomas Fisher in Enterprise Tech Journal

The mainframe has long been a stable, reliable and consistent platform for high-value, critical transactions such as Enterprise Resource Planning (ERP), online order-taking and financial transaction processing. Mainframes today continue to touch most transactions worldwide. According to Independent Assessment, an industry authority on mainframe computing, 72 percent of the world’s financial transactions are processed on mainframes.

In recent years, and as a result of the “self-service” explosion on the Web, we’ve seen the mainframe’s role evolve. Today’s mainframes support customer-facing applications and play a more pivotal role in the overall performance—speed and reliability—of the user experience. Subtle mainframe optimizations can exert an enormously positive, measurable impact, and given how demanding today’s users are, organizations are remiss if they don’t include the mainframe as part of their Application Performance Management (APM) efforts.

This article explores the changing role of the mainframe and why it’s critical that organizations include the mainframe in their APM strategy. We’ll also discuss how trends such as the rise in mobility and software modularization are impacting mainframes, and how a new generation of APM helps isolate and fix mainframe performance bottlenecks—before they impact your users.

The Mainframe’s Role in the Application Delivery Chain

It used to be that mainframe applications didn’t interact directly with customers. Consider the banking industry. Banking customers once walked into a bank branch to request their account balance, and a teller would use a mainframe-based application to access the customer’s data, process the transaction and convey the balance to them.

Today, users check their accounts directly and conduct basic transactions via the Web, and increasingly the mobile Web. To complete such transactions, mainframe-based applications are still used to access customer data and process transactions, but they’re part of a much larger, more complex set of systems. Requests from millions of users hit the mainframe daily, which impacts mainframe resources and demands exceptionally high levels of mainframe speed and availability.

This is all thanks to the “Google Effect.” Today’s users have extremely high performance expectations; they expect every Website and application they interact with to be as fast and reliable as Google. Evidence of the Google Effect can be seen in the retail industry. An Aberdeen Group report, “When Seconds Count,” found that every added second of response time above 2 seconds resulted in a seven percent reduction in conversions. Conversely, for e-commerce applications, any improvement in response time increases revenue.
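
To put that finding in concrete terms, the following sketch (in Python) estimates how added latency erodes conversions. Only the seven-percent-per-second figure comes from the Aberdeen report; the visitor and conversion-rate numbers are hypothetical.

```python
# Illustrative only: estimates conversion loss from added latency using
# the Aberdeen figure of 7 percent per second above 2 seconds.
# Visitor and conversion-rate numbers are hypothetical.

BASELINE_SECONDS = 2.0   # response time above which the penalty applies
LOSS_PER_SECOND = 0.07   # 7 percent conversion loss per added second

def estimated_conversions(visitors, base_rate, response_seconds):
    """Apply the per-second latency penalty to a base conversion rate."""
    delay = max(0.0, response_seconds - BASELINE_SECONDS)
    penalty = min(1.0, delay * LOSS_PER_SECOND)
    return visitors * base_rate * (1.0 - penalty)

# Hypothetical site: 100,000 daily visitors, 3 percent base conversion rate.
fast = estimated_conversions(100_000, 0.03, 2.0)  # 3,000 conversions
slow = estimated_conversions(100_000, 0.03, 5.0)  # 21 percent penalty
print(f"At 2s: {fast:.0f} conversions; at 5s: {slow:.0f}; lost: {fast - slow:.0f}")
```

Even at a modest 3 percent conversion rate, three extra seconds of latency costs this hypothetical site hundreds of conversions every day.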

Years ago, Websites and Web applications were much simpler. Web pages were static, and Web applications consisted of basic, multi-step processes in which all functionality was contained in a single page. Today’s composite Websites and Web applications are much more complex, pulling in services from multiple external third parties to enrich the user experience and provide advanced functionality.

Today, the Web pages and Web applications originating in the data center must traverse a long, complex path consisting of Web servers, content delivery networks, regional and local Internet Service Providers (ISPs) and numerous other network elements en route to users. Collectively, these elements are known as the application delivery chain, which comprises numerous variables both inside and beyond an organization’s firewall. The user experience depends on all these elements working together; a bottleneck anywhere in the chain can degrade the user experience and put revenue at risk.

The key to gaining greater control over these elements, both internal and external to your data center, is continuous performance monitoring from the user perspective. This approach, the hallmark of a new generation of APM, identifies user performance problems early. It then pinpoints, with precision, opportunities for optimization across the full application delivery chain, with deep insight into how Web and mobile applications are affecting the mainframe.
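
As a rough illustration of what “monitoring from the user perspective” means in practice, this minimal Python sketch times one full HTTP fetch the way a user’s browser would experience it and flags a breach of a threshold. The URL and threshold are hypothetical; a commercial APM suite would probe key transactions from many geographic vantage points and break the timing down by tier.

```python
# A minimal sketch of synthetic, user-perspective monitoring: time one
# full HTTP fetch and flag it if it breaches a threshold. The URL and
# threshold below are hypothetical stand-ins.
import time
import urllib.request

URL = "https://www.example.com/"  # stand-in for a key transaction page
THRESHOLD_SECONDS = 2.0           # hypothetical alerting threshold

def timed_fetch(url):
    """Return the seconds taken to fetch the full response body."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()  # read the whole body, as a browser would
    return time.perf_counter() - start

elapsed = timed_fetch(URL)
verdict = "OK" if elapsed <= THRESHOLD_SECONDS else "SLOW: check the delivery chain"
print(f"{URL} took {elapsed:.2f}s [{verdict}]")
```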

The Advent of Mobile

The self-service trend on the Web and mobile devices has renewed the emphasis on mainframes because they support applications that are increasingly customer-facing. The surge in mobile Web users has generated more traffic, and that traffic is random and unpredictable.

In the past, organizations planned for known peak traffic periods, such as retailers preparing for Cyber Monday. Starting last year, the trend toward tablets and “couch commerce” began to shift this paradigm. Retailers began to see greater spikes of traffic on Thanksgiving evening, as customers shopped from the comfort of their sofas after Thanksgiving dinner. The “anytime, anywhere” nature of mobile devices means traffic is more sporadic. At any given time, you could have thousands of transactions touching the mainframe as users process credit card information, search for products or check account balances online. Exceptional infrastructure performance—and insight into how all application tiers, including third-party services, are affecting the mainframe—is required continuously.

The Modularization of Software and Applications

The drive to bring Web and mobile applications to market faster and more cost-efficiently is prompting increased adoption of third-party services. Organizations often choose to leverage existing functionality from external third parties rather than developing it on their own. Some of these services (e.g., social media plug-ins, product tours, and ratings and reviews) sit on the front-end and serve to create a richer, more satisfying user experience. Others, such as credit card verification, provide critical functionality and reach all the way to the back-end of the data center and the mainframe to complete transactions. This modularization of software has many benefits, but the problem with plugging in someone else’s code is that it might not be optimized for your environment, specifically your mainframe.

Avoiding the Perfect Storm

The sharp rise in mobile Web access, combined with increasing software modularization, can add up to a perfect storm from a performance perspective. Many organizations don’t find this out until it’s too late: users are complaining on Facebook or Twitter, or the organization has lost significant business to competitors because of slow application response times.

To mitigate this threat, organizations need to be prepared by proactively monitoring the user experience and finding opportunities for optimization across the complete application delivery chain. Given the high volume of mainframe transactions, even slight mainframe performance optimizations can have a huge impact.

The Performance Impact of the Mainframe

Consider a leading financial services institution in the U.S. with more than a million visitors to its retail online banking Website on any given day. Performance monitoring tools detected a significant problem: The typical response time for a page load was between 2 and 3 seconds, but it had risen to an average of 19 seconds for all site visitors. Advanced diagnostics helped the firm isolate the source of the issue to a particular DB2 region on the mainframe; all the application servers calling this region were affected.

By comparing current performance levels to a historical baseline, the firm saw that average response time in DB2 had risen from 3 milliseconds to 5 milliseconds. While this may seem like a trivial increase, it caused the 3.32 million transactions conducted during this timeframe to slow drastically at users’ browsers; some sessions were even timing out. After identifying, isolating and addressing the source of the problem, database and mainframe administrators restored response times to normal levels. Given the sheer number of transactions affected, the impact on customer satisfaction and the overall business was enormous.
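
The arithmetic behind that amplification is worth sketching. A single page load can fan out into many serialized DB2 calls, so a per-call increase multiplies. In the Python sketch below, the calls-per-page figure is hypothetical, chosen only to show the mechanism; real degradation is typically worse once queuing kicks in.

```python
# Illustrative only: how a 2 ms per-call DB2 increase amplifies.
# The calls-per-page count is hypothetical; under load, queuing and
# timeouts typically make the real user impact far worse than this
# simple serialized sum suggests.

CALLS_PER_PAGE = 200  # hypothetical serialized DB2 calls per page load

def page_db2_time_ms(per_call_ms, calls=CALLS_PER_PAGE):
    """DB2 time contributed to one page load, in milliseconds."""
    return calls * per_call_ms

before = page_db2_time_ms(3)  # 600 ms of DB2 time per page
after = page_db2_time_ms(5)   # 1,000 ms of DB2 time per page
print(f"DB2 time per page: {before} ms -> {after} ms")

# Aggregate extra work across the 3.32 million affected transactions,
# which drives queuing once capacity saturates:
extra_hours = 3_320_000 * (5 - 3) / 1000 / 3600
print(f"Added DB2 work across 3.32M transactions: {extra_hours:.1f} hours")
```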

It used to be that the mainframe was a black box; organizations couldn’t see how distributed applications were impacting mainframe performance. A new generation of APM gives organizations granular visibility into the transactions going on inside the mainframe, how an application is spending its time and what may be taking too long, combined with deep analysis of application code and logic. Armed with this information, organizations can better work with third-party service providers to ensure their services are tuned for the mainframe environment. Optimizing time-consuming transactions also helps organizations save on MIPS and mainframe resources, an important consideration given the high expense of mainframe processing time.
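
As a simplified illustration of how such baselining can surface the slow tier, the following Python sketch compares current per-tier timings against historical values and flags the outlier. The tier names and timings are hypothetical; real APM tools collect these measurements from instrumented transactions.

```python
# A minimal sketch of baselining per-tier response times and flagging
# the tier that deviates. Tier names and timings are hypothetical.
import statistics

# Historical per-tier timings in milliseconds (hypothetical baseline data).
baseline = {
    "web_server": [40, 42, 38, 41, 39],
    "app_server": [120, 118, 125, 121, 119],
    "db2_region": [3, 3, 4, 3, 3],
}

# Current observed timings for one slow transaction.
current = {"web_server": 41, "app_server": 122, "db2_region": 5}

for tier, history in baseline.items():
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    deviation = (current[tier] - mean) / stdev if stdev else 0.0
    flag = "  <-- investigate" if deviation > 3 else ""
    print(f"{tier}: baseline {mean:.1f} ms, now {current[tier]} ms{flag}")
```

Run against these numbers, only the DB2 region is flagged: a jump from roughly 3 ms to 5 ms is small in absolute terms but far outside its normal variation, which is exactly how the firm in the example above isolated its problem.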

Conclusion

Far from being the monolithic, isolated, back-end systems they’re often perceived as, today’s mainframes play a crucial role in supporting positive user experiences. Today’s customers are extremely fickle and will move on to the competition at the drop of a hat if Web application performance is subpar. To address this reality, organizations must deliver exceptional Web application experiences while keeping costs in line, and the mainframe presents tremendous opportunities for doing that.

But organizations must be able to find these opportunities, and that’s where a new generation of APM comes into play. By combining user experience monitoring with deep-dive diagnostics that span the full application delivery chain, organizations can uncover precious mainframe optimization opportunities that bolster overall application performance and improve the bottom line.