IT plays a central role in today’s enterprise. That’s a given. But not all IT organizations operate the same way, and those that perform well increase the productivity and effectiveness of lines of business, giving their companies a competitive edge in the marketplace. While cloud and virtualization have been heralded as panaceas that would end enterprise dependence on the mainframe, more often than not they rely on the mainframe to complete transactions and are actually driving up its usage. In tandem with this rise in usage, customer expectations are soaring. Today’s customers are technologically savvy, armed with powerful computing devices such as tablets and smartphones, and accustomed to speed and high performance. They’re also more fickle and will move to a competitor if they’re unhappy with a service, so performance is more critical than ever.
Yet ensuring these new applications and services run smoothly and efficiently is an extremely complex task. Mainframes weren’t originally designed to interact with customer-facing applications, and mobile was still an imagined future. Now that the old and new worlds have collided, managing these environments and ensuring consistent performance can seem insurmountable. Yet failure is unthinkable with reputations, relationships and revenues at stake. Despite the dynamically changing nature of the IT environment, many companies have failed to adapt their approach to monitoring and managing the performance of their mainframe application environments. IT teams are still siloed, and visibility across the application delivery chain is poor. As a result, costs are escalating, inefficiencies are seeping through the cracks, and companies are regularly forced into war-room situations that drain time and resources, slow mean time to resolution and delay innovation across the business.
These issues and challenges were highlighted in a Compuware-commissioned study of 350 CIOs from enterprise organizations. The study examined the impact of new technologies and trends on the mainframe application environment. The findings presented here provide analysis of the data and an overview of potential solutions.
Key highlights from this study include:
• More than half (55 percent) of enterprise applications call upon the mainframe to complete transactions.
• Eighty-nine percent of CIOs stated mainframe workloads are increasing and getting more varied.
• Ninety-one percent of CIOs say high customer expectations are increasing the pressure on the mainframe to perform.
• Eighty-seven percent of CIOs believe complexity is creating new risks in relation to application performance.
• Seventy percent of CIOs say mobility has increased MIPS consumption by more than a quarter since their mainframes began interacting with mobile applications.
• Sixty-three percent of companies are unaware of application problems until calls start coming into the helpdesk.
• Eighty percent of IT departments are fire-fighting performance problems in war-room scenarios on a monthly basis.
New Workloads, New Challenges
Contrary to the popular myth that mainframe usage is in decline, in reality it’s in heavier use today than ever before and performing a wider variety of tasks. The research showed that more than half (55 percent) of enterprise applications call upon the mainframe to complete transactions, with one in 20 relying on the mainframe to complete all their transactions (see Figure 1). These applications aren’t just the traditional back-end systems with which the mainframe made its name; 89 percent of CIOs said the mainframe is now running more new and different workloads compared to five years ago (see Figure 2). As a result, CIOs estimate that distributed applications have increased the mainframe workload by an average of 44 percent over the past five years (see Figure 3).
Distributed applications aren’t only forcing the mainframe to work harder due to the surge in demand, there’s further pressure for it to deliver these services at even faster speeds. The smartphone generation of consumers expect services to launch in seconds; if they don’t, dissatisfaction is often expressed very swiftly, scathingly and publicly. As a result, CIOs are feeling the pressure: 91 percent said now that customer-facing applications are using the mainframe, performance expectations of the mainframe have increased (see Figure 4). However, meeting these expectations isn’t straightforward.
Integrating new applications with older mainframe applications adds layers of complexity to the application delivery chain, and greater complexity invites more risk: 87 percent of CIOs agreed that the integration of new technologies into the mainframe application environment is creating complexity and business risks that previously didn’t exist (see Figure 5). In particular, CIOs highlighted lost revenue (48 percent), loss of employee productivity (47 percent) and brand/reputation damage (43 percent) as being the top three performance-related risks to their business (see Figure 6).
This added complexity is also increasing hidden costs. The single largest cost associated with the mainframe is the processing capacity it consumes, measured in millions of instructions per second (MIPS). As mainframe usage increases, it’s logical that MIPS consumption rises as well. The research shows that mobile in particular is driving a significant rise in consumption, having increased MIPS usage by an average of 41 percent, with a small yet significant 2 percent of respondents saying consumption had more than doubled as a result of the introduction of mobile (see Figure 7). Yet these figures could be misleading: it isn’t only an increase in usage that’s driving MIPS consumption; inefficiencies are also making the mainframe work harder than it needs to. The research shows that a staggering 68 percent of developers creating new distributed applications have very limited understanding of the mainframe (see Figure 8). Many CIOs (70 percent) are concerned that this lack of mainframe skills and understanding is leading to inefficient coding of distributed applications, increasing MIPS consumption and impacting performance (see Figure 9).
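One common form of the inefficient coding described above is “chatty” distributed code that invokes a separate mainframe transaction for every record instead of batching requests. The sketch below is purely illustrative: the `MainframeGateway` class is a hypothetical stand-in for a real mainframe connector, used here only to count how many transactions each pattern triggers.

```python
# Illustrative sketch (hypothetical API): why chatty distributed code inflates
# mainframe workload. Each gateway call stands in for one transaction the
# mainframe must execute, and each transaction consumes MIPS capacity.

class MainframeGateway:
    """Stand-in for a real mainframe connector; counts transactions invoked."""
    def __init__(self):
        self.transactions = 0

    def lookup(self, account_ids):
        # One transaction per call, regardless of how many accounts are passed.
        self.transactions += 1
        return {acct: {"balance": 0} for acct in account_ids}

accounts = [f"ACCT{i:04d}" for i in range(500)]

# Inefficient: one mainframe transaction per account (500 transactions).
gw_chatty = MainframeGateway()
for acct in accounts:
    gw_chatty.lookup([acct])

# Efficient: batch the whole request into a single transaction.
gw_batched = MainframeGateway()
gw_batched.lookup(accounts)

print(gw_chatty.transactions, gw_batched.transactions)  # 500 1
```

The same business result is produced either way; the difference is 500 mainframe transactions versus one, which is exactly the kind of avoidable workload CIOs say inefficient distributed coding is generating.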
A New Approach to Performance Challenges Is Needed
Despite a clear understanding of the risks and costs associated with this new hyper-distributed world, the research shows that companies are still trying to combat new threats with old defenses. As the mainframe environment has matured and diversified, for many, performance monitoring has not. Eighty-nine percent of companies still rely on infrastructure tools that aggregate data or report averages to monitor the performance of their IT (see Figure 10), and only a fifth of companies (21 percent) use next-generation business transaction monitoring, leaving the majority (79 percent) with no visibility into the actual end-user experience (see Figure 11).
Yet relying on averages and internally facing monitoring tools gives companies only a snapshot of performance. While these approaches provided a solid indication of performance when the mainframe was performing less complex tasks, today’s applications rely on a number of different links to form the customer experience. Even when everything appears to be working fine internally and the green lights are on, certain transactions might be failing, creating bottlenecks and frustration for the customers affected. As a result, many companies (63 percent) are often unaware of performance problems until calls start coming into the helpdesk (see Figure 12).
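The arithmetic behind this point is easy to demonstrate. In the illustrative example below (invented numbers, not from the study), 95 transactions complete quickly while five hit a timeout; the average still looks tolerable on a dashboard even though 5 percent of customers are stuck.

```python
# Illustrative sketch: why averages mask failing transactions.
# 95 requests complete in 200 ms; 5 hit a 5-second timeout.
response_ms = [200] * 95 + [5000] * 5

mean = sum(response_ms) / len(response_ms)
slow = sum(1 for t in response_ms if t >= 5000)

print(f"mean={mean:.0f}ms")   # mean=440ms -- looks acceptable in aggregate
print(f"failing={slow}")      # failing=5  -- yet 5% of customers are blocked
```

This is why per-transaction monitoring matters: the aggregate metric hides exactly the customers who are about to call the helpdesk.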
This reliance on outdated performance monitoring is hindering organizations’ ability to prevent performance problems before they take root, meaning IT departments are constantly fire-fighting problems after they’ve already begun to cause disruption. This impacts IT’s ability to deliver, as teams are always on the back foot, troubleshooting problems. It’s no surprise, therefore, that 75 percent of CIOs feel under extra pressure to reduce mean time to resolution on application performance problems (see Figure 13). However, the added complexity of the new hyper-distributed application environment means that isolating the cause of performance problems is increasingly difficult: 74 percent of CIOs believe the cross-platform complexity created by the combined use of mainframe and distributed environments in the application delivery chain is slowing down problem resolution (see Figure 14).
Eliminate War Rooms
When you look at how an enterprise typically deals with a problem when it occurs, you can see why problem resolution in the new mainframe application environment is such a lengthy process. When an application starts to falter, companies often have to call all the different parts of the IT team together to identify and resolve the issue, a situation commonly referred to as a war room. The research shows that IT departments are being forced into these war-room situations on a regular basis: the majority of respondents (52 percent) said their IT departments are tied up in war rooms at least once a week (see Figure 15), with an alarming 39 percent having to do this more than once a week, or even daily (9 percent). When you consider that each of these sessions ties up an average of nine employees (see Figure 16), war rooms can have a major impact on an IT department’s ability to innovate and perform its core role.
All these issues put together are creating a major drain on today’s enterprise. The mainframe is working harder than ever before, due to the increasing number of distributed and mobile applications calling upon it. Added to this, rising customer expectations are putting pressure on IT teams to deliver a seamless experience within seconds, every time. Also, the varied nature of new workloads is increasing the complexity of the mainframe application environment, meaning there are more links in the chain that could break.
Traditional approaches to performance management are no longer effective, meaning companies are unable to take a proactive approach to ensuring an excellent user experience. As a result, by the time the company is aware of an issue, it has often already started to impact staff and customers. Yet troubleshooting these problems isn’t as straightforward as it once was. With no visibility into how distributed and mainframe applications interact, distributed teams can’t track performance into the mainframe. And mainframe teams are blind to the impact of distributed code on mainframe transactions and workload. In addition, distributed developers may not realize how their code impacts the mainframe, all of which results in inefficient coding and rising costs, as well as slower mean time to resolution when problems do occur.
This is why a new approach to mainframe application performance management is required. Companies need to move from a reactive to a proactive approach, identifying and rectifying problems before they affect users. To do this, they need total end-to-end visibility of the entire application delivery chain, from the end user right through to the mainframe. Rather than taking snapshots of performance through averages, companies should have dynamic, real-time data on the health of their IT, combined with deep application transaction management that can delve down to the individual line of code, giving IT teams a much clearer view of where and why a problem has occurred. Similarly, by correlating data across the entire application delivery chain, companies can bridge the technology silos within their organization and improve mean time to resolution.
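One widely used mechanism for the cross-chain correlation described above is tagging each user transaction with a shared correlation ID that travels with it through every tier. The sketch below is a simplified, hypothetical illustration (the `record` helper and the in-memory `trace` list stand in for a real monitoring backend, and the timings are invented) of how such correlated data lets a team pinpoint the slow hop without a war room.

```python
# Illustrative sketch (hypothetical helpers): correlating one user transaction
# across tiers with a shared ID, so the slowest hop can be identified directly.
import uuid

trace = []  # stand-in for a real APM/tracing backend

def record(corr_id, tier, elapsed_ms):
    """Log one tier's contribution to a transaction (timings invented)."""
    trace.append({"id": corr_id, "tier": tier, "ms": elapsed_ms})

def handle_request():
    corr_id = str(uuid.uuid4())          # minted at the edge, propagated everywhere
    record(corr_id, "web", 12)
    record(corr_id, "app-server", 35)
    record(corr_id, "mainframe", 480)    # the costly hop is immediately visible
    return corr_id

cid = handle_request()
slowest = max((e for e in trace if e["id"] == cid), key=lambda e: e["ms"])
print(slowest["tier"])  # mainframe
```

Because every tier’s timing carries the same ID, the distributed team and the mainframe team are looking at one shared record of the transaction rather than separate, siloed dashboards.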