IT Management

There may be some dirty little secrets lurking in the halls of your enterprise, specifically in your IT department. What you don’t know, or fail to acknowledge, can really hurt your IT operation and your business.

These secrets are a byproduct of a long-standing cultural divide between mainframe and distributed developers. Mainframes and distributed systems, once treated as separate entities, are now rapidly converging. As a result, people from two very different cultures, mainframe and distributed, must come together to architect, develop and manage the resulting unified infrastructure. Left unaddressed, these secrets could end up costing a company dearly in wasted MIPS, poor application experiences for customers and time spent troubleshooting.

This article explores each of these secrets in detail: how they can threaten the health of IT services and the business, and why organizations need to “turn on the lights” to see how distributed applications are impacting the mainframe and vice versa. Ultimately, this can help close the gap and foster greater collaboration between mainframe and distributed developers, leading to more cost-effective, better-performing applications.

Dirty Little Secret #1

Distributed teams often don’t know which components make up the application delivery chain. Today’s applications are delivered by a complex set of systems and services both within and beyond the firewall, collectively known as the application delivery chain. Consider, for example, an end user accessing an online e-commerce application. Most likely, each transaction flows from the browser, across the Internet and the cloud, through the multiple tiers of the data center and all the way back to the mainframe, which processes the order.

In the past, mainframe computing supported only back-end systems of record, while distributed computing evolved to support customer-facing systems of engagement. For years, the mainframe has been managed as an island, but that can no longer be the case. As the mobile Web explodes, mainframes are increasingly “touched” as end users check account balances, book travel and conduct e-commerce transactions on a 24x7 basis. The mainframe is a critical player in the modern Web application delivery chain, and yet distributed and mainframe developers often don’t even speak the same “tech” language, let alone collaborate. In some extreme cases, we’ve seen distributed developers who were completely unaware that their organization even had a mainframe.

Dirty Little Secret #2

Distributed teams don’t understand how traffic loads impact mainframe volume and costs. Distributed developers who don’t understand the mainframe’s role often hold the erroneous belief that it’s an endless bucket of resources. They drive more and more transaction volume onto the mainframe and then wonder why their transactions are taking longer. Often, these developers aren’t even looking at the traffic they generate; they’re writing code to a functional specification with no thought for mainframe performance. As a result, real opportunities to save MIPS are missed.

Consider the example of a developer who accesses the mainframe to get a list of customers, then executes an individual SQL statement for each customer to retrieve that customer’s address. The obvious question: why didn’t the developer simply retrieve the addresses with the same SQL statement that obtained the customer list? Thousands of SQL statements were likely invoked when they didn’t need to be, resulting in a lot of wasted MIPS.
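
To make the pattern concrete, here’s a minimal sketch in Java over JDBC, with a hypothetical schema (CUSTOMER, CUST_ADDRESS, CUST_ID, ADDRESS) standing in for the real one. The first method is the per-customer anti-pattern; the second folds the same work into a single join.

    import java.sql.*;

    public class CustomerAddressExample {

        // Anti-pattern: one query for the customer list, then one more query per customer.
        // Every pass through the loop sends another SQL statement to DB2.
        static void fetchAddressesOneByOne(Connection conn) throws SQLException {
            try (Statement listStmt = conn.createStatement();
                 ResultSet customers = listStmt.executeQuery("SELECT CUST_ID FROM CUSTOMER")) {
                while (customers.next()) {
                    try (PreparedStatement ps = conn.prepareStatement(
                            "SELECT ADDRESS FROM CUST_ADDRESS WHERE CUST_ID = ?")) {
                        ps.setInt(1, customers.getInt("CUST_ID"));
                        try (ResultSet address = ps.executeQuery()) {
                            while (address.next()) {
                                // process the address ...
                            }
                        }
                    }
                }
            }
        }

        // Better: one statement with a join, so a single round trip returns
        // every customer together with its address.
        static void fetchAddressesWithJoin(Connection conn) throws SQLException {
            String sql = "SELECT C.CUST_ID, A.ADDRESS "
                       + "FROM CUSTOMER C JOIN CUST_ADDRESS A ON A.CUST_ID = C.CUST_ID";
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    // process each customer and address in one pass ...
                }
            }
        }
    }

With the first version, a list of 10,000 customers drives 10,001 SQL statements to DB2; with the join, it drives one.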

Another example is an insurance company that inserted every submitted policy quote into a DB2 table; at the end of the day, a DB2 batch job had to be run to clean out the bad quotes. A simple two-line fix could have prevented the bad quotes from entering the system in the first place. Distributed teams often misuse the mainframe in this way, but there’s no easy way of identifying or proving it.
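
Here’s one hedged sketch of what such a fix might look like, again in Java, with an illustrative POLICY_QUOTE table, Quote record and validation rule standing in for whatever the real system uses: validate the quote before writing it, so bad quotes never reach DB2 and the end-of-day cleanup job becomes unnecessary.

    import java.math.BigDecimal;
    import java.sql.*;

    public class QuoteSubmission {

        // Illustrative quote shape; the real system's fields will differ.
        record Quote(int id, BigDecimal premium) {
            boolean isValid() {
                // Stand-in validation rule: a quote needs a positive premium.
                return premium != null && premium.signum() > 0;
            }
        }

        // The spirit of the "two-line fix": check the quote before it reaches DB2,
        // so bad quotes never land in the table and no end-of-day batch purge is needed.
        static void submitQuote(Connection conn, Quote quote) throws SQLException {
            if (!quote.isValid()) {
                return; // rejected up front instead of cleaned out in batch later
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO POLICY_QUOTE (QUOTE_ID, PREMIUM) VALUES (?, ?)")) {
                ps.setInt(1, quote.id());
                ps.setBigDecimal(2, quote.premium());
                ps.executeUpdate();
            }
        }
    }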

Everyone agrees that a poorly performing mainframe application costs significantly more than an efficient, properly tuned application. Yet, mainframe and distributed developers often have diametrically opposed “conserve vs. consume” mindsets. People in the mainframe community have grown up in an environment where CPU time is charged back to the developers’ department. As a result, conservation of computing resources is a key issue for mainframe developers.

Compare this to the distributed environment, in which resources have been relatively inexpensive. Distributed developers typically have no concept of chargeback. People in this environment have grown up with a mindset of simply adding resources rather than spending development time minimizing resource consumption. As a result, distributed applications usually aren’t designed with resource conservation in mind.

Distributed developers aren’t the only culprits here. Third-party software offerings that use DB2 and z/OS back ends can be some of the biggest wasters of MIPS. Take the example of an accounting package that sent many unnecessary SQL statements to DB2. Only after many long days of combing through detailed system data on why DB2 was consuming more MIPS did the DBAs discover that the vendor’s package was causing the increase. When it was brought to the vendor’s attention, the reply was that the product was “performing as designed.” Designed, that is, to use more MIPS. Third-party software providers, at least this one, are missing a critical chance to help their customers save money.

Dirty Little Secret #3

IT staffers don’t understand how an application flows from end to end. Ideally, distributed and mainframe developers should be united around the common goal of bringing the highest-performing, most cost-effective applications to market. But typically, IT staffers are so focused on their own silos and areas of responsibility that they lack the visibility needed to meet this goal.

Just as distributed developers often don’t consider the cost impact of their mainframe traffic loads, distributed and mainframe developers alike often lack visibility into how an application flows from end to end. It’s easy to understand why. Transactions designed in the ’80s are still in use today, but in entirely new ways, and a transaction can take a different path every time it executes. IT is therefore unable to see how a bottleneck in one transaction or server might impact another and, ultimately, hamper overall application performance. With so many variables in the application delivery chain, entire IT teams are hamstrung in their ability to quickly identify the root cause of performance problems. The result is a lot of time wasted on troubleshooting and finger-pointing, which can cost a business significantly in lost sales, lost productivity and customer dissatisfaction.

Conclusion

For too long, the mainframe has been managed in isolation. Modern Web applications are elevating the mainframe’s role in the enterprise application ecosystem. As the line between distributed and mainframe environments increasingly blurs, there needs to be a way to bridge the gap between the respective groups of developers, culturally and technologically. Distributed developers need to better understand how their work volumes impact the mainframe. Likewise, mainframe developers need to seek out opportunities for optimization that drive performance improvements for distributed applications.

The message is clear: Throwing more MIPS at applications isn’t always the answer, and “business as usual” isn’t going to work anymore. Organizations must shine a light on their secrets or risk paying a heavy cost. A common platform that enables developers to understand the inner workings of applications and quickly uncover bottlenecks across the complete application delivery chain, including the mainframe, can be tremendously useful.