Dec 1 ’09

Overcoming Complacency: The High Cost of No Decision

by Denise P. Kalm in Mainframe Executive

Market uncertainty, strained resources, and “innovation fatigue” are leading many IT organizations to fall into the trap of simply maintaining the status quo. But complacency is often far more expensive and dangerous than it appears to be. This article explores the downside of a “no decision” culture and argues for continued, proactive cost-benefit analysis of various opportunities for worthwhile mainframe innovation.

The High Cost of Inaction

Inaction can be hideously expensive. Consider the replacement of the San Francisco Bay Bridge. In 1997, the estimated cost was $2.6 billion. Nine years later, after extensive debates and delays, the projected cost had soared to $8.3 billion, and sources (see http://baybridgeinfo.org/) indicate the project still isn’t scheduled to be complete until 2011. So, indecision had a hard cost of $5.7 billion, as well as the significant soft costs of years of inadequate transportation infrastructure. In contrast, in 2008, South Carolina built a bridge of similar style, size, and scope for only $632 million; they made a decision and kept to it.
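The arithmetic of delay is worth making explicit. A back-of-envelope sketch, using only the figures above (and ignoring inflation and scope changes over the nine years):

# Hard cost of delay on the Bay Bridge project, in billions of dollars.
estimate_1997 = 2.6   # original 1997 estimate
estimate_2006 = 8.3   # projected cost nine years later

hard_cost_of_delay = estimate_2006 - estimate_1997
print(f"Hard cost of indecision: ${hard_cost_of_delay:.1f} billion")
# -> Hard cost of indecision: $5.7 billion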

The tragedy of 9/11 can be viewed similarly. The 9/11 Commission Report states that “The missed opportunities to thwart the 9/11 plot also were symptoms of a broader inability to adapt the way government manages problems to the new challenges of the 21st century. Computer systems weren’t interlinked, so action officers couldn’t draw on all available information about Al Qaeda.” In other words, institutional complacency played a big part in leaving the country exposed to serious security risks that wound up having incalculable costs for the U.S. and the world.

IT isn’t immune to this phenomenon. When IT organizations inappropriately delay decisions, the organizations they serve incur significant hard and soft costs. So, maintaining the status quo is never an inherently “safe” approach. It can be extremely risky.

Causes of Mainframe Complacency

IT has changed dramatically in recent years. If an organization’s processes, people, and technology don’t adapt accordingly, the business won’t be able to compete successfully with those that have. New mainframe solutions give IT the ability to get maximum value from mainframe investments in a Web-centric, cross-platform world. But IT organizations can’t make the transition to this new approach to mainframe ownership if they’re locked into a no-decision culture.

One of the main causes of such a culture is simple inertia. Evaluation, procurement, and deployment processes for a new product take time and manpower. People often prefer to stay with the known rather than face a steep learning curve, even if the new product is superior. So, no decision occurs because the old is “good enough.”

Other causes of a status quo culture are disillusionment with the promises of vendors and concerns about making a mistake. So, organizations often stand pat with existing software or whatever scripts and processes they may have developed in-house.

The problem, of course, is that mainframe products that were sophisticated in the ’90s may since have been supplanted by products that deliver greater benefits at a lower cost of ownership. So, IT organizations that avoid or defer decision-making forego these potential benefits and cost savings.

Failing to periodically reassess products once deemed good enough is highly problematic. Considerable opportunity cost can be incurred by not taking advantage of new technology, or even by simply neglecting to implement the latest releases of existing software.

Itemizing the Downside of Complacency

Innovation always comes at a price. To move forward, IT organizations must spend the time and money necessary to evaluate technologies, purchase products, and implement and integrate them. Not innovating also has a price that’s reflected in hardware resources, staffing costs, service availability, performance, and opportunity cost. Let’s consider each of these.

Hardware Resources

Software consumes hardware resources. When mainframes ran below capacity, IT didn’t think twice about spending CPU seconds to monitor the system or run management tools. If a tool was considered useful, it was implemented. Today, resource costs must be allocated more carefully because it’s essential to avoid, or at least delay, processor upgrades; such upgrades considerably drive up software costs, too. No one wants to pay for a processor upgrade just because of their management tools.

But no tool runs for free. Even a data collector takes some CPU and disk space, and network traffic and I/O are also concerns. The trick is to find the tool that provides the necessary functionality at the lowest possible price while minimizing resource consumption. So, tools that offer Lightweight Local Presences (LLPs) or that leverage System z Integrated Information Processor (zIIP) capacity can save an IT organization considerable money. Some tools further reduce mainframe resource utilization by offering User Interfaces (UIs) that run on PCs. With Graphical User Interfaces (GUIs) and Web and portal capabilities, new tools offer the additional benefit of sharing information more widely, which can help eliminate process “silos” and improve overall team productivity.
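To make this resource math concrete, consider a rough sketch like the one below. The capacity, price, overhead, and offload figures are purely illustrative assumptions, not benchmarks:

# Illustrative estimate of a monitoring tool's overhead cost on a mainframe.
# Every figure here is a hypothetical assumption for the sake of the arithmetic.
capacity_msu = 1000          # installed capacity, in MSUs (assumed)
cost_per_msu_year = 2500.0   # blended hardware + software cost per MSU-year (assumed)

tool_overhead_pct = 0.03     # tool consumes 3% of capacity (assumed)
ziip_offload_pct = 0.60      # 60% of that work is zIIP-eligible (assumed)

gp_msu = capacity_msu * tool_overhead_pct * (1 - ziip_offload_pct)
annual_cost = gp_msu * cost_per_msu_year

print(f"General-purpose MSUs consumed by the tool: {gp_msu:.0f}")
print(f"Approximate annual overhead cost: ${annual_cost:,.0f}")
# A tool that offloads more work to zIIPs shrinks gp_msu, and the bill, directly.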


Staffing Costs

Newer products can be easier to install and operate. GUIs make management of any platform more straightforward. This is an especially critical issue as some IT organizations lose their most experienced mainframe staff to retirement. As new, less experienced staff take over, long learning curves can expose the business to serious risk. Easy-to-use products that eliminate the need for a higher level of expertise, and that include helpful automation and prompts, are well worth their cost as the torch is passed to a new generation of mainframe professionals.

New solutions also help IT replace homegrown tools that can be difficult or impossible to maintain, rarely scale as needed, and are often highly resource-intensive. These homegrown utilities are frequently the brainchild of one key employee who may not be there when the next coding change is required. Older vendor products can have the same drawback: they may require historical expertise that will disappear with a single retiree.

Staffing costs typically surpass resource costs, so it’s essential to automate mainframe management as much as possible and keep IT’s most skilled staff free to focus on strategic tasks. This isn’t just prudent from a cost perspective; it’s also vital for retaining key employees, who feel more satisfied and empowered when they’re freed from routine tasks.
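A simple break-even sketch shows how quickly automation can pay for itself. The hours, hourly rate, and license cost below are hypothetical assumptions:

# Break-even sketch: automating a routine mainframe task vs. doing it by hand.
# All inputs are hypothetical assumptions.
hours_per_week_manual = 10    # skilled-staff hours spent on the routine task (assumed)
loaded_hourly_rate = 120.0    # fully loaded cost per staff hour (assumed)
annual_tool_cost = 25_000.0   # license plus maintenance for an automation tool (assumed)

annual_staff_cost = hours_per_week_manual * loaded_hourly_rate * 52
annual_savings = annual_staff_cost - annual_tool_cost

print(f"Annual staff cost of the manual task: ${annual_staff_cost:,.0f}")
print(f"Net annual savings from automating it: ${annual_savings:,.0f}")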

Service Availability

In today’s customer-facing, Web world, there’s no acceptable outage window. Continuous availability of business services is critical to revenue, brand value, and customer retention. Many older mainframe products were built before this level of availability became a vital consideration, and most scripts aren’t designed to accommodate all possible types of service interruption. When evaluating software, IT organizations must therefore ask hard questions about outage scenarios and be sure the answers meet their service-level requirements. As the cost of downtime increases, risk tolerance drops. All mainframe software solutions must be capable of keeping services running and supporting rapid recovery from any problem.
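To see why risk tolerance drops as downtime gets more expensive, consider the sketch below; the revenue-per-minute figure is an assumption:

# Annual downtime, and what it costs, at several availability levels.
# The revenue-per-minute figure is a hypothetical assumption.
revenue_per_minute = 1_000.0       # revenue lost per minute of outage (assumed)
minutes_per_year = 365 * 24 * 60   # 525,600 minutes

for availability in (0.999, 0.9999, 0.99999):
    downtime_min = minutes_per_year * (1 - availability)
    print(f"{availability:.3%} availability -> {downtime_min:,.0f} min/yr down, "
          f"~${downtime_min * revenue_per_minute:,.0f} lost")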


Performance

Performance requirements also have become more stringent. Superior performance provides a competitive advantage in a world of impatient customers and improves productivity for employees who are constantly multitasking. Older software may not respond quickly enough, or scale sufficiently, to meet these constituents’ demands.

If a software tool is difficult to use or slow, people won’t use it. That translates to wasted software dollars. With budgets tight, no IT organization can afford this type of “shelfware.”


Opportunity Cost

Businesses pay a high opportunity cost if they can’t go to market with a new business service or can’t effectively support growth in demand. Older software may not meet new business needs; it’s like driving a Model T on a modern freeway. Businesses assume significant risks when they try to achieve new standards of agility and service with antiquated mainframe management tools.

The inflexibility of older management tools also can be a problem in this regard. IT organizations may need to accommodate various types of business and technology change, such as a merger or the implementation of a Service-Oriented Architecture (SOA). This kind of change is better supported with holistic solutions that can interoperate and accelerate root cause analysis and problem resolution by presenting a “single version of the truth”—as opposed to older, compartmentalized tools that aren’t complementary.

Modern fighter jets are highly automated to compensate for the limits of human reaction time. The same should be true of the modern data center. By the time a problem hits the radar of operations, it’s too late; customers are already affected. Newer, more intelligent tools can handle such situations automatically, ensuring the business doesn’t incur the opportunity costs associated with unhappy customers and missed market windows.
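At heart, such automation is a tight monitor-and-act loop. Here is a minimal sketch; the metric source, threshold, and corrective action are all hypothetical stand-ins for what a real monitoring tool would provide:

import random
import time

THRESHOLD_MS = 500.0   # act before users notice (assumed threshold)
POLL_SECONDS = 5       # how often to sample (assumed)

def sample_response_time_ms() -> float:
    # Stand-in for a real data collector; here we simply simulate a reading.
    return random.uniform(100.0, 900.0)

def recycle_region(name: str) -> None:
    # Stand-in for a real corrective action, such as restarting a hung region.
    print(f"Auto-remediation: recycling {name}")

def watch(region: str, cycles: int = 5) -> None:
    for _ in range(cycles):
        if sample_response_time_ms() > THRESHOLD_MS:
            recycle_region(region)   # act on the machine's timescale, not ours
        time.sleep(POLL_SECONDS)

watch("CICSPROD")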

The “Right Stuff”

Competition occurs both within and between companies. In today’s global economy, companies can opt to outsource everything. The right software can be the key to achieving an edge over outsourcing alternatives. But what makes the right software the right software? How can IT best determine which new solutions are most worthwhile for both IT and the business?

One important factor is ease of use. GUIs should have easily recognized buttons and drop-down menus; these let more people use the tool, cross-train, and manage the system from wherever necessary.

Software also shouldn’t add to your systems management workload. Complex install processes, difficult-to-manage configuration scripts, and awkward maintenance procedures all translate to a tool that won’t be used, updated, or managed. The right tools will make these processes simple, making it easy to keep up-to-date and transition jobs between employees without a steep learning curve. This could mean avoiding niche products and opting instead for consolidation with a vendor who provides a well-thought-out portfolio of products that are based on common design principles.

Where possible, tools also should interoperate to give management teams a holistic view of the computing environment. This view speeds recovery and enables everyone to work more productively by eliminating redundant operations.

The right tools also will be reliable, with excellent vendor support. In some cases, vendors will complement their tools with professional services offerings that ensure effective implementation and provide knowledge transfer to ensure that an IT staff gets full value from the tool.
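One disciplined way to weigh such criteria against each other is a simple scoring matrix. The criteria weights and the 1-to-5 scores below are illustrative assumptions, not a recommendation:

# Weighted scoring sketch for comparing candidate tools against the criteria above.
# Weights and 1-5 scores are illustrative assumptions.
weights = {
    "ease_of_use": 0.25,
    "management_overhead": 0.20,
    "interoperability": 0.20,
    "reliability_and_support": 0.20,
    "resource_consumption": 0.15,
}

candidates = {
    "Incumbent tool": {"ease_of_use": 2, "management_overhead": 3,
                       "interoperability": 2, "reliability_and_support": 4,
                       "resource_consumption": 3},
    "New tool": {"ease_of_use": 5, "management_overhead": 4,
                 "interoperability": 5, "reliability_and_support": 4,
                 "resource_consumption": 4},
}

for name, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: {total:.2f} / 5")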

Some Important Questions

It’s understandable for IT organizations to retrench, to some degree, in times of uncertainty and constrained resources. But a pervasive culture of no decision is clearly unhealthy and potentially dangerous. IT organizations need to ask themselves several key questions: What are our current tools really costing us in hardware resources and staff time? Can they meet today’s availability and performance requirements? What opportunities are we forgoing by standing pat? And what risks do we assume each time we defer a decision?

Businesses don’t exist to simply save money. They exist to satisfy customers, capture new markets, and drive optimized financial returns to investors. A pervasive culture of no decision can’t effectively meet these objectives. IT organizations can’t afford to remain complacent about how they operate their mainframe environments. By innovating in this critical area, they can deliver greater value to the business and better support business change—while also controlling costs and boosting productivity. That’s why every IT leader should take proactive steps to combat the no decision mentality and discover new ways to improve mainframe operations. ME