In today’s customer-facing, Web world, there’s no acceptable outage window. Continuous availability of business services is critical to revenue, brand value, and customer retention. Many older mainframe products were built before this level of availability became a vital consideration, and most scripts aren’t designed to accommodate every possible type of service interruption. When evaluating software, IT organizations must therefore ask hard questions about outage scenarios and be sure the answers meet their service-level requirements. As the cost of downtime increases, risk tolerance drops. All mainframe software solutions must be capable of keeping services running and supporting rapid recovery from any problem.
Performance requirements have also become more stringent. Superior performance provides a competitive advantage in a world of impatient customers and improves productivity for employees who are constantly multi-tasking. Older software may not respond quickly enough, or scale sufficiently, to meet users’ demands.
If a software tool is difficult to use or slow, people won’t use it. That translates to wasted software dollars. With budgets tight, no IT organization can afford this type of “shelfware.”
Businesses pay a high opportunity cost if they can’t go to market with a new business service or if they can’t effectively support growth in demand. Older software may not meet new business needs. It’s like driving a Model T on a modern freeway. Businesses assume significant risks when they try to achieve new standards of agility and service with antiquated mainframe management tools.
The inflexibility of older management tools can also be a problem in this regard. IT organizations may need to accommodate various types of business and technology change, such as a merger or the implementation of a Service-Oriented Architecture (SOA). This kind of change is better supported by holistic solutions that interoperate and accelerate root cause analysis and problem resolution by presenting a “single version of the truth”—as opposed to older, compartmentalized tools that aren’t complementary.
Modern fighter jets are highly automated to overcome human reaction times. The same should be true of the modern data center. By the time a problem hits the radar of operations, it’s too late; customers are already affected. New, more intelligent tools can automatically handle such situations, ensuring the business doesn’t incur the opportunity costs associated with unhappy customers and missed market windows.
The “Right Stuff”
Competition occurs inside and between companies. In today’s global economy, companies can opt to outsource everything. The right software can be the key to achieving an edge over outsourcing alternatives. But what makes the right software the right software? How can IT best determine which new solutions are most worthwhile for both IT and the business?
One important factor is ease of use. GUIs should have easily recognized buttons and drop-down menus; these let more people use the tool, cross-train, and manage the system wherever necessary.
Software also shouldn’t add to your systems management workload. Complex install processes, difficult-to-manage configuration scripts, and awkward maintenance procedures all translate to a tool that won’t be used, updated, or managed. The right tools will make these processes simple, making it easy to keep up-to-date and transition jobs between employees without a steep learning curve. This could mean avoiding niche products and opting instead for consolidation with a vendor who provides a well-thought-out portfolio of products that are based on common design principles.
Where possible, tools also should interoperate to give management teams a holistic view of the computing environment. This view speeds recovery and enables everyone to work more productively by eliminating redundant operations.
The right tools also will be reliable, with excellent vendor support. In some cases, vendors will complement their tools with professional services offerings that ensure effective implementation and provide knowledge transfer to ensure that an IT staff gets full value from the tool.
Some Important Questions
It’s understandable for IT organizations to retrench, to some degree, in times of uncertainty and constrained resources. But a pervasive culture of no decision is clearly unhealthy and potentially dangerous. IT organizations need to ask themselves several key questions:
- When were the usability and viability of existing tools last re-evaluated?
- When was the last time anyone polled the marketplace for better solutions?
- Is everyone running the latest releases of the software they have?
- What critical operations are still dependent on homegrown scripts or utilities?
- Are the savings theoretically achieved by doing nothing starting to be outweighed by the downside risk associated with lower service levels, inadequate staff productivity, and the impending retirement of the department’s top mainframe experts?
Businesses don’t exist simply to save money. They exist to satisfy customers, capture new markets, and drive optimized financial returns to investors. A pervasive culture of no decision can’t effectively meet these objectives. IT organizations can’t afford to remain complacent about how they operate their mainframe environments. By innovating in this critical area, they can deliver greater value to the business and better support business change—while also controlling costs and boosting productivity. That’s why every IT leader should take proactive steps to combat the no decision mentality and discover new ways to improve mainframe operations.