A new age of performance management is upon us. More than ever before, IT personnel need tools that simplify management of today’s complex infrastructure, automate repetitive tasks, and ensure continuous availability of the computing systems required to run the business. Years ago, most IT infrastructures were simpler and all users were company employees; today, the user base is the entire world. Business applications span multiple operating systems and hardware platforms, and a diverse set of networking protocols facilitates communication. Companies are doing more work with fewer people. This article discusses tactical approaches to managing this changing landscape intelligently and cost-effectively.
The Way We Were
Back in the ’70s and ’80s, most data centers had one or two mainframes and a straightforward telecommunications network. Basic IBM-provided tools, packaged with hardware and systems software, were sufficient to maintain acceptable service levels, which were rarely captured as formal Service Level Agreements (SLAs). User expectations were low; few expected 100 percent availability or consistently fast service times.
Availability and performance became more critical as employees were measured on their responsiveness to business demands. Businesses expected the IT organization to support them by providing reliable systems capable of delivering consistently good service. Rapidly increasing transaction volumes precipitated the need for more comprehensive systems and network management solutions.
To address this need, many Independent Software Vendors (ISVs) delivered products that often required a fair degree of skill to install, deploy, customize, and use. The learning curve was steep. Those who believed the “mainframe is dead” myth exacerbated the skills shortage by redeploying individuals with mainframe expertise to manage new business applications on alternative computing platforms. This migration slowed the flow of new talent to the mainframe platform.
The Way We Are Now
With the dawn of the Internet, those entering the IT profession headed toward the distributed systems world of UNIX and Wintel-based servers, where they were well compensated. These folks had no desire to work on what they viewed as Jurassic processors. However, many companies chose to maintain and expand the mainframe component of their IT infrastructure. Among the most compelling reasons were:
• The years, often decades, of development investment in the core business applications, designed to leverage the unique strengths of the platform
• The high cost and risk associated with redeploying these core business applications on alternative platforms
• Reliance on mainframe qualities of service (reliability, availability, scalability, etc.) that would prove difficult or impossible to match on any other computing platform