May 1 ’12

Technical Insights: Automation - From the “Glass House” to the Hybrid Data Center

by John Barnard in z/Journal

It’s interesting to see how dramatically the capabilities of the mainframe have evolved to meet critical business requirements. We’ve gone from machines with one or two CPUs, little storage, and fixed partitions to today’s IT environment, with scores of central processors and multiple virtual systems running simultaneously.

Computing today is far more complex and capable than it was when the mainframe was gaining popularity in the ’70s, when managing the platform was challenging because of its newness and the lack of automation capabilities. In the ’70s and ’80s, few performance monitoring, recovery management, or sophisticated administration tools were available for IMS or DB2. Now, in contrast, all those capabilities exist and have kept pace with the hardware and software present in today’s complex operating environments.

In the early ’70s, operators used card readers, with a teletype as the interface. They would submit jobs for compilation or execution but had no interactive way to see how a job was running, how it was impacting the system, or how well it was performing. That visibility matters: knowing how a job is running lets you catch potential problems proactively, maintain Service Level Agreements (SLAs), and avoid the impact of outages.

A green light indicated the job had completed successfully; a system dump usually meant it had failed. Today, such jobs can be pre-packaged in scripts and submitted automatically by the operating system according to a policy, such as a specified timeframe. While a DB2 job is running, you can watch the job and see how it’s performing.
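A minimal sketch of that kind of policy, in Python rather than JCL, illustrates the idea: a pre-packaged job script is submitted automatically once a configured time window opens. The job name, the window, and the shell-script submit mechanism here are illustrative assumptions, not any product’s API.

```python
import subprocess
import time
from datetime import datetime

# Hypothetical policy: submit the nightly batch between 01:00 and 03:59.
SUBMIT_WINDOW = (1, 4)

def in_window(now: datetime) -> bool:
    start, end = SUBMIT_WINDOW
    return start <= now.hour < end

def submit(job: str) -> int:
    """Submit a pre-packaged job script and report its return code."""
    result = subprocess.run(["sh", f"{job}.sh"])
    print(f"{datetime.now().isoformat()} {job} rc={result.returncode}")
    return result.returncode

# Poll once a minute; submit the job as soon as the window opens.
while not in_window(datetime.now()):
    time.sleep(60)
submit("nightly_batch")
```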

In the early days, tooling really didn’t exist, either. Much of the work was manual, including submitting jobs and monitoring the performance of applications and batch programs, and monitoring was typically done after the fact. Online interactive systems were primitive at taking input from users and displaying the output of the business logic that ran against it.

Over time, the first automation appeared in the form of run books or notebooks. The notebook would say, “After running job x, run job y; if job x fails, here’s the remediation.” Today, much of that activity is automated through workload automation or run book automation, which allows more of a “lights-out” operation. With automation, IT skills and resources are freed up to focus on strategic IT issues.
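Encoded as code, that run book entry is just a conditional. A sketch in Python, with hypothetical job and remediation script names:

```python
import subprocess

def run(job: str) -> bool:
    """Run a pre-packaged job script; True means a zero return code."""
    return subprocess.run(["sh", f"{job}.sh"]).returncode == 0

# The paper run book said: after job x, run job y; if x fails, remediate.
if run("job_x"):
    run("job_y")
else:
    run("remediate_x")  # e.g., restore the input dataset so x can be resubmitted
```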

Today, monitoring software can follow transactions as they flow across the operating environment: transaction flow monitoring tracks an application by its transactions as they cross system boundaries. Monitoring, administration, and automated operations solutions can look more broadly across the technology stack within z/OS and provide a more concise view of what’s going on with the applications.
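Conceptually, transaction flow monitoring amounts to correlating events by a transaction ID as they cross systems, then reading each transaction’s path end to end. A minimal Python sketch, with invented event data and field names:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Event:
    txn_id: str      # correlation ID carried across environments
    system: str      # e.g., "CICS", "IMS", "DB2"
    elapsed_ms: float

def trace(events: list[Event]) -> dict[str, list[tuple[str, float]]]:
    """Group monitoring events by transaction so one transaction's
    path across systems can be read end to end."""
    flows = defaultdict(list)
    for e in events:
        flows[e.txn_id].append((e.system, e.elapsed_ms))
    return dict(flows)

events = [
    Event("T1001", "CICS", 4.2),
    Event("T1001", "DB2", 11.7),
    Event("T1002", "IMS", 6.3),
]
for txn, hops in trace(events).items():
    total = sum(ms for _, ms in hops)
    print(txn, "->", " -> ".join(s for s, _ in hops), f"({total:.1f} ms)")
```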

Processing environments have become complicated. A single transaction today may visit multiple LPARs within a Sysplex, whether as a single thread within DB2 or a single transaction within IMS. It may have been kicked off by an MQ message through a CICS application, through an IMS application into DB2, or perhaps out of a WebSphere Application Server on z/OS. Software with monitoring capabilities, including middleware monitoring and automation for problem resolution, can keep up with that complexity.

What’s emerging is the notion of full lights-out data center automation with automated self-healing of applications and systems that run on z/OS. As the systems and applications become more complex, it’s critical to consider how a lights-out, hands-off set of tools could be incorporated into the data center. This challenge will lead to more rules-based automation, up to and including artificial intelligence, to help manage these systems.
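Rules-based automation of this kind pairs a condition on observed state with a corrective action. A minimal sketch of the pattern, with hypothetical metric names, thresholds, and actions:

```python
# Each rule pairs a condition on observed metrics with a corrective action.
RULES = [
    {
        "name": "restart hung region",
        "condition": lambda m: m["region_state"] == "HUNG",
        "action": lambda: print("ACTION: restart region"),
    },
    {
        "name": "expand pool near exhaustion",
        "condition": lambda m: m["pool_used_pct"] > 90,
        "action": lambda: print("ACTION: expand buffer pool"),
    },
]

def evaluate(metrics: dict) -> None:
    """Fire every rule whose condition matches the current metrics."""
    for rule in RULES:
        if rule["condition"](metrics):
            rule["action"]()

evaluate({"region_state": "HUNG", "pool_used_pct": 95})
```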

The advent of the enterprise hybrid computing model, which includes Linux on System z images, z/OS images, application work, Power blades, x86 blades, and so on, increases the requirement for improved systems management automation that is agnostic to the platform and can provide a broad spectrum of capabilities.

Through the years, the mainframe has adapted to meet the needs of an increasingly complex business environment. Mainframe technology now provides the visibility, flexibility, and intelligence that give companies a real competitive advantage and the confidence to meet ongoing business challenges.