Jan 1 ’04

Learning From the Legend: Leveraging Mainframe Systems Today

by Editor in z/Journal

To quote the oft-quoted Mark Twain, “The reports of my death have been greatly exaggerated.” It is a hackneyed refrain but, nonetheless, describes the current physiology of the bits and bytes of CMOS technology that compose the heart and soul of the S/390 Enterprise System Server—the mainframe. To some, the mainframe may conjure up nostalgic visions of a time long ago when systems programmers roamed a computing landscape of refrigerated, raised-floor, maximum-security rooms with large, tape-whirring, light-flashing computers and ruled through cryptic technological jargon that left mere business managers in awe. To others, the mainframe may represent an outdated monolith, emblematic of a decade dominated by disco, Watergate, and Bob Dylan. These people may feel that the mainframe is old, clunky, and seemingly useless in the hustling, bustling “enterprise” world of Unix, Windows, and Web servers—the so-called real technologies of today.

To the modern-day CIO, however, the mainframe does not fit any of these colorful characterizations. It is not considered obsolete or outdated. The CIO may view the mainframe as the intrinsic heart and soul of the computing enterprise, or as merely another important piece of the company’s infrastructure—just as important as its Unix, Windows, and Web servers. The mainframe computing system (a.k.a. MVS, z/OS, S/390, zSeries Enterprise Server, “Big Blue”) is an evolutionary component of a complex computing environment that services the worldwide data processing business needs of today’s largest corporations.

Given the longevity, adaptability, and resilience of this brilliantly conceived technological organism, perhaps we need to further explore the vital role that the mainframe plays in most modern-day, enterprise computing systems.

DISPELLING THE MYTHS

First, let’s dispel a few anthropological myths. There are no distributed enterprise systems. There are no mainframe enterprise systems. There are only enterprise systems. And enterprise systems are complex, heterogeneous, and unruly. Like an unruly child, the enterprise must be managed. To effectively ensure business service availability (performance, reliability, and security), the enterprise must be managed as a holistic entity—not as a discrete set of arms and legs. This paradigm demands that we provide infrastructure, service, and application management across the computing enterprise, not merely on the computing platforms.

The mainframe still plays a critical role in enterprise computing, and there are many reasons why it will continue to do so. The mainframe is host to more than 50 percent of the world’s mission-critical data. The mainframe operating system has matured and evolved for more than 40 years, making it a mature adult, especially compared to the distributed Unix and Windows operating systems. With this maturity comes stalwart availability. IBM estimates mainframe availability at 99.999 percent. An image may fail, but a full reboot (IPL) of a mainframe computing complex is rarely necessary except in development, testing, or maintenance modes. Unix is also a very stable system, but just think about how often you have to reboot your PC or Web servers in a year.

With this maturity comes safety. How many articles do you read about cyber thieves hacking into a mainframe? Most of these criminals have probably never heard of SAF, RACF, or the S/390 Cryptographic Coprocessor, and they almost certainly do not speak EBCDIC. The sophistication of mainframe security solutions has evolved over the past 30 years. In addition, the prevailing history of the rest of the so-called real technologies has been to ignore and disavow the mainframe, rather than integrate with it. As a result, the mainframe’s more sheltered accessibility allows it to enjoy tighter, more stringent security in today’s world of cyber crime. However, the mainframe does offer accessibility through open application program interfaces (APIs), and these APIs were developed within the context of security, not with security as an afterthought.

The mainframe is home to the traditional (or so-called “legacy”) applications that drive many of the business processes that we, as consumers, interact with on a daily basis. It is the back-end data store for many of the point-of-sale, cash-dispensing, and Web-based transactions that populate worldwide commerce today. It would be sheer folly to dismiss the importance of the mainframe’s role as an atavistic processor of legacy-based applications.

Giga Information Group, Inc. reports that, in 1998, 75 percent of the mainframe computing power shipped to customers was utilized to process increased capacity demands from traditional application systems. Last year, only 40 percent of newly shipped mainframe computing power was in support of capacity upgrades. By 2004, Giga estimates that the trend will be a complete reversal of 1998, wherein 75 percent of all newly shipped mainframe computing power will be in support of new application workloads. These new workloads are represented by mainframe deployment of WebSphere, WebLogic, PeopleSoft, SAP, and Siebel applications. Additionally, according to Gartner, as IBM has pushed for adoption of Linux on the mainframe, MIPS shipments in support of Linux are up 45 percent year-over-year, and IBM claims that 17 percent of its mainframe revenue was generated from Linux.

LINUX ON THE MAINFRAME

Let’s take a closer look at Linux on the mainframe. A large percentage of users are running pilot projects on mainframe Linux, and a significant number of users in the telecommunications and financial spaces are running production applications such as “billing inquiry” on mainframe Linux. Deployment of large numbers of virtual Linux servers can pose management problems and introduce security risks. Managing these Linux environments, therefore, is critical.

At its December 2002 Data Center Conference, Gartner confirmed that interest in server consolidation is still high. A survey of the 950 attendees showed that about 16 percent are using Linux as a method for server consolidation.

The popularity of Linux has actually helped to drive mainframe shipments, accounting for 15 percent of the new MIPS shipped in 2001, and 20 percent in 2002, by Gartner’s estimation. Gartner says that more than 200 IBM mainframe customers have deployed at least one Linux application on mainframe systems in production environments. Another 400 are in the process of implementing Linux applications on the mainframe, or are at least evaluating doing so.

The strategic importance of mainframe Linux will become increasingly evident over the next 18 to 24 months. The platform’s unmatched capability to host thousands of Linux-based virtual servers on a single machine will continue to make it a preferred physical server consolidation vehicle. More important is the platform’s ability to integrate state-of-the-art business application solutions with legacy business transactions and data hosted by traditional mainframe operating system environments running on the same physical server. This, in effect, bridges the gap between the old and new worlds, leveraging the best of both.

Use of the mainframe-exclusive HiperSockets capability (an in-memory virtual network) will become pervasive as more businesses discover they can easily connect the latest off-the-shelf applications running under Linux with highly reliable and scalable database systems hosted by z/OS on the same physical box. The near-zero latency of the virtual communication path between the two worlds allows customers to leverage the unique capabilities of both environments without suffering any performance penalty. No other server platform solution provides this capability!
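To make the idea concrete, here is a minimal sketch, assuming hypothetical host, port, database, table, and credential values: a Java application running under mainframe Linux reaches DB2 for z/OS across HiperSockets with an ordinary type 4 JDBC connection, because the HiperSockets path is simply the IP address assigned to the in-memory interface of the z/OS image on the same box.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiperSocketsQuery {
    public static void main(String[] args) throws Exception {
        // Load the DB2 Universal JDBC (type 4) driver.
        Class.forName("com.ibm.db2.jcc.DB2Driver");

        // 10.0.1.1 stands in for the IP address assigned to the
        // HiperSockets interface of the z/OS image on the same box;
        // the port, database name, table, and credentials are
        // placeholders, not values from the article.
        String url = "jdbc:db2://10.0.1.1:446/BILLDB";

        try (Connection con = DriverManager.getConnection(url, "appuser", "secret");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT ACCOUNT_ID, BALANCE FROM BILLING.ACCOUNTS FETCH FIRST 10 ROWS ONLY")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "  " + rs.getBigDecimal(2));
            }
        }
    }
}
```

Nothing in the application code is HiperSockets-specific; the near-zero-latency path is purely a function of where the two endpoints live.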

Virtualization is a prerequisite for the deployment of a dynamic, self-managing computing infrastructure. The combination of Linux and time-tested mainframe software virtualization capabilities makes it possible to build a computing infrastructure that can quickly adapt to changing workload conditions. For example, in the not too distant future, customers will routinely employ VM to dynamically provision and repurpose virtual Linux servers to create operating environments that are responsive to business requirements. The relatively small footprint of mainframe Linux (as compared to other mainframe application-hosting environments) makes it ideal for this purpose.
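As a conceptual sketch only, the following illustrates what workload-driven provisioning of Linux guests could look like. XAUTOLOG and SIGNAL SHUTDOWN are real z/VM CP commands, but everything else here, including the issueCpCommand and averageCpuUtilization helpers, the guest names, and the thresholds, is an assumption invented for illustration rather than any vendor's actual automation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Conceptual sketch of workload-driven provisioning of Linux guests
 * under z/VM. The CP commands shown (XAUTOLOG, SIGNAL SHUTDOWN) are
 * real z/VM commands, but how they are issued here, the guest names,
 * the thresholds, and the workload metric are purely illustrative.
 */
public class GuestProvisioner {

    /** Placeholder: in practice this might shell out to a CP command
     *  interface or use a systems-management API. */
    static void issueCpCommand(String command) {
        System.out.println("CP " + command);
    }

    /** Placeholder for whatever utilization feed the installation uses. */
    static double averageCpuUtilization() {
        return Math.random(); // stand-in value between 0 and 1
    }

    public static void main(String[] args) throws InterruptedException {
        Deque<String> idleGuests = new ArrayDeque<>();
        Deque<String> activeGuests = new ArrayDeque<>();
        for (int i = 1; i <= 8; i++) {
            idleGuests.push(String.format("LINUX%02d", i)); // hypothetical guest names
        }

        while (true) {
            double cpu = averageCpuUtilization();
            if (cpu > 0.80 && !idleGuests.isEmpty()) {
                String guest = idleGuests.pop();
                issueCpCommand("XAUTOLOG " + guest);        // bring another server online
                activeGuests.push(guest);
            } else if (cpu < 0.30 && activeGuests.size() > 1) {
                String guest = activeGuests.pop();
                issueCpCommand("SIGNAL SHUTDOWN " + guest); // repurpose the capacity
                idleGuests.push(guest);
            }
            Thread.sleep(60_000); // re-evaluate once a minute
        }
    }
}
```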

Finally, it is no secret that most new applications are being written based on open standards. Technologies such as Java are being employed to provide application portability. However, making high-usage middleware functions such as database systems portable—a requirement for managing software development and acquisition costs—requires much more than a programming language. Providing portability for most performance-sensitive middleware requires that the software be written to a standardized set of operating system interfaces. Linux is rapidly becoming the de facto standard for middleware deployment. Mission-critical, Linux-based business applications deployed on key middleware infrastructures such as WebSphere will, in many cases, need to scale beyond the I/O bandwidth limitations of traditional distributed servers. Mainframe Linux will satisfy the resource requirements of these high-end applications.

In addition to managing Linux from an application perspective, users also may need to manage the physical and logical environment in which the Linux application resides—VM and z/OS. Many of these applications access data managed by DB2 and IMS under z/OS. To address these various environments, it is important for users to manage from this holistic perspective.

MAKING THE MAINFRAME ENVIRONMENT EASIER TO MANAGE

A preponderance of the evidence examined reveals continuing growth in mission-critical mainframe applications. Given this continued growth, how do we make this mainframe environment easier to manage? Now, more than ever, there is a need to maintain highly efficient mainframe applications by implementing an application quality management process. IT executives often face the conflicting goals of reducing application development and deployment costs while increasing quality and customer satisfaction. Meanwhile, application developers focus on functionality and often do not have time to consider application performance. When new and modified applications are deployed, the IT staff is pressured to keep the applications and system running at required service levels while containing costs.

Few application developers have time to consider application performance until performance problems manifest themselves on resource-constrained mainframes. For many IT executives, a costly upgrade seems to be the only answer. However, application quality management offers more cost-effective advantages in the mainframe environment. The challenge is finding and fixing these costly application bottlenecks with limited staff and limited time.

As a resolution to these issues, users must first determine which units of work are causing their performance bottlenecks—are they batch jobs or online applications? Implementation of an automated measurement process can identify performance bottlenecks, and deployment of an application analyzer can pinpoint their cause. This type of application quality management approach provides continuous application performance improvement and defers costly processing upgrades. Additionally, the manual effort required to target and track these performance opportunities is virtually eliminated, allowing more time for analysis of prioritized opportunities.
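As a minimal sketch of that automated measurement step, and assuming the measurement process has already exported per-unit-of-work CPU figures to a simple CSV file (the file name and its name,type,cpuSeconds layout are inventions for illustration), ranking the top consumers becomes a trivial report; an application analyzer would then be pointed at whatever rises to the top.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.List;

/**
 * Minimal sketch: rank units of work (batch jobs or online transactions)
 * by CPU consumption from an exported measurement file. The file name and
 * its "name,type,cpuSeconds" layout are assumptions for illustration.
 */
public class BottleneckReport {
    record UnitOfWork(String name, String type, double cpuSeconds) {}

    public static void main(String[] args) throws IOException {
        List<UnitOfWork> units = Files.readAllLines(Paths.get("cpu-measurements.csv")).stream()
                .skip(1)                                   // skip the header row
                .map(line -> line.split(","))
                .map(f -> new UnitOfWork(f[0].trim(), f[1].trim(), Double.parseDouble(f[2].trim())))
                .sorted(Comparator.comparingDouble(UnitOfWork::cpuSeconds).reversed())
                .toList();

        System.out.println("Top candidates for application analysis:");
        units.stream().limit(10).forEach(u ->
                System.out.printf("%-8s %-8s %10.2f CPU seconds%n", u.name(), u.type(), u.cpuSeconds()));
    }
}
```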

The challenge of managing application quality continues to increase as applications from varied platforms interact with mainframe applications. Effectively managing the challenges of this dynamic environment requires the ability to perform application quality management for multiple systems in a Sysplex, view the entire mainframe environment from a single window, analyze distributed data activity, and automatically direct user analysis.

Users are increasingly working with the mainframe using graphical user interfaces (GUIs) for application performance analysis. GUI point-and-click technology makes it easier to manage the entire mainframe complex, thus allowing users to track and prioritize application performance opportunities with the click of a mouse. Because the application is a key component of providing a business service, its optimal performance is equally critical. Managing that performance must be simple and automated. As the infrastructure of a business continues to change, the ability to manage application quality is essential, and application quality management solutions must provide the ability to do just that.

Beyond making this environment easier to manage, mainframe management also needs to become more intuitive. The mainframe skills shortage is creating a real problem in replacing experienced staff as they retire. Overall, the resulting decrease in expertise is making it harder to actually realize the ROI for the new Parallel Sysplex technologies being introduced.

Users are pressured in the current economic climate to do more with less. Cross-training of personnel has become a critical success factor, even as it stretches people and resources thin from a skills and expertise perspective. Customers are forced to take a more holistic approach to managing their IT infrastructure, such as leveraging skills across the organization—even distributed systems skill sets. This has an organizational impact on users.

The skills problem surfaces in a slightly more subtle fashion when you talk to customers about why they have system outages. The number-one reason is some form of human error in making a change to a critical system configuration. IT staff reductions have placed a greater burden on remaining IT personnel, while at the same time creating a temporary labor pool of skilled IT professionals. Given the resurgence of the mainframe, we can expect to soon revisit the skills shortage we faced a few years back. This impending shortage will likely be more acute due to the aging and retiring skill base and the lack of replacement by IT graduates trained in the mainframe disciplines.

The traditional user interface to mainframe environments is through a 3270 terminal and ISPF, a non-intuitive interface that can be hard to learn. Today, personnel familiar with ISPF and knowledgeable about configuration and management of the mainframe are not recent college graduates. What is required to make this an easier environment to manage is a topological view of vital system resources that makes these key assets easy to see. Access to the mainframe must be through an intuitive interface, similar to Microsoft Windows Explorer, that helps improve productivity and makes it easier to share z/OS information. This is the type of user interface that today’s college graduates are being taught.

It is time to dismantle the platform boundaries and talk about managing the computing enterprise, not managing the computing platform. To do so requires an acceptance of the mainframe by the distributed world and a revolution in mainframe thinking that leverages intuitive, Windows-like interfaces and develops innovative, intelligent software to simplify the configuration and management of mainframe systems and applications.

To some, the mainframe may be a dinosaur, while to others it may represent a dynasty. During its reigning lifetime, it has arguably come close to being both. Regardless, it is clear that its lifetime has not expired, and the mainframe’s evolution in the computing enterprise is still occurring. Z