May 1 ’12

The Last 30 Years of VM

by Neale Ferguson in z/Journal

VM and the VM community have proved to be survivors; a tough journey has made VM and its users stronger.

Every five to 10 years, some threat to VM arises in the form of poor economic conditions, marketing pitches in the guise of new technologies, or some eager, young executive within IBM who has come up with the brilliant idea of saving money by getting rid of VM. Rather than capitulate to these threats, the response of the VM community has been to rise up and prevent disaster. It’s a battle that has produced a lot of scars but has resulted in the robust hypervisor and ecosystem we call z/VM today.

This article reviews how VM has survived and even prospered over the years. It’s an incomplete, though hopefully interesting, retrospective. For a more comprehensive analysis, please see Melinda Varian’s work “VM and the VM Community: Past, Present and Future,” which is available at www.leeandmelindavarian.com/Melinda/neuvm.pdf.

The Golden Years: Early ’80s

Back in 1981, VM/370 gave way to VM/SP Release 1, EDGAR deferred to XEDIT, and VM found itself in a golden age of sorts. The “Doubtful Decade,” as the SHARE VM Group had labeled it, had passed. SHARE attendances were measured in the thousands, and the future looked rosy. Hardware-wise, we were about to see the introduction of the 43xx series of processors, which were more like an oversized piece of furniture than their room-filling predecessors. Microcode, in the form of Extended Control Program Support (ECPS), was created specifically to meet the needs of VM and VSE users.

In 1983, VM/SP Release 3 (see Figure 1) became available, and with it Restructured Extended Executor (REXX), a vitally important development that remains relevant today. REXX prompted dramatic growth in the number and types of utilities and applications running on CMS. With the announcement of SQL/DS (see Figure 2), VM/CMS was primed to act as a base for developing and running sophisticated applications. When the RXSQL program offering became available in 1986, the programming power of REXX and the data management of SQL/DS were coupled to provide a sophisticated, powerful tool base.
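To give a sense of why REXX caught on so quickly, here is a small, entirely hypothetical CMS exec of the sort users began writing and sharing by the hundreds; the name and logic are illustrative only, not taken from any product or from this article:

  /* AVG EXEC - a tiny example of the kind of utility REXX made     */
  /* easy to write and pass around on CMS.                          */
  /* Usage:  AVG n1 n2 n3 ...  (prints the average of the numbers)  */
  parse arg numbers                 /* everything typed after AVG   */
  if numbers = '' then do
     say 'Usage: AVG n1 n2 n3 ...'
     exit 4
  end
  total = 0; count = 0
  do while numbers \= ''
     parse var numbers n numbers    /* peel off the next number     */
     total = total + n
     count = count + 1
  end
  say 'Average of' count 'values is' format(total / count, , 2)
  exit 0

A dozen readable lines, no compile step, and full access to CMS and CP commands: that combination is what drove the explosion of CMS utilities and applications.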


This golden age culminated when function-rich VM/SP Release 4 (see Figure 3) enabled VM to extend its reach through modern networks. Key components included the introduction of the Group Control System (GCS) and, with it, VTAM, VTAM SNA Console Support (VSCS), and management via NCCF/NetView or NetMaster. GCS was CMS Operating System (OS) simulation on steroids: enough MVS system calls were implemented to enable VTAM to operate, and GCS provided true multitasking. VM systems programmers could now enjoy the wonders of VTAM configuration coding, NCP generation, and the almost mystical procedures of getting the Network Control Program Packet Switching Interface (NPSI) up to give the VM system access to X.25. A new version of the remarkable Remote Spooling Communications Subsystem (RSCS) was announced; it also took advantage of the new facilities.

Some of us thought GCS had far more potential than was ever realized. For example, at the TAB of New South Wales in Australia, some simple modifications enabled PL/I V2.3 to run, and by exploiting the GATE and DATE features of NPSI, a complex, powerful message switching system was created.

Later, VM/HPO 4.2 was a major high point for VM; it extended VM to the high end, enabling large workloads to be consolidated and managed. VM now ran on everything from the smallest of the 9370s to the largest of the 3090s.

Little known at the time, TCP/IP was now available as a product (5789-HAL), having emerged from WiscNet’s TCP/IP (see Figure 4) for VM/SP 2 and 3. Although popular with academic users of VM, it’s doubtful anyone at the time foresaw the impact this piece of software would have.

Storm Clouds Gather: The Late ’80s

The widespread introduction of the PC became a challenge. Suddenly, computing power was no longer confined to a data center. Expectations changed, and established hierarchies were challenged. The move to “departmental processing” was on. This evolved into the semantically empty phrase “client/server,” a marketing term that was confused with IT architecture. Out of this, we saw the development of the 9370 series of hardware, which was aimed squarely at the VM and VSE base.

Another challenge was one of IBM’s own making. VM was labeled as “strategic.” This can be a good thing, but it also meant it was more visible within the organization. Les Comeau, in his 1982 presentation to SEAS, described the advantages of flying under the radar when VM’s predecessor, CP-40, was first developed: “It would be extremely self-gratifying to attribute that success to brilliant design decisions early on in the program, but, upon reflection, the real element of success of this product was that it was not hampered by an abundance of resources, either manpower or computer power” (see Les Comeau: “CP/40—The Origin of VM/370” at www.garlic.com/~lynn/cp40seas1982.txt).

The former advantages of low visibility and limited access to resources were now gone; with lots of resources comes lots of attention.

Coincidentally or not, the policy of Object Code Only (OCO) was introduced. Ironically, this was the period when Linux made its debut. Much has been written about the OCO battles and there isn’t much more to add to the debate, except to note that the decisions came down from on high onto a community that had worked differently, and well, for many years before them.

During this period, VM/SP 5 was introduced with a well-intentioned but less-than-stellar Graphical User Interface (GUI) overhaul. Now, there’s nothing wrong with the 3270; it’s a prime example of thin client computing, but compared to the exciting graphics on the Video Graphics Array (VGA) screens of the PC, it just didn’t generate much enthusiasm. VM entered the 31-bit world first as a migration aid for MVS sites moving from System/370 to 370-XA. For a while, it didn’t appear as though a VM/XA program product would come to the market. Fortunately, the VM community acted and VM/XA System Facility, and then VM/XA System Product, made it out the door.

One of the bright spots was the introduction of the Shared File System (SFS) with VM/SP Release 6. Similarly, Advanced Program to Program Communication (APPC)/VM and Common Programming Interface for Communication (CPI-C) extended the reach of applications outside data center confines. For those who had programmed using VTAM macros to implement application-to-application communication, the arrival of CPI-C was a huge win.

The ’80s ended with things looking bright for VM. Product quality was much improved; the Workstation Data Save Facility (WDSF), predecessor of ADSTAR Distributed Storage Manager (ADSM) and later Tivoli Storage Manager (TSM), was being developed; the P/370 was being planned in Poughkeepsie; and DFSMS/VM customer councils were convened to help guide IBM’s development plans.

Fear, Uncertainty, and Doubt: The ’90s

By the time of the worldwide recession and IBM’s pension crisis of 1991, the mainframe world as we knew it was in a state of major flux. We encountered the term “downsizing,” which resonated with bean counters. VM was seen as being on the wrong side of the downsizing landscape. There were attempts to market against this trend by defining new terms such as “rightsizing” and “brightsizing” (a theme for a mouse pad that’s still a prized possession). However, it was hard to combat the perception.

The now infamous “last mainframe to be turned off” article was published in 1995. We can look back and scoff today, but then it just added to the pressure on mainframe users and IBM developers.

IBM also managed to shoot itself in the foot by discontinuing the Higher Education Software Consortium (HESC). Twenty years later, the Academic Initiative is rectifying this mistake, but the damage was done: two generations of computer science students were never exposed to some of IBM’s greatest hardware and software.

The Glendale labs were sold and the VM team in Kingston, NY, was disbanded or moved to Endicott, NY. SHARE attendance numbers began to fall dramatically. In 1991, Australasian SHARE/GUIDE (ASG) had an attendance of nearly 1,000—almost twice as many attendees as during the early ’80s. Just a few years later, that figure was less than half. By the end of the decade, ASG had merged with COMMON. Some of the darkest days for VM had arrived.

The VM community was down in numbers, but it still spoke loudly and rallied the troops, first using VMSHARE in the pre-Internet era and then via the listserver medium. The community experienced a few wins, but also some important losses: OCO, a decommitment by IBM on OO-REXX, and the move of ADSM away from VM.

Fortunately, innovation was still active in VM development despite the difficulties of reduced human and capital resources. VM/ESA was announced and shipped at the start of the upheaval. It was an important version, as it brought VM beyond VM/XA and its 31-bit support to a new range of hardware. The 9672 implementations of System/390 were the first high-end IBM mainframe architecture implemented with Complementary Metal Oxide Semiconductor (CMOS) CPU electronics rather than the traditional bipolar logic (see http://en.wikipedia.org/wiki/IBM_ESA/390). It was important for VM to be able to exploit this technology to remain relevant and grow.

Another technology that breathed new life into VM was the arrival of John Hartmann's CMS Pipelines. A tremendous amount of community interest was generated and new applications started being developed and shared. When it appeared IBM was freezing the addition of new features to the official product, the community enabled the distribution of a “Runtime Library Edition.”
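To convey why Pipelines generated so much excitement, here is a minimal, hypothetical example; the file name is illustrative, and the stages shown (<, locate, count, console) are among the basic ones, with the full set documented in the CMS Pipelines reference:

  /* From a REXX exec on CMS: read PROFILE EXEC A, keep the lines  */
  /* containing the string EXEC, and display how many there are.   */
  'PIPE < PROFILE EXEC A | locate /EXEC/ | count lines | console'
  say 'PIPE ended with return code' rc

Chaining small, reusable stages in this way let CMS users compose in one line what would previously have taken a purpose-written exec, which is much of the reason new applications started being developed and shared so rapidly.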

OpenEdition was a brave, important addition. The fact that it didn’t become a true UNIX-like environment for doing new application development is less important than the recognition of the importance of open standards and support for such concepts. In reality, several future enhancements were facilitated by the OpenEdition environment. Distributed Computing Environment (DCE), on the other hand, was a technology that promised much but just didn’t catch on.

Rising From the Ashes: 1999 to Today

By 1999, VM was like Oliver Twist, being marched around to various divisions within IBM to see who would take it on. All that was missing were the cries of “Boy for Sale.” One wag even suggested the community hold a bake sale to help fund VM development.

Web-enablement of VM helped keep the wolves from the door, thanks to the folks at the European Organization for Nuclear Research (CERN), to Rick Troth for his Web server (courtesy of CMS Pipelines), and to Carl Forde, Jonathan Scott, and Perry Ruiter for the Charlotte Web browser.

However, “it’s always darkest before the dawn” is a cliché that proved true. Salvation was just around the corner. Primarily this came from the now famous skunkworks project within IBM Germany to bring Linux to the mainframe (see Figure 5). For anyone who saw behind the curtain during this time, it was no easy sell to make this happen. Stories of last-minute code drops to beat the IBM legal team have become legend even if they may be apocryphal.

How times have changed. More than 12 years later, it’s hard to find anyone at IBM who claims not to have supported this endeavor. To a lesser extent, the Bigfoot project, a parallel port of Linux to System/390 run by folks outside IBM, helped keep the possibility in the public space and, hopefully, increased support within IBM (see Figure 6). It was certainly an exciting, uncertain time for the VM community.

There’s no way to overstate the importance of Linux to the VM community (and beyond). Consider these points:

• Linux is a relatively well-behaved guest. It understands whether z/VM is present and can work cooperatively with it to improve performance and management. Work inside and outside IBM continues to improve this aspect. This is one area where the combined experience of the VM community has a lot to offer the VM and kernel developers (see Rob van der Heij: “Virtualization—Something New? History and Perspective of Virtualization on IBM Mainframes” at www.rvdheij.nl/Presentations/nluug-2007.pdf).
• The portfolio of applications available to sites running z/VM has grown enormously.
• Linux has enabled major server consolidation that has helped reduce data center costs.

Many of the important hardware innovations, from the Integrated Facility for Linux (IFL) to new instructions, are a result of the desire to make Linux run better on System z.

The introduction of the Virtual Image Facility (VIF) was a tacit admission that IBM’s desire to sunset VM was ill-considered. VIF was IBM’s answer to the question, “How do we reintroduce a technology we spent years talking down?” Even today, there are customers who won’t run VM, or won’t be offered it, because they had been so thoroughly convinced to get rid of it; considerable face would be lost were it to be reintroduced.

The speed of innovation—both technically and from a marketing perspective—was breathtaking. VM/ESA begat z/VM 3.1 and, with each subsequent version, there were major changes in the terms and conditions that made VM highly affordable and desirable. Similarly, from release to release, we’ve seen remarkable technical enhancements such as the System Management Application Programming Interface (SMAPI) and the Virtual Switch. The work of Independent Software Vendors (ISVs), such as Velocity Software, CA Technologies, and Rocket Software, also must be acknowledged for providing capabilities that allow Linux on System z to thrive.

The Future

It’s been more than 10 years since Linux appeared on the scene and, coupled with z/VM, made and saved bucketloads of money. That is also about the timeframe in which a new generation of executives has risen to positions where they have started eyeing z/VM with the same look its progenitors (CP-67, VM/370, and VM/SP) once received.

What form will this next battle take? What will be tomorrow’s equivalent of the OCO decision? Certainly, the global financial crisis has put all aspects of business under pressure. However, in contrast to the 1991 recession, z/VM and Linux can now be seen as helping businesses do more with less.

To assess potential threats to z/VM, it’s worthwhile to consider:

• What is IBM publicly backing?
• What are we seeing being pushed upstream from the developers in Boeblingen?
• What is dominating the magazines of “frequent flyer university”?

Evidence suggests Kernel-based Virtual Machine (KVM) is being seen as the new savior that will displace z/VM and save IBM money. Consider the following in the context of the aforementioned questions:

• IBM is an active member and supporter of the Open Virtualization Alliance. Now, this is just a good business decision; IBM recognizes there are many virtualization choices out there. However, sometimes when IBM embraces an idea, the company takes it to the extreme.
• After a hiatus of 18 months, the number of fixes and enhancements to the Linux kernel pertaining to System z’s implementation of KVM has increased significantly. This indicates real investment by IBM in the technology.
• KVM, ESX, Hyper-V, and Virtual Box are featured prominently in trade publications and journals aimed at the C-level executive. IBM also runs the openKVM Twitter feed.

This evidence reveals that KVM has increased in importance. KVM is a great technology with tremendous potential, and it’s open source. There are lessons that z/VM can learn from the vast community of KVM users. The hope is that KVM is another arrow in the quiver for System z and not the precursor to another onslaught on VM.

The bits and pieces are all there to allow a Linux system to use KVM as a hypervisor to manage a farm of virtual servers. One of the nice things about KVM is that we now get to peer under the covers to see how Start Interpretive Execution (SIE) works in z/Architecture (see Figure 7). However, to expect that the function and stability z/VM has acquired over 45 years of effort can be replaced in just a few years is dangerous thinking. There’s more to z/VM than just a hypervisor. It’s an ecosystem of tools, techniques, instrumentation, utilities, people, practices, and infrastructure that’s essential to the proper operation of large-scale server farms or cloud provisioning. What constitutes “enterprise ready” is different for an enterprise of 10 servers than one running hundreds.

Integration of, or closer cooperation between, KVM and z/VM is a worthy goal. The oVirt project appears to be an ideal mechanism for establishing a common management base to serve different sorts of virtualization technologies (see ZDNet’s “IBM's Open Virtualization Alliance, oVirt and KVM Update” and oVirt: “The Virtual Datacenter Management Platform” at http://ovirt.org).

This is all just speculation, but the history of VM is one of exhilarating highs and numbing lows. Vigilance by its community has proved vital to VM’s longevity and relevance. There’s only so much “doing more with less” that a team can take, and continuing to put the squeeze on z/VM would prove to be shortsighted and a return to the mistakes of the past.

Whatever lies ahead, there are a few immutable truths:

• The teams of developers behind VM and its Linux counterparts have great dedication and skill.
• The VM community is passionate and unwilling to ever go quietly into the night.
• There’s still a lot of fun to be had riding this rollercoaster.