
Every person who logs onto a native mainframe operating system gets an identical address space of their own. You can think of each address space as a standalone virtual mainframe system: address spaces are created instantly and are protected from interfering with one another. In addition, many companies divide their hardware into logical partitions, or LPARs, which are controlled by a native firmware hypervisor called PR/SM (Processor Resource/System Manager, pronounced "prism").

The value that LPARs provide is a further separation between environments. Not only can you separate users, you can also create distinct computing environments of the kind you get when virtualizing x86 machines. For example, you can have a shared development environment alongside separate environments for testing, staging and production. You can allocate resources and security access to each environment based on need, just as you would in a data center full of commodity machines, which means production systems can be configured to be more secure, run at a higher execution priority and have more resources, such as memory. In fact, from a security perspective, the U.S. government considers virtual machines created on the mainframe to be distinctly separate machines.
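
To make the idea concrete, here is a minimal sketch, in Python, of carving one machine into differently weighted environments. The partition names, weights and memory sizes are invented for illustration only; on a real system these values are set through the hardware management console, not through application code.

# Hypothetical illustration of dividing one machine into weighted partitions.
# None of these names or numbers come from a real configuration.
LPARS = {
    "DEV":  {"cpu_weight": 10, "memory_gb": 32,  "priority": "low"},
    "TEST": {"cpu_weight": 20, "memory_gb": 64,  "priority": "medium"},
    "PROD": {"cpu_weight": 70, "memory_gb": 256, "priority": "high"},
}

def share_of_cpu(name):
    """Fraction of the shared processor pool this partition is entitled to."""
    total = sum(p["cpu_weight"] for p in LPARS.values())
    return LPARS[name]["cpu_weight"] / total

for name, spec in LPARS.items():
    print(f"{name:4s}: {share_of_cpu(name):.0%} of shared CPU, "
          f"{spec['memory_gb']} GB memory, {spec['priority']} priority")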

LPARs running the native mainframe z/OS operating system, whether on the same hardware or on different hardware, can be linked together as a Sysplex for redundancy and to easily provide additional capacity. Dynamic Virtual IP Addressing (DVIPA) is used to dynamically route IP traffic across multiple LPARs in a Sysplex. This is analogous to tying a variety of web servers together behind a load balancer or organizing them within a Kubernetes cluster. Being able to tie computers together is an important feature for making systems scalable, and linking computers across geographic regions improves performance as well: it's always faster to work with a machine that's close to you than with one that's far away. As mentioned earlier, being able to scale systems up and down is a key feature of cloud-based computing, and dynamic connection, which is part of mainframe architecture, is critical for scaling to work.
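
As a loose, non-authoritative illustration of that analogy, here is a small Python sketch of the idea behind distributing traffic for one virtual address across several partitions. The LPAR names and the address are hypothetical, and real DVIPA routing is handled by the z/OS TCP/IP stack, not by application code like this.

import itertools

class VirtualIP:
    """Toy model of one virtual address fronting several partitions."""

    def __init__(self, vip, members):
        self.vip = vip                        # the address clients actually target
        self.health = dict(members)           # partition name -> currently healthy?
        self._cycle = itertools.cycle(list(members))

    def route(self):
        """Pick the next healthy partition to receive a request."""
        for _ in range(len(self.health)):
            lpar = next(self._cycle)
            if self.health[lpar]:
                return lpar
        raise RuntimeError(f"no healthy members behind {self.vip}")

    def mark_down(self, lpar):
        """Simulate a partition leaving the group; traffic shifts to the rest."""
        self.health[lpar] = False

vip = VirtualIP("10.1.1.1", {"LPAR01": True, "LPAR02": True, "LPAR03": True})
print([vip.route() for _ in range(4)])   # requests spread across all three
vip.mark_down("LPAR02")
print([vip.route() for _ in range(4)])   # the survivors absorb the traffic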

The important thing to understand is that much of what we consider modern about the machine virtualization we rely on to make the cloud work has been in play on the mainframe for years. Virtualization on the mainframe also makes it possible for programmers and system administration staff to work in a variety of operating systems and programming languages, just as we've come to expect in commodity cloud instances. Machine virtualization is what allows mainframes to support Linux. Whether the computing environment is virtualized Linux or native mainframe, developers can code in Java, C/C++, Python, Perl and a variety of other languages, including something as new as Swift. But then again, the mainframe was always intended to support many programming languages.
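
As a trivial illustration of that portability, the snippet below runs unchanged whether the interpreter sits on a Linux guest on IBM Z, in a container on commodity x86 hardware, or under z/OS UNIX System Services. Nothing in it is platform-specific, which is the point.

import platform
import sys

print("Python     :", platform.python_version())
print("OS         :", platform.system(), platform.release())
print("Machine    :", platform.machine())   # e.g. x86_64 on commodity hardware, s390x on IBM Z
print("Byte order :", sys.byteorder)        # big-endian on IBM Z, little-endian on x86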

Even back in the ’60s and ’70s, programmers could write in COBOL and PL/I, submit work using job control language (JCL) and build transactional applications on Customer Information Control System (CICS). Writing low-level code in Assembler has always been possible, too. Clearly, the versatility that modern developers enjoy programming in a variety of languages in a cloud environment has been part of mainframe computing for a while. Yet the benefit has never been at the forefront of developer awareness. Maybe it’s time to change the narrative.

Changing the Narrative About Mainframe Computing in the Cloud

As I stated at the beginning of this article, IBM continues to make mainframes because there’s a growing demand for the technology—particularly as smartphones and IoT devices proliferate worldwide. Netflix might be spinning up containers on a commodity machine in a data center to let you view past episodes of “Orange Is the New Black.” But when it comes time to pay for your Lyft ride using your iPhone, that credit card transaction will most likely be processed on a mainframe that’s providing service via some sort of API gateway.
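
What that looks like from the developer's side is nothing exotic. Below is a hedged sketch of a consumer calling a payment authorization service through a gateway; the URL, path and payload fields are entirely hypothetical, and whether CICS, IMS or something else handles the transaction behind the gateway is invisible to the caller.

import requests  # assumes the third-party requests package is installed

payload = {
    "card_token": "tok_abc123",   # hypothetical tokenized card reference
    "amount_cents": 1850,
    "currency": "USD",
}

# Hypothetical gateway endpoint; the caller sees only a REST API,
# while the actual transaction may run on a mainframe behind it.
resp = requests.post(
    "https://api.example-gateway.com/payments/v1/authorize",
    json=payload,
    timeout=5,
)
resp.raise_for_status()
print(resp.json())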

There is little benefit to arguing that mainframes should or can be used interchangeably with commodity servers. That’s like saying a fleet of automobiles can be used interchangeably with a city bus. You can do it, but it doesn’t make much sense. Each technology has benefits that, when used together, make the overall transportation system more efficient. The trick is to understand where and how the technologies fit together in the big picture. In this case, the big picture is the cloud.

The old narrative around mainframe technology is based on perceptions that are no longer applicable. The modern mainframe is no more centralized than the single building that houses a regional data center. In terms of programming languages, mainframes moved beyond FORTRAN and COBOL decades ago. Current mainframe virtualization technology makes it possible to run programs written in just about any programming language. And mainframe emulators, such as ZD&T, make it possible to gain the hands-on experience needed to become familiar with working in a mainframe environment.

The Perceptions of the Past Are No Longer Applicable

The modern mainframe is a powerful resource that provides extraordinary benefits for cloud computing. The new narrative is that the mainframe is modern, adaptable and well-suited to accommodate a very high volume of transactions at lightning-fast speeds. And, it’s a good way to make good money.

As the current workforce (which is admittedly aging) approaches retirement, a new breed of mainframe developer will be needed to continue the work of making the mainframe a mission-critical resource in the modern cloud. In addition to understanding the essentials of mainframe computing, these developers will need to be well-versed in the modern tools and techniques that drive cloud computing. It’s an investment that I’ve urged many up-and-coming developers to make.

Putting It All Together

Let me share a piece of personal history: A while back I needed to do some programming. So, I went to my desk, logged into my system, wrote some code and ran it. It worked like a charm! I saved my work, finishing up just in time for dinner. A while later, it was again time to program. I went back to my desk, logged into the system and coded away. Just another day of slinging the code, right? Well, not really.

My first coding session was in 1975. “A while later” is today. Forty-four years have passed. Back then, I was working on a mainframe that was 50 miles away. Today, I logged into an online programming environment that was also miles away, maybe even halfway around the world. That’s how it is with cloud computing: The resources are available anywhere, at any time.

Interestingly, despite the passage of a significant amount of time, my current developer experience is not that different than the one I had decades ago. It’s still just me and a terminal. Back then, the terminal was a screen full of green characters. Today, my terminal is able to accommodate a variety of media and input devices. But, at the other end of the wire, who knows what’s going on? For all I know, I might still be connecting to the same mainframe I used back in 1975. Which is my point.

Much of what we consider new and wonderful about cloud computing today has actually been around for a while in mainframe technology. In many ways, the mainframe was the cloud before the cloud. And, it’s still a powerful resource for what we have come to know as modern cloud computing. Given its power, versatility, scalability and support for modern DevOps tooling, the mainframe is a perfect resource for today’s and tomorrow’s cloud computing infrastructure. It’s a computing resource that we’ll do well to take advantage of as we move forward in our development efforts on an increasingly connected planet.

This blog originally appeared on DevOps.com and is published here with the kind permission of the owner and the author.
