Operating Systems

Cloud has become a critical part of many IT infrastructures, and as such, it needs a robust platform on which to run. System z can provide that platform in a cost-effective, secure and robust fashion, specifically through the use of Linux on System z. Here we discuss the pros and cons of this platform as well as considerations for choosing Linux on System z. 

The Benefits of Hosting Distributed Workloads on System z

It isn’t hard to argue that the mainframe is a great platform for any general Linux workload, but there are specific times when it really has no equal:

• If the applications need Reliability, Availability, Security, Stability and Scalability (RASSS). System z was built to provide the ultimate in RASSS, and it’s managed by teams that fully understand how to deliver on that promise.
• If the Linux application needs to communicate with z/OS resources. You can, of course, run Linux on a distributed server and connect it to the mainframe, but then network data makes a comparatively long and relatively insecure trip between physical devices. Run Linux on an Integrated Facility for Linux (IFL) and the packets travel over the machine’s internal network (such as HiperSockets) at near-memory speed, never crossing a wire where they could be intercepted.
• If you need a highly available environment, perhaps a second site to serve as a backup data center. This often already exists for the mainframe, so leveraging infrastructure that’s already in place can mean huge cost savings versus standing up a new data center.

Cost savings are often at the heart of the business’s focus, and System z can be a very cost-effective solution, especially for software that’s licensed per core, such as Oracle. Consider also that distributed servers are often loaded at only about 30 percent of capacity, even in a virtual environment, while it’s quite realistic to run an IFL at close to 100 percent utilization without any degradation in performance, which adds to its cost-effectiveness.
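A back-of-the-envelope comparison shows why utilization matters so much for per-core licensing. The utilization figures are the ones above; the core count and per-core price in this sketch are hypothetical:

  # Rough license-cost comparison for per-core-licensed software.
  # Utilization figures come from the text; the core count and the
  # per-core price are hypothetical.
  WORKLOAD_CORES = 10        # cores of actual work the application needs
  PRICE_PER_CORE = 47_500    # hypothetical per-core license price

  for label, utilization in [("distributed x86", 0.30), ("IFL", 0.95)]:
      licensed = WORKLOAD_CORES / utilization    # cores you must license
      print(f"{label}: ~{licensed:.1f} cores, ${licensed * PRICE_PER_CORE:,.0f}")

At 30 percent utilization you pay for roughly three licensed cores per core of useful work; near full utilization, the ratio approaches one.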

The Implications of Adding Linux Workloads to System z

There are few downsides to adding Linux to System z. In fact, moving workloads to this environment can help promote the overall value of the mainframe inside your organization, which can only be a good thing for anyone in the System z field. It’s also worth recognizing that this is no longer a niche pursuit; Linux hosting is the fastest-growing workload in the mainframe market. And it’s important to note that the Linux you run on System z is the same Linux the distributed team runs on its servers; there’s nothing mystical about it.

When adding Linux to a mainframe, it’s normal for companies to employ specialty processors whenever viable, which means you aren’t impacting the general-purpose processors or adding to the licensable MIPS of the machine. For Linux, that means the IFL specialty processor, though in theory there’s no reason you can’t stand up z/VM and Linux on your general-purpose processors if you have spare capacity.

Delineating Responsibilities for Each Tier

In many larger organizations, the tasks related to the provisioning, management and monitoring of Linux workloads are distributed across several departments. Perhaps the mainframe team is responsible for installing Linux but hands that environment over to another team to install the middleware; that team, in turn, hands it off to a final team to install the actual applications. In this model, it might seem that the System z operator’s role is trivial and doesn’t need to be automated or simplified. In reality, the work just to provision “empty” Linux instances, configure each instance, monitor its usage and deal with its eventual destruction can be quite onerous. Unfortunately, this might become apparent only after you’ve already cloned a large number of uncontrolled environments.

Even assuming that all you do is the initial Linux install, you still need a solution that can be automated, and that’s vital for several reasons. First, one of the prerequisites for a system being called “cloud” is some form of self-service front end, which translates into automation at the back end. Second, you’re almost certainly going to need some kind of approval for each system deployed, and the most efficient way to handle that is with a business process automation system. The third, and probably most important, reason is so you aren’t spending your precious time on low-value, repetitive, mind-numbing tasks. Automating these processes saves you time and speeds up deployment.
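As a minimal sketch of what the back end of such a flow might look like, assuming placeholder logic for the approval rule and the cloning step (none of these names are a real product’s API):

  from dataclasses import dataclass
  from itertools import count

  # Minimal sketch of a self-service provisioning back end. The approval
  # rule and the cloning step are placeholders for a real business
  # process automation tool and site-specific provisioning scripts.
  _guest_ids = count(1)

  @dataclass
  class Request:
      requester: str
      size: str        # e.g., "small" or "production"
      purpose: str
      approved: bool = False

  def request_approval(req: Request) -> Request:
      # Placeholder: in practice, open a ticket in the approval system
      # and wait for sign-off before proceeding.
      req.approved = req.size != "production" or req.requester == "ops-lead"
      return req

  def provision_linux_guest(req: Request) -> str:
      req = request_approval(req)
      if not req.approved:
          raise PermissionError(f"{req.requester}: approval was denied")
      hostname = f"lnx{next(_guest_ids):04d}"
      # Placeholder: clone the golden image and run first-boot
      # configuration (hostname, IP, ownership tags) here.
      print(f"cloned and configured {hostname} for {req.requester}")
      return hostname

  provision_linux_guest(Request("dev-team", "small", "CI runner"))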

You will also need a way of keeping track of each of the instances you create, including who owns each system, who is being charged for it and what it’s being used for. This is especially useful at the end of a system’s lifecycle when you need to be sure it can actually be decommissioned.

Consider resource allocation and usage; not all Linux systems are created equal. Some will be production instances of major systems, while others are just small development environments. It’s also almost certain that you’ll need to do chargeback or showback for these systems, so resource usage monitoring is essential.

Finally, it’s important to be able to track the Service-Level Objectives (SLOs) for each system. Some systems will have near-zero downtime requirements while others have less stringent needs; if you don’t know which are which, you’ll find them hard to manage.
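Ownership, chargeback, resource usage and SLOs are really one record per instance. As a minimal sketch, assuming illustrative field names and values rather than any standard schema, an inventory entry might look like this:

  from dataclasses import dataclass

  # One possible shape for an instance-inventory record covering
  # ownership/chargeback, metered usage and SLOs. All field names
  # and values are illustrative.
  @dataclass
  class LinuxInstance:
      hostname: str
      owner: str               # who to ask before decommissioning
      cost_center: str         # who is charged (chargeback/showback)
      purpose: str
      size: str                # e.g., "prod-large" or "dev-small"
      cpu_hours_month: float   # metered usage for chargeback
      slo_availability: float  # e.g., 0.999 for near-zero downtime

  inventory = [
      LinuxInstance("lnx0001", "j.smith", "CC-4711", "web tier (prod)",
                    "prod-large", 690.0, 0.999),
      LinuxInstance("lnx0002", "dev-team", "CC-0815", "test sandbox",
                    "dev-small", 42.5, 0.95),
  ]

  # End-of-lifecycle check: who must confirm decommissioning?
  for inst in inventory:
      print(f"{inst.hostname}: confirm with {inst.owner} ({inst.cost_center})")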

Increasing Efficiency, Decreasing Time to Value and Reducing Costs

Most customers using Linux on System z have written scripts to automatically provision environments, or they clone copies of “golden images.” A golden image is a Linux virtual machine created with a base set of applications installed and then replicated; once replicated, each copy is modified to appear as a unique instance. This approach creates snowflake deployments: although every instance starts from a limited catalog of golden images, each is then customized with its own applications and middleware, so every deployed system ends up unique. The cloning itself is efficient, but the post-clone customization steps can be error-prone and labor-intensive.
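The post-clone customization typically amounts to something like the sketch below; the commands are generic Linux examples (systemd’s hostnamectl and a distribution package manager), not any cloning product’s interface:

  import subprocess

  # Illustrative post-clone personalization: the steps that turn an
  # identical copy of a golden image into a unique "snowflake."
  def personalize(hostname: str, packages: list[str]) -> None:
      # 1. Give the clone its own identity.
      subprocess.run(["hostnamectl", "set-hostname", hostname], check=True)
      # 2. Network re-addressing would happen here (site-specific).
      # 3. Layer on per-instance middleware; this is where clones
      #    diverge and where errors and drift creep in.
      subprocess.run(["zypper", "--non-interactive", "install", *packages],
                     check=True)

  personalize("lnx0001", ["java-11-openjdk"])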

The biggest concern with the cloning approach is actually related to the ongoing management of the environments. A Linux environment will be cloned and customized once, but its components will be patched and upgraded many times during the life of the system.

Application design tools that automate deployment of Linux applications to both x86 and System z can empower you to accelerate service delivery to the “best fit” platform while also addressing the growing need for RASSS across your enterprise. New technology enables the design of complex Linux applications, including access points into z/OS systems, middleware components, load balancers, firewalls and more. You simply create instances of these entire applications with varying resource allocations and control each application globally. More important, you can manage each of the application’s discrete components centrally. Instead of logging in to each of 400 cloned systems to upgrade JBoss, for example, you could find every instance of it across the deployed systems and upgrade them all at once, as the sketch below illustrates; that’s a huge time saver, and it reduces the risks associated with in-place upgrades.
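Here is a sketch of that centralized pattern, assuming a hypothetical inventory of deployed components (the data and the upgrade step are illustrative):

  from dataclasses import dataclass

  # Centralized component management: query one inventory for every
  # system running a component, then drive the upgrade from one place
  # instead of logging in to each clone. Data is illustrative.
  @dataclass
  class Deployment:
      hostname: str
      component: str
      version: str

  inventory = [
      Deployment("lnx0001", "jboss", "7.2"),
      Deployment("lnx0002", "jboss", "7.2"),
      Deployment("lnx0003", "postgresql", "13"),
  ]

  def upgrade_component(name: str, target: str) -> None:
      for d in (d for d in inventory if d.component == name):
          # Placeholder: real tooling would push the upgrade to the
          # guest (and roll back on failure) rather than just log it.
          print(f"{d.hostname}: {name} {d.version} -> {target}")
          d.version = target

  upgrade_component("jboss", "7.4")   # one action, every affected system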

Finally, as you evaluate Linux application design and deployment solutions, look beyond platform flexibility and be sure the solution you select has design features that enable you to fully leverage the unique benefits of the mainframe.