Mainframes have many features that would make them a candidate for cloud services: solid virtualization, capacity on demand, uptimes with lots of 9's after the decimal point, unmatched failover capabilities, outstanding security, and more. But do these features really qualify the mainframe to host cloud services the way such services are marketed and sold today? Not really.
Offering mainframe services alone doesn't make them cloud services "on demand." Few mainframe shops can deliver a complete environment, including operating system, database, transaction processing monitor, and other components, at the press of a simple button (the mainframe equivalent of a mouse click).
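To make the "simple button" concrete, here is a minimal sketch, in Python, of what push-button provisioning of a complete environment would amount to: a catalog of stack templates turned into a running environment in one call. All names (the catalog entries, the provision function) are hypothetical illustrations, not an actual mainframe provisioning API; real offerings would drive automation tooling behind this kind of interface.

```python
# Hypothetical sketch of "on demand" provisioning: a catalog of complete-stack
# templates, and a single call that turns a template into an environment.
from dataclasses import dataclass, field

# Each template lists the full stack a user would otherwise assemble by hand.
CATALOG = {
    "zos-cics-db2": {"os": "z/OS", "database": "DB2", "tp_monitor": "CICS"},
    "zlinux-web": {"os": "Linux on System z", "database": "PostgreSQL",
                   "tp_monitor": None},
}

@dataclass
class Environment:
    name: str
    components: dict = field(default_factory=dict)
    status: str = "provisioned"

def provision(template_name: str, env_name: str) -> Environment:
    """The 'simple button': one call yields a complete environment,
    instead of weeks of manual setup."""
    template = CATALOG[template_name]
    return Environment(name=env_name, components=dict(template))

env = provision("zos-cics-db2", "test-env-01")
```

The point of the sketch is the shape of the interface: everything a complete environment needs is captured in the template, so requesting one becomes a single operation rather than a project.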
Effectively Managing Different Workloads
Most analysts and other IT leaders have embraced the ideas of fit-for-purpose computing, hybrid clouds, and fabric computing. Together, these three paradigms describe the future of IT, because some approaches run certain workloads better than others.
Let’s pick the best approach for each workload and ensure they’re all connected and can deliver real business services. After all, users don’t care where a service runs or what it runs on. They want services delivered quickly and reliably, and IT wants to deliver those services as economically as possible.
Some hardware platforms are better suited to run certain workloads than others. When confronted with new applications, our architects often see the mainframe only as the enterprise server. Since much of the data remains on the mainframe, the architectural drawing includes a black box labeled "mainframe" with one arrow going in that says "request" and one coming out that says "data." Why? Because, until recently, they were right: the rest of the "flexible stuff" could only be found outside the mainframe. So, to make the mainframe part of a cloud service offering, some of that flexibility has to be brought to the mainframe.
A Flexible Architecture: Essential for Cloud Services
Let's assume most services would be available on any platform. This would mean that when architecting a solution, we could create an application from components assembled on a fit-for-purpose principle. Some parts would run as an external cloud service, others in a virtualized internal distributed environment, and still others on the mainframe. But even when we run internal cloud components, there will always be great variation in the hardware on which we run these services. Not all hardware has the same processor speed or memory capacity. Not all servers will run the same operating system (Windows, Linux, or UNIX), and not all will even run the same hypervisor.
So, when designing the cloud components that will eventually make up the business service, a flexible architecture is essential; it lets us move components to specific environments dynamically. The application will change: we may gain many more users, or we might want to switch from one public cloud provider to another for cost reasons. Or, we might find that one component is so dependent on another that we want to bring them closer together. That's where the mainframe will definitely be used in the near future. IBM's zEnterprise System lets us run both Linux on System z and distributed operating systems on the zEnterprise BladeCenter Extension (zBX). The only things still lacking are the tools that let us create the cloud components under Linux on System z. That will soon change.
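The fit-for-purpose placement described above can be sketched in a few lines of Python. Each component declares what it needs, each platform declares what it offers and what it costs, and a placer picks the cheapest match; re-running it when prices or dependencies change is the "dynamic movement" a flexible architecture must allow. Every platform name, trait, and cost figure here is a made-up illustration, not real zEnterprise data.

```python
# Hypothetical fit-for-purpose placer: assign each application component to
# the cheapest platform that satisfies its requirements.

PLATFORMS = {
    "public-cloud": {"traits": {"x86", "elastic"}, "cost": 1},
    "internal-x86": {"traits": {"x86", "hypervisor"}, "cost": 2},
    "linux-on-z":   {"traits": {"s390x", "near-mainframe-data"}, "cost": 3},
    "zbx-blade":    {"traits": {"x86", "near-mainframe-data"}, "cost": 3},
}

COMPONENTS = {
    "web-frontend": {"needs": {"elastic"}},
    "app-logic":    {"needs": {"x86"}},
    "data-access":  {"needs": {"near-mainframe-data"}},  # keep near the data
}

def place(components, platforms):
    """Return a component -> platform plan; cheapest satisfying platform wins."""
    plan = {}
    for name, spec in components.items():
        candidates = [(p["cost"], pname)
                      for pname, p in platforms.items()
                      if spec["needs"] <= p["traits"]]  # subset test
        if not candidates:
            raise ValueError(f"no platform satisfies {name}")
        plan[name] = min(candidates)[1]
    return plan

plan = place(COMPONENTS, PLATFORMS)
```

Because placement is just a function of requirements and current platform attributes, moving a component later (a provider price change, a new dependency that demands proximity to mainframe data) means editing the inputs and re-running the placer, not re-architecting the application.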