Tools that let users move cloud components freely between internal and external cloud infrastructures are already available. Soon, the same technology will be available on the mainframe. Consider a scenario showing how this could work: A new application must be built to support a marketing campaign. During the requirements phase, it becomes clear we will need these components:
- Existing product information must come from DB2 running on the mainframe.
- One or more data servers must provide the graphics for the Web pages.
- One or more transaction servers will be used to drive requests to the mainframe.
- One or more application servers will host the software that drives the backbone of the application.
- Many Web servers will be needed so that anybody, anywhere in the world, can access the Web site.
The decision is made to host the Web servers with an external cloud provider: maximum flexibility, almost unlimited expansion possibilities, and we pay only for what we use. The application servers, transaction servers, and data servers, along with supporting infrastructure such as firewalls, will be defined as services on a virtualized internal cloud infrastructure, as sketched below.
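To make the scenario concrete, here is a minimal sketch of how this initial placement might be described to a cloud orchestrator. Everything in it is illustrative: the `Service` class, the location names, and the `topology` list are assumptions for this article, not a real product API.

```python
from dataclasses import dataclass

# Hypothetical placement targets; a real orchestrator would model the
# external provider, the internal cloud, zBX blades, and IFLs with far
# more detail (credentials, networks, capacity).
EXTERNAL_CLOUD = "external-cloud"   # public provider, pay per use
INTERNAL_CLOUD = "internal-cloud"   # virtualized distributed servers
ZBX_BLADE = "zbx-blade"             # blades attached to the mainframe
IFL = "ifl-linux-on-z"              # Linux on System z

@dataclass
class Service:
    name: str
    location: str
    instances: int

# Initial placement from the scenario: Web servers go to the external
# provider; everything else starts on the internal virtualized cloud.
# Product data stays in DB2 on the mainframe and is never redeployed.
topology = [
    Service("web-server", EXTERNAL_CLOUD, instances=10),
    Service("app-server", INTERNAL_CLOUD, instances=4),
    Service("transaction-server", INTERNAL_CLOUD, instances=4),
    Service("graphics-data-server", INTERNAL_CLOUD, instances=2),
]
```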
Once the site is launched, we quickly learn that the bottleneck is between the mainframe and the transaction servers. So, our first action is to dynamically move the transaction servers from virtualized distributed servers to blades running on the zBX, reducing network latency and improving performance. But after a few hours, it's clear this isn't enough, so we dynamically move the transaction servers again, this time to an Integrated Facility for Linux (IFL) running Linux on System z. This further reduces network traffic by bringing the two components, the DB2 database and the transaction servers, closer together.
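Continuing the sketch above, a relocation step might look like the following. The `relocate` helper is hypothetical; a real migration would involve live relocation of images, network reconfiguration, and load-balancer updates, but the shape of the operation (drain, move, re-admit) is the same.

```python
def relocate(service: Service, target: str) -> None:
    """Move a running service to a new placement target.

    Illustrative only: a real migration would drain in-flight work,
    live-migrate or re-provision the instances, and shift traffic over.
    """
    print(f"draining {service.name} at {service.location} ...")
    service.location = target
    print(f"{service.name} now running on {target}")

# First attempt: move the transaction servers onto zBX blades to cut
# the latency between them and DB2 on the mainframe.
tx = next(s for s in topology if s.name == "transaction-server")
relocate(tx, ZBX_BLADE)

# A few hours later that still isn't enough, so move them all the way
# to an IFL running Linux on System z, right next to the DB2 database.
relocate(tx, IFL)
```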
At one point, we notice that the data servers serving the graphics can't deliver them quickly enough. Since the mainframe excels at caching, we decide, again dynamically, to move these servers to the mainframe, too.
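The same hypothetical operation covers the graphics servers; the mainframe simply becomes another placement target:

```python
# The graphics data servers can't keep up, and the mainframe caches
# well, so they get the same treatment as the transaction servers.
gfx = next(s for s in topology if s.name == "graphics-data-server")
relocate(gfx, IFL)
```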
Flexibility, Agility With Mainframe and Distributed Cloud Components
In this example, we really do apply a fit-for-purpose principle: whatever runs best in a particular environment at a specific time should be moved there with the least amount of work. That's the promise of cloud, and the dream of many business users.
To many mainframe users this example looks like a miracle, but most of it already exists today in the distributed environment, where cloud providers use it to do what they do best: offer the most economical services in the most flexible way.
If we want the mainframe to be a serious cloud component, we must ensure we can offer the same flexibility and agility as other platforms do, with the added benefits of the mainframe! This means we must be able to provision the underlying resources (such as IFLs) that let us deploy the components (Linux images) that will eventually run these cloud services.
We already have the right tools, but we will use them differently: more dynamically and process-oriented, rather than simply managing the database or the network. Performance management is key, because we will be asked to add and remove capacity on demand. Service Level Agreement (SLA) compliance, storage and failover management, and security and network management will all have to change because of the unpredictable nature of this new environment.
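What "add and remove capacity on demand" could look like is sketched below as a simple control loop, reusing the hypothetical `Service` and `topology` from the first sketch. The SLA threshold and the simulated response-time reading are assumptions; in practice the metric would come from a real monitor.

```python
import random

SLA_RESPONSE_MS = 200   # assumed service-level target for this sketch

def observed_response_ms(service: Service) -> float:
    # Stand-in for a real monitoring feed; here we simulate a reading.
    return random.uniform(50, 400)

def enforce_sla(service: Service) -> None:
    """One pass of a capacity-on-demand loop: compare the observed
    response time against the SLA and grow or shrink the service."""
    response = observed_response_ms(service)
    if response > SLA_RESPONSE_MS:
        service.instances += 1          # falling behind: add capacity
    elif response < 0.5 * SLA_RESPONSE_MS and service.instances > 1:
        service.instances -= 1          # comfortably ahead: give it back
    print(f"{service.name}: {response:.0f} ms, "
          f"{service.instances} instance(s)")

# Run one evaluation pass over every service in the topology.
for svc in topology:
    enforce_sla(svc)
```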
But isn't that exactly what you've done so well for decades? The hardware, software, and people who make today's mainframes run the way they do are ready for this next step. Given software that lets architects, and everyone else who designs and creates new fit-for-purpose environments, treat the mainframe as any other platform, they won't care where services run. If defining, transferring, or removing a service can be done with a click of a mouse, the world of real fit-for-purpose computing is closer than any of us ever imagined.