Too many Linux on System z projects fail for preventable reasons. This article discusses some of those reasons and provides tips on actual Proof of Concept (POC) implementation. We will focus on POC, since most businesses prefer to minimize risk by testing a new technology before completely funding projects. We will explore executive sponsorship, application selection, project scope, cross-organization communication, and crisp success criteria; many of these are general best practices for executing any POC.
A key ingredient for a successful POC is identifying and engaging an executive sponsor. Since a Linux on System z project touches so many different organizations, you need a strong sponsor who can help navigate potential political pitfalls. Cultural ideology from both the mainframe and distributed camps can easily derail a project, so it should be driven from the top down. Many projects suffer from a lack of strong executive support.
Selecting a target application is another critical component of a successful POC, and there are many factors to consider. Picking the wrong application could give Linux on System z a bad reputation and overshadow any benefits. IBM and several business partners have experienced application assessment teams that can help you evaluate candidates and choose one that minimizes the risk of failure. Considerations include:
• Select an application that already has strong mainframe content. Many successful candidate applications have their data on z/OS. There are obvious benefits to reducing data-access latency by replacing physical network hops with virtual networking technology such as HiperSockets. Obviously, there are situations, such as server consolidation, where there’s no existing mainframe data. Server consolidation is an excellent reason to consider deploying Linux on System z, but it can be hard to demonstrate that value in a first project.
• Keep it simple. With higher complexity comes a higher risk of failure. Fewer components in the workload will make it easier to install and configure. Don’t select an unfamiliar application that isn’t well-supported; familiarity and support will be needed if problems arise. Typically, problem resolution is faster in a simple environment where the only new technology is Linux on System z.
• Play it safe. If something goes wrong with this project, you want a second chance. However, if the application is business-critical, you may not get one. Avoid your core business applications; focus on some ancillary process that has lower volumes and lower resource requirements. Avoid brand new applications or new versions of existing applications; you don’t want to be debugging the application at the same time you’re trying to validate the platform.
• Ensure the target application uses technology supported on Linux on System z. Be aware of all potential vendor software, including IBM software, and ensure the correct levels are supported.
Processor and Memory Sizing
There may be more than one application that seems to be a good fit for demonstrating the viability and value of Linux on System z. A good next step is to size the resource requirements for each candidate application: determine how many Integrated Facility for Linux (IFL) processors and how much memory each would need. Even though many larger companies have capacity planning organizations, this environment is new to them and may require some help from IBM or a business partner to assess the needs.
You should understand exactly how you plan to test the application to know what system resources are needed. If you plan on doing any stress or performance testing, the environment must be appropriately sized. You can always remove resources to see if fewer suffice, but it can be tough to scramble for additional IFLs or memory once the application is running poorly in the middle of testing. That said, it’s easier to demonstrate the feasibility of Linux on System z with a smaller environment; if you can select an application that needs fewer system resources, you improve your chances of success.
For IFL sizing, IBM Techline is a great resource: its tools take the exact server model and configuration, along with CPU utilization from the current production environment, and indicate how many IFLs you need to run the distributed workload. IBM Techline can also graph multiple servers together to view the consolidated data. This can be insightful, because the total number of IFLs can frequently be decreased. The more workloads you stack, the lower the peak-to-average ratio becomes, since individual workloads rarely peak at the same moment. With less variability, you can run at a higher average utilization on fewer processors.
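The consolidation effect is easy to see with a little arithmetic. The sketch below uses entirely hypothetical utilization data for three servers (expressed in IFL-equivalents per measurement interval); it is an illustration of the principle, not Techline’s actual methodology:

```python
# Hypothetical per-interval CPU utilization for three servers,
# in "IFL-equivalents". Note each server peaks in a different interval.
server_a = [0.2, 0.9, 0.3, 0.2]
server_b = [0.8, 0.2, 0.3, 0.2]
server_c = [0.2, 0.3, 0.9, 0.2]

# Sizing each server for its own peak, then adding them up:
sum_of_peaks = max(server_a) + max(server_b) + max(server_c)  # about 2.6

# Sizing the consolidated stack for its combined peak instead:
combined = [a + b + c for a, b, c in zip(server_a, server_b, server_c)]
consolidated_peak = max(combined)  # 1.5

print(f"Sum of individual peaks: {sum_of_peaks:.1f} IFL-equivalents")
print(f"Consolidated peak:       {consolidated_peak:.1f} IFL-equivalents")
```

Because the three workloads never peak in the same interval, the consolidated stack needs capacity for a peak of about 1.5 IFL-equivalents rather than the 2.6 that sizing each server individually would suggest.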
You must also address memory sizing. A distributed platform often lacks a robust I/O subsystem, and adding memory is the easy way to compensate: throw hardware at the performance problem. As a result, a team running a distributed application will frequently request an 8GB guest simply because that’s what they have in the distributed environment. Without analyzing the actual memory footprint, the guest will be oversized. Linux holds on to whatever memory it is given; if the application needs only 1GB but the guest gets 8GB, Linux will fill the other 7GB with I/O cache. From a mainframe perspective, that memory is wasted. It may also skew a Total Cost of Ownership (TCO) analysis, because the cost of running the application on the mainframe rises with the extra resources, and the inflation compounds as you add more oversized guests.