Figure 1 shows a common command you can run on a UNIX/Linux server to see memory usage. It's quick and easy but not sophisticated, so middleware such as Oracle and WebSphere may need additional memory sizing considerations. In this example, the server is essentially idle and no middleware is installed. It has 1GB of dedicated memory. The Mem: line shows the server has used 233MB and has 763MB of free memory.
The -/+ buffers/cache line subtracts the amount of memory used for buffered I/O. The mainframe, with its excellent I/O subsystem, doesn't need to cache I/O. On the -/+ buffers/cache line, only 37MB is used, so this guest could be sized at 128MB instead of 1GB. Memory profiling is different for each application; no rule of thumb applies, because each application must be understood on an individual basis.
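As a sketch of this sizing step, the snippet below parses the `-/+ buffers/cache` line of `free -m` output to find the application's real footprint. The sample output here is illustrative, with values chosen to mirror the article's idle 1GB guest (233MB used, 763MB free, 37MB net of buffers/cache); the buffers and cached columns are assumed figures consistent with those numbers.

```shell
# Illustrative `free -m` output from an idle 1GB guest (older procps
# versions print the "-/+ buffers/cache" line shown in Figure 1).
sample_output='             total       used       free     shared    buffers     cached
Mem:          1024        233        763          0         40        156
-/+ buffers/cache:          37        959
Swap:          512          0        512'

# The "used" column on the -/+ buffers/cache line (field 3) excludes
# buffer and page-cache memory, so it approximates the true footprint.
real_used=$(printf '%s\n' "$sample_output" | awk '/buffers\/cache/ {print $3}')
echo "Application footprint: ${real_used}MB"   # prints "Application footprint: 37MB"
```

A guest showing only 37MB of real usage is a candidate for a much smaller memory allocation, though, as noted above, each application must be profiled individually before resizing.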
Once an application has been selected, create a project plan to scope the project, outline steps and responsibilities, and identify success criteria. Assigning a formal project manager will help ensure these details are addressed. Typically, a project plan should include hardware and software planning, installation and configuration, and identifying specific test cases to be assessed. While it may be important to demonstrate scalability and performance, these tests can add significant cost and risk. Defining test cases and their associated success criteria must be done carefully.
Conduct team meetings regularly to communicate project status; they're an important way of monitoring progress and solving problems before they need to be escalated to the executive level. The most successful project teams meet at least weekly, sometimes daily during critical phases. These meetings help solve problems and foster teamwork. Relationships developed in the early validation phase can carry over into future projects.
To prevent scope creep, adopt and stick to a clearly documented project plan. Some projects start out as a simple test of a specific application in the Linux on System z environment, but then grow as new applications or new test cases are added. That isn’t always bad, but you should know and understand the effect on the project plan and required resources.
Consider the case of a client that defined a POC to test the IBM WebSphere Application Server in the Linux on System z environment. The application used DB2 data from their z/OS system, so it was a good candidate, given the proximity of the data. The test concluded successfully ahead of schedule, so the client decided to add two more environments that were much more complex. This scope creep caused a problem because they didn't think through the impact on the overall schedule or define the additional success criteria needed. The POC budget was exceeded, and the project gained a poor reputation even though the original test environment worked well.
Terminology is another key element. The mainframe organization uses different terminology and acronyms than distributed systems organizations. For example, “storage” can mean different things to different teams. Operational diagrams of the POC environment that are clearly labeled can help alleviate this problem. A common glossary can be distributed to all team members.
Success Criteria Definition