Operating Systems

Looping for Performance: A Tuning Methodology


In the computer performance arena, we often think of “looping” in a negative sense. But is all looping really bad? This article suggests using the loop, one of the most frequently used control structures in programming, in our daily performance analysis to find the largest users of a system resource.

This technique can help you identify and reduce the CPU portion of your Total Cost of Ownership (TCO), and it can be applied just as successfully to your I/O and memory resources. This article describes how one organization achieved a 30 percent reduction in the MIPS used by a single application running on several logical partitions (LPARs). Although the benefits gained will vary from organization to organization, there will be benefits.

Looping for performance is an iterative approach that asks the question, “What’s the biggest, and why?” at each level of a properly characterized application workload. Once the question is answered at one level, we proceed to the next level and ask it again. We continue until we find something we can change; it might be a program, a DB2 plan, a parameter, or even something that no longer needs to run (the best of all finds).

Why do we care about finding resource excesses? Well, how often do you want to ask management for a new system? If you don’t know what’s ravaging your resources, how do you plan? When resources are tight, this process may find enough room to delay that next upgrade or ensure that it’s justified (see Figure 1).

Silos Are Good for the Farmers

Let’s consider why we need an iterative solution and why one-stop shopping just doesn’t cut it in the new Web-driven world. Throughout the last three decades of computer performance analysis, we’ve progressed far in our understanding of the platforms and subsystems we manage. We can take pride in the stability of our systems and in our alert mechanisms that help us when the unexpected happens. We’ve developed groups of subject matter experts, DBAs, systems programmers, etc., who have in-depth knowledge in their areas of expertise. 

But just when we start to get a handle on our environments, they change. Gone are the single subsystem applications. Their replacements tend to be multi-subsystem and even multi-platform. The single-stepped, vertical view into our silos needs to be replaced by a set of processes that can horizontally join the information in our silos. We need this horizontal view because that’s how our applications are being designed and, ultimately, how our customers will see our companies (see Figure 2). 

The first step toward arriving at this horizontal view is to become one with our applications. We need to see our applications for what they are: a complicated, cross-platform mixture of architectures. This may be your hardest challenge. Your early focus should be on characterizing what’s important to your organization. Often, what’s important is the largest user of CPU resources under your jurisdiction. For this article we’ll stick to z/OS and the subsystems that run there; this technique, however, is applicable to all environments.
