Linux on z/VM: Configure for Performance

As installations embrace Linux running on zSeries, there's a widespread lack of performance expertise. VM performance analysts likely learned their trade managing relatively small, well-understood workloads compared to today's Linux environments; even many current z/OS systems are much smaller than typical Linux servers. Server performance analysts and systems administrators find their skills lacking because their experience is in dedicated server environments, where hardware resources are relatively inexpensive and adding hardware to solve a performance problem has often been cheaper than serious analysis and research.

Many default z/VM options aren’t appropriate for addressing performance issues when running large Linux servers. When tuning Linux on zSeries, any performance guideline or “rule of thumb” from dedicated hardware must be reevaluated for applicability in the shared resource environment. For example, adding memory to a dedicated server can often solve a performance problem, whereas increasing virtual machine size on z/VM means fewer resources are available for other servers.
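
For a concrete sense of that trade-off, the sketch below shows how a guest's virtual machine size can be inspected and changed with CP commands, issued from the guest's 3270 console with the #CP prefix or from Linux via the vmcp utility. The 2G value is purely illustrative, the guest's directory entry must already permit that maximum, and DEFINE STORAGE resets the virtual machine, so Linux has to be re-IPLed afterward. Every megabyte added here is a megabyte z/VM can no longer give to the other servers.

    QUERY VIRTUAL STORAGE        (show the current virtual machine size)
    DEFINE STORAGE 2G            (grow it; this resets the guest and requires a re-IPL)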

Configuration Topics

This article discusses three configuration areas:

  • Memory: There are many areas to consider when configuring a shared resource environment, and memory is the most complex. Over-committing memory often means Linux servers can share real memory more effectively, but some network options must be carefully chosen because they affect the ability to share memory, often in counterintuitive ways that may change from one z/VM release to the next.
  • Linux configuration: When configuring Linux servers to share resources, memory sizes, swapping configuration, and even the number of virtual processors given to each server must be carefully chosen (a sample guest definition follows this list). A common mistake when implementing Linux on zSeries is to create one server and replicate it without carefully evaluating its resource requirements.
  • z/VM configuration: z/VM has many configuration options, including expanded storage size, Minidisk Cache (MDC) and MDC storage allocation, z/VM paging and spooling definitions, DASD configuration, DASD hardware cache, and the channel subsystem.
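
As a minimal sketch of the kind of guest definition being discussed, the z/VM user directory entry below creates a hypothetical guest LINUX01 with 512MB of virtual storage (and a 1GB maximum), two virtual CPUs, a 3390 minidisk for its root file system, and a VDISK intended as a fast swap device. Every name, device number, size, and extent here is an illustrative assumption, not a recommendation; the right values depend entirely on the workload.

    USER LINUX01 PASSWORD 512M 1G G
    * Two virtual CPUs; define only as many as the workload can really use
    CPU 00 BASE
    CPU 01
    * Boot Linux from the root minidisk at virtual address 0201
    IPL 0201
    * Root file system on a 3390 minidisk (volser and extents are placeholders)
    MDISK 0201 3390 0001 3338 LNXVOL MR
    * Memory-backed VDISK (FB-512) to be used by Linux as swap
    MDISK 0111 FB-512 V-DISK 512000 MR

The gap between the initial size (512M) and the maximum (1G) leaves room to grow the guest later with DEFINE STORAGE without a directory change, at the cost of a reset and re-IPL.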

Configuring Storage

There are three common environments for supporting Linux on zSeries, each with different requirements.

The first implementation of Linux on zSeries was for enterprise infrastructure, supporting applications such as Domain Name Server (DNS), Samba, and Apache Web servers. I/O rates were low and virtual servers were relatively small, normally in the 64MB to 256MB range; real memory requirements for such environments were less than 2GB. Next, applications such as WebSphere, Domino, and Oracle became available for Linux in 31-bit mode. These applications do more I/O and need much larger virtual memory sizes, so these environments required much more real memory than traditional z/VM systems, and more research to understand those requirements. Once 64-bit support became available, applications such as Oracle 10G, WAS 6.02, and SAP started to be implemented in servers in the 2GB to 8GB range. Many such sites also started to experience mysterious performance problems.

Memory Considerations

When planning for Linux running under z/VM, an overriding requirement is to share memory. Even on systems with memory over-configured to support all the Linux servers, there are still z/VM facilities that use memory below the 2GB line.
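
One rough way to watch that contention, assuming a user ID with the necessary privilege class and a z/VM level whose output distinguishes frames below and above 2GB, is with the CP commands sketched below; treat the exact output format as release-dependent.

    INDICATE LOAD                (overall storage, paging, and processor load indicators)
    QUERY FRAMES                 (real storage frame counts, including below-2GB frames)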

Linux was designed to use all available memory effectively, mainly by caching data and avoiding swapping. Linux allocates memory on a Least Recently Used (LRU) basis, meaning that when a process needs memory, the pages handed out will likely be the ones that haven't been referenced in a long time. But z/VM steals real memory on much the same basis, so in a shared environment the guest pages z/VM has already paged out are exactly the ones Linux will try to reuse next, and z/VM must page them back in just so Linux can overwrite them. This fact alone has a huge impact on Linux memory configuration. The challenge, then, is how to influence memory reference patterns to minimize paging, especially paging to disk.
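
One common way to influence those reference patterns is to keep the virtual machine relatively small, so Linux cannot build an enormous page cache, and to give it a high-priority swap device on a memory-backed VDISK so that the swapping this causes stays cheap. The sketch below assumes the VDISK sits at virtual address 0111 (as in the earlier directory example) and surfaces in Linux as /dev/dasdb; the device names and priority are illustrative assumptions.

    # Bring the VDISK online (device number assumed)
    chccwdev -e 0.0.0111

    # An FBA VDISK appears with a single implicit partition; make it swap and enable it
    mkswap /dev/dasdb1
    swapon -p 10 /dev/dasdb1

    # /etc/fstab entry so the high-priority swap is re-enabled after a re-IPL
    /dev/dasdb1   swap   swap   pri=10   0 0

Because a VDISK is backed by z/VM memory, it should be sized modestly; its purpose is to make occasional swapping inexpensive, not to substitute for a sensible virtual machine size.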
