The x86 node's root file system will be exported to System z via the Network File System (NFS). NFS was a pragmatic choice: it is included in all distributions (see the first design goal), and its concepts are widely understood (see the second design goal). The NFS mount gives System z full access to the whole x86 environment. It doesn't allow the x86 node to access the System z node's file system; however, application integration accommodates optional sharing of directories of the System z virtual server with x86 nodes.
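The export described above could be set up along the following lines. This is a minimal sketch, not the actual application-integration configuration: hostnames, mount points, and export options are assumptions.

```shell
# Hypothetical sketch of the NFS export; names and options are illustrative.

# On the x86 node: export the root file system to the System z host.
# no_root_squash lets root on System z operate with root rights in the export.
echo '/ zhost.example.com(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra

# On the System z host: mount the x86 root under a per-node directory.
mkdir -p /x86/node1
mount -t nfs x86node1.example.com:/ /x86/node1
```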
A common use case for sharing is home and data directories. To keep access permissions consistent, users and groups are kept in sync between the System z master and an attached node. Standard lock files, as honored by common Linux libraries and tools, prevent simultaneous and potentially conflicting changes to the user or group databases on either side. Application integration can also change the networking setup of the nodes to employ IP masquerading, establishing the System z node as a Network Address Translation (NAT) gateway for the x86 nodes.
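The NAT gateway role can be sketched with standard Linux tooling. Interface names and addresses below are assumptions for illustration; the actual application-integration setup may differ.

```shell
# Hypothetical sketch of the masquerading setup; addresses and the
# interface name eth0 are assumptions.

# On the System z node: enable IPv4 forwarding.
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic from the x86 subnet as it leaves via the external interface.
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j MASQUERADE

# On the x86 node: route external traffic through the System z node.
ip route add default via 192.168.10.1
```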
To invoke an x86 application, the operator simply starts the binary accessible through the file system export. Application integration takes over at this point: since the x86 binary can't be executed directly in Linux on System z, the kernel's binfmt_misc mechanism catches the attempt and calls a registered handler. This handler starts the binary on the attached node, in its unchanged x86 environment. To represent the application in the Linux on System z environment, the binfmt_misc handler uses code that acts as a shadow process.
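For illustration, a binfmt_misc handler for x86-64 ELF binaries can be registered as follows. The magic/mask pair is the common pattern used to match x86-64 ELF executables; the handler path /usr/bin/ai_shadow is a hypothetical stand-in for the shadow-process launcher, not the actual component name.

```shell
# Sketch only: /usr/bin/ai_shadow is a hypothetical handler name.
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc 2>/dev/null

# Match the ELF header of x86-64 binaries (e_machine 0x3e) and hand
# the binary over to the shadow-process launcher.
echo ':x86_64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/ai_shadow:' \
  > /proc/sys/fs/binfmt_misc/register
```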
Shadow processes serve as proxies on System z for the x86 application's processes. They consume no significant amount of CPU cycles or memory. Real processes and shadow processes have a 1:1 relationship (see Figure 1). Many process attributes, including effective and real user and group IDs, arguments, environment, and resource limits, are applied to the execution on x86 and kept in sync between real and shadow processes; however, process IDs differ on the two sides (i.e., the pid namespaces are disjoint). The name of the shadow process corresponds to the x86 process name (and is kept in sync with it), but is prefixed to indicate the x86 process ID and hostname.
Figure 2 shows how shadow processes appear in the output of the “ps” command. The x86 process lifecycle is mirrored on System z, including events such as forking and exiting (e.g., if a process on x86 forks, its shadow process forks as well). Threads are shadowed, too. Open files, as used for standard I/O and pipes, as well as terminal capabilities, are forwarded between shadow and real processes. Signals sent to a shadow process are passed on to the real x86 process, completing the representation of x86 applications in Linux on System z.
The accurate mapping of real processes to shadow processes enables monitoring and automation from the Linux on System z console. Resource-consumption data is exposed through a merged /proc file system that covers both System z and x86 processes. Application integration-specific tooling (e.g., ai_top) builds on this infrastructure to provide a consolidated view of System z and x86 resources such as CPU and memory. Examining x86 resource usage from System z this way remains an explicit task, which keeps local and remote resource consumption clearly distinguishable (second design objective).
Programs can be installed by simply running their installation executables, which transparently installs the application into the x86 environment. Alternatively, software packaged in the common rpm format can be handled through a new mechanism: application integration introduces the notion of so-called meta-rpms, in which x86 rpms (32- or 64-bit) are wrapped in s390x-type meta-rpms. These meta-rpms can be installed on the System z host; during their installation, the embedded x86 rpm is deployed to and installed on the attached x86 node. This allows for a consistent package management procedure for both native s390x packages and meta-rpms.
A local query of the System z rpm database reveals the packages installed on attached x86 nodes through the presence of the corresponding meta-rpms. Dependencies of x86 rpms can be mapped into the corresponding meta-rpm namespace. This allows dependencies to be resolved through meta-rpms already on the System z host, and enables the use of repositories and package managers such as Zypper or Yum.
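The meta-rpm workflow could then look roughly like this from the System z side. Package names below are purely illustrative assumptions; only the general zypper/rpm usage is standard.

```shell
# Hypothetical workflow; the package name someapp-x86-meta is illustrative.

# Install an x86 application via its s390x meta-rpm on the System z host;
# the wrapped x86 rpm is deployed to the attached node during installation.
zypper install someapp-x86-meta

# Query the local rpm database; meta-rpms reveal what is installed
# on the attached x86 node.
rpm -qa '*-x86-meta'

# Dependencies are mapped into the meta-rpm namespace and can be
# inspected locally.
rpm -q --requires someapp-x86-meta
```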
Consolidated system logging on the System z side and synchronized clocks through a System z-based time server complement the operational unification aspects of Linux on System z. All these facets of application integration provide a combined OS environment that ties x86 operations into Linux on System z management. For some tasks, specific tooling is provided, since full transparency can't be achieved without semantic issues or significant kernel changes, which would contradict the basic design goals.
Application integration is freely available for download from the Linux on System z developerWorks Web pages (see www.ibm.com/developerworks/linux/linux390/applint.html), including a manual explaining the concepts and use of the packages. It has the status of a technology study, presenting an early phase of application integration to seek early feedback on future development directions. Some limitations apply; one of them is a strict 1:1 relationship between System z and x86 nodes: one System z image can attach only one x86 image at a time, and one x86 image can be attached to only one System z image. Also, support is provided through an email address (mailto:firstname.lastname@example.org) on a best-can-do basis; anything beyond this level needs to be discussed case by case.
How has this improved the operational situation in mixed-architecture solutions? The challenge of handling hybrid environments has been reduced to the complexity of managing a set of System z images, a complexity many customers have under control today. It enables consolidation scenarios that previously weren't possible:
• Applications not available for Linux on System z, such as x86-only components of application suites, virus scanners, software optimized on instruction set specifics, or merely Oracle's Java Environment
• Compute-intensive workloads not running economically on System z (e.g., commercial High-Performance Computing [HPC] and deep analytics software working with System z back-end data), or Extract, Transform and Load (ETL) scenarios with complex transformations
• Home-grown applications that can’t be ported easily; reasons can include endianness issues, the lack of knowledge about the application's internals, and fear of touching the application.
Application integration doesn't preclude higher-level management products. Where such software works against the resource types synchronized by application integration, x86 applications and resources are managed transparently (e.g., for automation across the hybrid complex). Where management software requires explicit introspection into the x86 nodes (e.g., using agents), OS-level operation of these agents can still take place via Linux on System z.
While the zManager provides common platform and virtualization management across the hybrid environment, application integration provides unified operations at the OS and application level. It integrates Linux on x86 workloads into Linux on System z environments and lets you focus on a best-fit selection of application components. A question remains to be answered in each individual case: Does zEnterprise emphasize the integration of distributed components into System z from a workload and operational perspective, or will it focus on providing a common home for the System z and distributed worlds? There will always be reasons for both.