Apr 6 ’11

The z/VSE Fast Path to Linux on System z

by Editor in z/Journal

Linux on System z has been an important part of z/VSE’s Protect, Integrate and Extend (PIE) strategy for many years.

Linux on System z provides many useful functions that z/VSE doesn’t provide. It offers WebSphere, Java, DB2 Universal Database, a rich set of development tools, and a growing selection of packaged applications. On the other hand, z/VSE provides excellent, cost-effective capabilities to run traditional workloads such as CICS transactions or batch jobs.

To allow easy integration of z/VSE with other systems and applications, z/VSE provides a large set of so-called connectors that allow access to various types of z/VSE data and applications from remote applications and vice versa. Examples of such connectors include the VSE connector server and client, the VSAM redirector, the VSE script server, VSE VTAPE, CICS Web support and Web services support, as well as products such as CICS Transaction Gateway, DB2 Server for VSE Client Edition, and the WebSphere MQ client and server.

Most of these connectors are based on standard TCP/IP communication. A TCP/IP stack is required on both z/VSE and Linux on System z, and a network connection must exist between the two systems. This can be a network cable, a shared OSA adapter, or a HiperSockets network.

Most connectors can be used with various operating systems such as Linux, UNIX, AIX, and Windows. However, for best performance, it’s recommended you keep the distance between the two sides as short as possible. Running the connector solution on Linux on System z right beside the z/VSE systems on the same mainframe server, in a separate Logical Partition (LPAR) or z/VM guest, keeps that distance to a minimum. It also allows the two sides to be interconnected using virtual networks such as HiperSockets, virtual LANs, and virtual switches (VSWITCH).

While such virtual networks help reduce network transfer and latency times, the communication is still based on the TCP and IP protocols. When TCP/IP was designed, networks were far less reliable than they are today, so it was designed to provide reliable communication over an unreliable network. TCP/IP provides mechanisms to handle packet loss, duplicate packets, packet sequence errors, and damaged or incomplete packets. To protect against such errors, it uses techniques such as sequence numbers, acknowledgments, and checksums. All these features are required on a real network, but they add a certain amount of processing overhead to the TCP/IP stacks on all involved systems. On z/VSE, the TCP/IP stack runs in a separate partition, so all data sent or received by an application must first be passed (copied) to the TCP/IP stack. This requires some form of inter-process communication and dispatching, which adds further overhead. All of this extra processing increases CPU utilization and can reduce system performance and throughput.
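
To give a feel for the per-packet work involved, the following sketch shows a ones’-complement checksum in the style the Internet protocols use (RFC 1071); a TCP/IP stack performs this kind of calculation, plus sequence-number and acknowledgment bookkeeping, for every segment it sends or receives. The sample payload is, of course, only illustrative:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Ones'-complement checksum in the style of RFC 1071: the kind of
   per-packet arithmetic a TCP/IP stack performs on every segment. */
static uint16_t inet_checksum(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {                  /* add the buffer as 16-bit words */
        sum += ((uint32_t)p[0] << 8) | p[1];
        p += 2;
        len -= 2;
    }
    if (len == 1)                      /* odd trailing byte, padded with zero */
        sum += (uint32_t)p[0] << 8;

    while (sum >> 16)                  /* fold carries back into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;             /* ones' complement of the sum */
}

int main(void)
{
    const char payload[] = "sample packet payload";   /* illustrative data */
    printf("checksum: 0x%04X\n", inet_checksum(payload, sizeof(payload) - 1));
    return 0;
}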

When z/VSE runs side by side with Linux on System z on the same physical mainframe server, the communication between them still works the same way using TCP/IP, including all its processing overhead for checksums, sequence numbers, and acknowledgments. While these things are essential for a real network, why do we have to do all this expensive processing when the two systems run in the same hardware box? Why do we have to go all the way through the TCP/IP stack on the one side, through the (virtual) network and through the TCP/IP stack on the other side, until we reach the target application? Shouldn’t there be a more direct way of communication that doesn’t involve all that expensive processing? The answer is the z/VSE Fast Path to Linux (Linux Fast Path, or LFP) on System z.

Linux Fast Path

LFP is a brand-new function provided as part of z/VSE V4.3, which has been generally available since Nov. 26, 2010. It provides a more direct communication path (a fast path) between z/VSE applications and applications running on Linux on System z.

Instead of using TCP/IP-based network communication, LFP uses Inter-User Communication Vehicle (IUCV)-based communication. IUCV is a z/VM function that has existed for years and is heavily used by z/VM and other applications. It provides a fast, reliable communication path between z/VM guests running under the same z/VM system. IUCV doesn’t need to care about checksums, sequence numbers, acknowledgments, packet loss, or damaged packets, since the data transfer is essentially a memory copy from one z/VM guest to another. As a result, it causes much less overhead than TCP/IP-based communication.

IUCV only works between z/VM guests in the same z/VM system, so z/VSE and Linux on System z must run under the same z/VM system to use LFP. This works best on an IBM System z10 or IBM zEnterprise 196 in a so-called z/VM-mode LPAR in combination with z/VM V5.4 or later. A z/VM-mode LPAR allows mixing standard Central Processors (CPs) with specialty engines such as Integrated Facility for Linux (IFL) processors. Before the z10 and the z/VM-mode LPAR, you could define either Linux-only LPARs containing just IFL processors, or LPARs containing standard CPs plus System z Application Assist Processors (zAAPs) and System z Integrated Information Processors (zIIPs) for z/OS. You had to define one or more Linux-only LPARs to run Linux on System z, probably under z/VM, and one or more additional LPARs to run the z/VSE systems either natively or under z/VM.

With the z/VM-mode LPAR and z/VM V5.4 or later, you can now run both z/VSE and Linux on System z in the same LPAR and under the same z/VM system. A z/VM-mode LPAR doesn’t just allow using IUCV communication and LFP; it can also help reduce maintenance costs for the z/VM system image, because you only need to maintain one z/VM system instead of a separate one for each LPAR.

Existing z/VSE applications and connector components use socket calls to establish a connection to another application. The socket interface is the common interface through which applications request networking functions from a TCP/IP stack. The LFP provides binary-compatible socket interfaces to the applications, so from the application perspective, the LFP behaves like a TCP/IP stack. That means existing applications can use the LFP unchanged; there’s no need to change or recompile them. The applications still “think” they’re using TCP/IP to communicate with the partner, but LFP “intercepts” the socket call and routes it through IUCV directly to Linux on System z without involving a TCP/IP stack on z/VSE.
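
To illustrate what “unchanged” means in practice, here is a minimal sketch of a socket client written against the standard C socket calls (the 10.1.1.2 address, port 4711, and message text are purely illustrative, and the exact header files may differ slightly in the LE/C environment on z/VSE). Whether these calls are served by a TCP/IP stack or by an LFP instance is decided outside the program, for example via the SYSPARM setting described below:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in peer;
    char reply[256];
    ssize_t len;
    int sock;

    /* The socket() call is what LFP intercepts; the code is identical
       whether a TCP/IP stack or an LFP instance handles it. */
    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(4711);                   /* illustrative port */
    peer.sin_addr.s_addr = inet_addr("10.1.1.2");  /* illustrative Linux address */

    if (connect(sock, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        close(sock);
        return 1;
    }

    send(sock, "HELLO", 5, 0);                     /* request */
    len = recv(sock, reply, sizeof(reply) - 1, 0); /* reply */
    if (len > 0) {
        reply[len] = '\0';
        printf("reply: %s\n", reply);
    }

    close(sock);
    return 0;
}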

On Linux on System z, the LFP daemon (LFPD) must be running. It receives the data via IUCV and passes it to the TCP/IP stack on the Linux side. Basically, every socket call performed by a z/VSE application is forwarded to the LFPD on Linux, which then performs that socket call against the Linux TCP/IP stack. If the target application resides on the same Linux system as the LFPD, the Linux TCP/IP stack automatically establishes a UNIX pipe-like communication path between the LFPD and the local Linux application, which bypasses most of the TCP/IP processing.
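
On the Linux side, the target application is just an ordinary socket server; it needs no LFP-specific code either, because the connection it accepts is established by the LFPD on behalf of the z/VSE application. A minimal sketch (again with an illustrative port 4711 and a trivial request/reply) might look like this:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    char buf[256];
    ssize_t len;
    int lsock, csock;

    lsock = socket(AF_INET, SOCK_STREAM, 0);
    if (lsock < 0) { perror("socket"); return 1; }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(4711);                   /* illustrative port */

    if (bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(lsock, 5) < 0) {
        perror("bind/listen");
        return 1;
    }

    /* The connection may come from a remote TCP/IP client or from the
       LFPD acting on behalf of a z/VSE application; the server cannot
       tell the difference and doesn't need to. */
    csock = accept(lsock, NULL, NULL);
    if (csock < 0) { perror("accept"); return 1; }

    len = recv(csock, buf, sizeof(buf) - 1, 0);
    if (len > 0) {
        buf[len] = '\0';
        printf("received: %s\n", buf);
        send(csock, "OK", 2, 0);                   /* reply flows back via the LFPD */
    }

    close(csock);
    close(lsock);
    return 0;
}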

Figure 1 shows the data flow when using regular TCP/IP communication (black arrows), as well as the data flow when using LFP (red arrows). As you can see, fewer components are involved on the z/VSE side when using LFP, which reduces the processing overhead on z/VSE.

[Figure 1: Data flow with regular TCP/IP communication (black arrows) and with the LFP (red arrows)]

The LFP provides the following socket interfaces:

Any application using one of these socket interfaces can be used unchanged with the LFP.

You can start multiple LFP instances on a z/VSE system. Each instance is identified by a two-digit system ID, in exactly the same way you identify your TCP/IP stacks. Applications can choose which stack to talk to by supplying the ID of the desired stack. This can be done using the // OPTION SYSPARM='nn' statement in the JCL running the application, or programmatically via the INITAPI socket call.

To configure the LFP, two things must be set up: the LFPD must be configured and started on Linux on System z, and an LFP instance must be started on z/VSE.

Figure 2 shows a sample configuration for the LFPD on Linux (running under the z/VM guest ID LINR02). This configuration is placed in a file that’s accessible under the name /etc/opt/ibm/vselfpd/confs-enabled/lfpd-LINR02.conf. To start the LFPD, use the command lfpd-ctl start LINR02. The LFPD then listens for incoming connections from an LFP instance started on z/VSE.

[Figure 2: Sample LFPD configuration on Linux on System z (z/VM guest LINR02)]

Figure 3 shows the corresponding configuration to start up an LFP instance on z/VSE. The z/VM guest ID of the z/VSE system is VSER05. The LFP instance is started and connects to the listening LFPD on Linux using IUCV. When the start job completes successfully, the LFP instance on z/VSE is ready for use by socket applications.

[Figure 3: Sample startup of an LFP instance on z/VSE (z/VM guest VSER05)]

For more details about the LFP, its configuration and the supported socket APIs, please see the z/VSE V4R3.0 TCP/IP Support manual (SC34-2604-00).