IUCV works only between z/VM guests in the same z/VM system, so z/VSE and Linux on System z must run under the same z/VM system to use LFP. This works best on an IBM System z10 or IBM zEnterprise 196 in a so-called z/VM-mode LPAR, in combination with z/VM V5.4 or later. A z/VM-mode LPAR allows mixing standard Central Processors (CPs) with specialty engines such as Integrated Facility for Linux (IFL) processors. Before the z10 and the z/VM-mode LPAR, you could define only Linux-only LPARs with just IFL processors, or LPARs with standard CPs plus System z Application Assist Processors (zAAPs) and System z Integrated Information Processors (zIIPs) for z/OS. You had to define one or more Linux-only LPARs to run Linux on System z, probably under z/VM, and one or more additional LPARs to run the z/VSE systems, either natively or under z/VM.
With a z/VM-mode LPAR and z/VM V5.4 or later, you can now run both z/VSE and Linux on System z in the same LPAR and under the same z/VM system. A z/VM-mode LPAR doesn't just enable IUCV communication and LFP; it can also reduce maintenance costs for the z/VM system image, because you maintain only one z/VM system instead of one per LPAR.
Existing z/VSE applications and connector components use socket calls to establish a connection to another application. The socket interface is the common interface to a TCP/IP stack to perform networking functions. The LFP also provides binary-compatible socket interfaces to the applications. From the application perspective, the LFP behaves like a TCP/IP stack. That means existing applications can use the LFP unchanged. There’s no need to change or recompile the application. The applications still “think” they’re using TCP/IP to communicate with the partner, but LFP “intercepts” the socket call and routes it through IUCV directly to Linux on System z without involving a TCP/IP stack on z/VSE.
The LFP daemon (LFPD) must run on Linux on System z. It receives the data via IUCV and passes it to the TCP/IP stack on the Linux side. Essentially, every socket call performed by a z/VSE application is forwarded to the LFPD on Linux, which then performs that socket call against the Linux TCP/IP stack. If the target application resides on the same Linux system as the LFPD, the Linux TCP/IP stack automatically establishes a UNIX pipe-style communication path between the LFPD and the local Linux application. This pipe also bypasses most of the TCP/IP processing.
Figure 1 shows the data flow when using regular TCP/IP communication (black arrows), as well as the data flow when using LFP (red arrows). As you can see, fewer pieces are involved on the z/VSE side when using LFP. This reduces the processing overhead on z/VSE.
The LFP provides the following socket interfaces:
- LE/C socket interface through an alternative TCP/IP interface phase
- EZA SOCKET and EZASMI interface through an alternative EZA interface phase
- CSI Assembler socket interface through the SOCKET macro (limited support)
Any application using one of these socket interfaces can be used unchanged with the LFP.
You can start multiple LFP instances on a z/VSE system. Each instance is identified by a two-digit system ID, exactly the same way you identify your TCP/IP stacks. Applications choose which stack to talk to by supplying the ID of the desired stack, either with the // OPTION SYSPARM='nn' statement in the JCL that runs the application or programmatically through the INITAPI socket call.
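As an illustration, a JCL fragment (job and program names here are hypothetical) that runs an application against the stack or LFP instance with system ID 02 might look like this:

```jcl
// JOB RUNAPP
* SELECT THE TCP/IP STACK OR LFP INSTANCE WITH SYSTEM ID 02
// OPTION SYSPARM='02'
// EXEC MYAPP,SIZE=MYAPP
/*
/&
```

Because the selection happens in the JCL, the same program can be pointed at a traditional TCP/IP stack or at an LFP instance without any code change.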
To configure the LFP, perform the following steps:
- Run Linux and z/VSE as z/VM guests under the same z/VM system. Ideally, use a z/VM-mode LPAR.
- Allow IUCV communication between the involved z/VSE and Linux guests using z/VM directory statements such as IUCV ALLOW and IUCV ANY. For details about the parameters, check the z/VM documentation.
- Install and configure the LFPD on Linux. The LFPD is available as an RPM package that’s provided with z/VSE V4.3. The configuration is set using a configuration file with a handful of parameters. Each LFP instance running on z/VSE needs a separate LFPD on Linux.
- You don’t need to install anything on z/VSE. The LFP code comes as part of z/VSE V4.3. You just need to provide a configuration for each LFP instance you want to start. The configuration includes the name of the z/VM guest running Linux on System z, IUCV application names, memory allocations, and the system ID under which the LFP instance is known.
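As a sketch of the IUCV authorization step, the relevant statements in the z/VM user directory might look like the fragment below. The guest names LINR02 and VSER05 are taken from the sample configuration in the figures; passwords, storage sizes, and all other directory statements are elided:

```
* z/VM user directory (fragment; other statements elided)
USER LINR02 ...
* Any guest may establish an IUCV connection to this Linux guest
  IUCV ALLOW
USER VSER05 ...
* The z/VSE guest may establish IUCV connections to any guest
  IUCV ANY
```

Check the z/VM documentation for the exact statement placement and options before applying anything like this to a real directory.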
Figure 2 shows a sample configuration for the LFPD on Linux (running under the z/VM guest ID LINR02). Place this configuration in a file accessible under the name /etc/opt/ibm/vselfpd/confs-enabled/lfpd-LINR02.conf. To start the LFPD, use the command lfpd-ctl start LINR02. The LFPD then listens for incoming connections from an LFP instance started on z/VSE.
Figure 3 shows the corresponding configuration to start up an LFP instance on z/VSE. The z/VM guest ID of the z/VSE system is VSER05. The LFP instance is started and connects to the listening LFPD on Linux using IUCV. When the start job completes successfully, then the LFP instance on z/VSE is ready for socket application use.
For more details about the LFP, its configuration and the supported socket APIs, please see the z/VSE V4R3.0 TCP/IP Support manual (SC34-2604-00).