Sep 15 ’08

Extending z/OS With Linux: A Multi-Protocol File Exchange Gateway

by Editor in z/Journal

While most of the technical press focuses on the sexy side of the Internet, such as Web 2.0, Service-Oriented Architecture (SOA) and the like, big, boring batch and transaction processing systems remain the bread and butter of many large organizations. The lifeblood of these systems, many running z/OS, is electronic data exchange. The Internet has fundamentally changed the relationship between business partners; mainframes previously connected only to private networks using proprietary communication protocols have been forced into the open systems arena. Simply put, z/OS mainframes are routinely expected to exchange files over the Internet using a wide variety of formats, tools, and protocols, many of which aren’t natively supported on z/OS.

What seems like a simple problem, exchanging files with your business partners, can quickly turn into a complicated mess, since each partner seems to have its own favorite combination of protocols, compression methods, and encryption schemes:

• Protocol: FTP, FTPS, SSH/SFTP, HTTP, HTTPS, etc.

• Compression/packing: ZIP, GZIP, TAR, etc.

• Encryption: PGP, SSL/TLS, CMS/PKCS#7, etc.

Solving the exchange format and protocol requirements is only half the battle. When transferring files to platforms other than z/OS, you also must consider:

• Translating from EBCDIC to other codepages

• Converting record-oriented data sets to byte-oriented files: choice of line separators, truncation or wrapping of long lines, trimming of trailing pad characters, etc.

• Support for z/OS data set organizations, record/block formats, and allocation parameters.

In addition, careful attention must be paid to security issues such as:

• Authentication: userids, passwords, key pairs, tokens, etc.

• Authorization: controlled access to files and system resources

• Network security, firewalls, etc.

• “Data at rest”: security of intermediate files created as data is transformed or relayed.

Meeting these requirements with z/OS alone can be a nightmare. Many excellent z/OS products are available to address these issues, but their combinations can be complex and costly. Each often involves a new, unique configuration of tools, Job Control Language (JCL), scripting, coding, testing, and capacity planning.

A solution exists: a wide variety of free tools on Unix/Linux make these tasks easy. Some organizations even feel compelled to abandon z/OS entirely and convert to an open systems platform, choosing instead to confront a whole new set of problems.

So why not combine the best of both worlds? In this article, we describe how to use Linux as a gateway for exchanging files over the Internet with your business partners, while retaining z/OS operational control of the processes and data. We show how Linux and free or open source tools can effectively be used to extend proven z/OS technology.

The hardware and software requirements are surprisingly minimal. Here’s what you’ll need:

• Hardware: an Intel PC with 512MB of memory and 10GB of disk, or a Linux on System z guest or Logical Partition (LPAR)

• z/OS software:

- IBM Ported Tools for z/OS, which provides OpenSSH (a free feature)

- Co:Z Co-processing toolkit (free Apache 2 binary license)

• Linux software:

- Your favorite distribution: Red Hat, SUSE, Ubuntu, Debian, etc.

- OpenSSH, curl, gpg, gzip, bzip2, infozip (all free open source)

- Co:Z Co-processing toolkit (free open source).

The Co:Z Co-processing toolkit allows z/OS batch jobs to securely launch a process on the Linux gateway, redirecting standard input and output streams to traditional z/OS data sets or spool files. In addition, the process launched on Linux can “reach back” into the z/OS job and access MVS data sets, converting them into pipes for use by other Linux commands.
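As a minimal sketch of the idea (the cataloged procedure name COZPROC, the gateway host linuxgw, and the user gwuser are assumptions and will differ at your installation), a job step that runs a single command on the gateway might look like this:

//HELLO    EXEC PROC=COZPROC,ARGS='gwuser@linuxgw'
//STDIN    DD *
# These lines run in a bash shell on the Linux gateway; their
# stdout and stderr flow back into the job's STDOUT and STDERR DDs.
echo "Hello from $(hostname)"
/*

The shell commands between STDIN DD * and /* execute on the gateway, while everything they write flows back over the SSH connection into the launching job's output.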

The Co:Z Co-processing toolkit is installed in two parts: a free binary-only z/OS package and an open source “target system” package. Target system packages are available as Linux LSB RPMs and Windows and Solaris binaries. Written in portable C++, the source can be built on other Unix or Portable Operating System Interface for Unix (POSIX) platforms.

The remaining Linux software (OpenSSH, curl, etc.) is installed with your Linux distribution either by default or using the distribution’s package manager. The examples in this article assume you’re running Linux with bash as your default shell. Other Unix variants and shells can be used, but the examples will need to be modified accordingly.
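The exact package names vary by distribution, but as a sketch, on a Debian or Ubuntu system the pieces used in this article could be installed like this (the Co:Z target-system package is downloaded separately from the Dovetail site):

# Debian/Ubuntu example; use yum or zypper and the equivalent
# package names on Red Hat or SUSE
sudo apt-get install openssh-server curl gnupg gzip bzip2 zip unzip

# The Co:Z target-system package ships as a Linux LSB RPM; on an
# RPM-based distribution it installs with rpm -i <package>.rpm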

We’ll be transferring z/OS data sets, stored on the mainframe, but we don’t want to store them even temporarily on the Linux box. Taking this approach addresses “data at rest” security issues and leaves us with fewer things to worry about.

In this configuration, the z/OS system will initiate and control file transfers (both outbound and inbound) with a batch job step. All file transfer messages will be logged as part of the job, and return codes may be used to control the flow of the job stream. A z/OS operator should never have to log onto the Linux machine to determine the status of a file transfer.

In this article, we rely heavily on the Linux curl package to handle the actual file exchange with our business partners. Curl’s flexible command-line interface supports all the standard file transfer protocols and authentication methods. The curl command lets you send or receive files and redirect its file I/O to pipes. Simple Linux shell scripts, coded directly in JCL, can be used to chain together curl with other commands to meet the requirements of exchanging a file with a particular business partner. Specifically, you can use pipes to combine the curl command with:

• The Linux zip or gzip commands to compress or decompress data as it’s transferred

• The Linux gpg or gpgsm commands to encrypt or decrypt data as it’s transferred

• The Co:Z toolkit fromdsn and todsn commands to convert z/OS data sets to or from pipes.

In Figure 1, the Co:Z launcher is executed in a batch job step (1). This creates an SSH session to the Linux gateway machine as user “gwuser” using a public/private key pair. A Unix shell is started on the Linux gateway, which executes the commands contained in the STDIN DD. The first line (2) runs the fromdsn shell command on Linux, which reaches back into the launching jobstep via the Secure Shell (SSH) connection and converts the data set referenced by DD ORDERS to a stream of bytes. This stream is piped (|) into the curl command (3), which opens an FTP Secure (FTPS) connection to the remote host, partner.com, and uploads the data to “orders.txt.”
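In rough outline, the job step behind Figure 1 looks something like the sketch below; the procedure name COZPROC, the data set and host names, and the partner credentials are illustrative assumptions, and the comments correspond to the numbered callouts above.

//* (1) launch a shell on the Linux gateway as gwuser over SSH
//SENDORD  EXEC PROC=COZPROC,ARGS='gwuser@linuxgw'
//ORDERS   DD DISP=SHR,DSN=PROD.ORDERS.DAILY
//STDIN    DD *
# (2) stream DD ORDERS from the launching job; (3) upload it over FTPS
fromdsn //DD:ORDERS |
  curl --ftp-ssl -u partnerid:password -T - ftp://partner.com/orders.txt
/*

The --ftp-ssl option asks curl for an explicit SSL/TLS-protected FTP session (newer curl releases also accept --ssl-reqd). In practice the partner credentials would come from a netrc file on the gateway (curl -n) rather than being coded in the JCL.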


Let’s consider some of the security aspects of this setup:

• Normal z/OS security controls which data sets and resources are available to this job, which runs as a normal (unprivileged) user.

• The Linux machine can be placed in a network Demilitarized Zone (DMZ). The only connection to z/OS is an encrypted SSH session with the Linux gateway, authenticated by an SSH key pair (a sketch of one way to set up that key pair follows this list).

• The data is never stored on the Linux system, but instead simply piped by the curl command over a Secure Sockets Layer (SSL)-encrypted File Transfer Protocol (FTP) connection to the remote host.
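One common way to set up that key pair, sketched here with illustrative user and host names, is to generate it under the z/OS UNIX identity the batch job runs as, then append the public key to the gateway account's authorized_keys file:

# On z/OS UNIX System Services, as the user the batch job runs under:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # no passphrase, for batch use

# Copy the public key to the gwuser account on the Linux gateway:
cat ~/.ssh/id_rsa.pub |
  ssh gwuser@linuxgw 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
# sshd may also require: chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys

Nothing about the gateway account needs to be privileged; it only has to run the Co:Z target commands and curl.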

By itself, however, this first example isn’t compelling; the z/OS Communications Server/FTP product can, with work, be configured to do FTPS (SSL) directly. Consider Figure 2. In this example, a data set with variable-length binary records is sent to a business partner using HTTP. The -b -l ibmrdw options on the fromdsn command create a binary stream with records delimited by IBM-style record descriptor words (RDWs). This data is piped into the gzip command for compression. The compressed output data is piped into the gpg command to be encrypted. Finally, curl sends the encrypted data using an HTTP URL. This example shows how Linux pipes can be used to quickly connect powerful open source tools with z/OS data sets, offloading much of the processing to an inexpensive hardware platform.
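In rough outline, the pipeline behind Figure 2 might look like this; the data set name, host, and GnuPG recipient are assumptions, and curl -T performs an HTTP PUT, so a partner expecting a POST would need curl --data-binary @- instead.

//SENDBIN  EXEC PROC=COZPROC,ARGS='gwuser@linuxgw'
//BINFILE  DD DISP=SHR,DSN=PROD.TRANS.VBDATA
//STDIN    DD *
set -o pipefail
# variable-length binary records, delimited by IBM-style RDWs, are
# compressed, encrypted, and uploaded without touching Linux disk;
# the partner's public key is assumed to have been imported with gpg --import
fromdsn -b -l ibmrdw //DD:BINFILE |
  gzip -c |
  gpg --batch --trust-model always --encrypt --recipient partner@partner.com |
  curl -T - https://partner.com/upload/trans.bin.gz.gpg
/*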


When this job runs, stdout and stderr output from the Linux shell are redirected to the job’s STDOUT and STDERR DDs, which by default are sent to JES SYSOUT files. For this example, the job’s output looks like Figure 3. In this case, gzip and gpg don’t generate any messages, but Figure 3 shows output from fromdsn, todsn, and curl. The condition code of the batch job step is adopted from the Linux shell script’s exit code (RC=0), so it can be used to influence the flow of subsequent job steps. When multiple commands are connected with pipes, the shell’s default behavior is to return the exit code of the last command only. The bash option set -o pipefail instead makes the pipeline’s exit status that of the last (rightmost) command to fail, so errors in intermediate commands aren’t silently lost. It’s important to use this option so intermediate failures can be detected.
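The difference is easy to demonstrate in any bash shell, using false to stand in for a failing stage earlier in the pipeline:

# Without pipefail, only the last command's status is reported,
# so the earlier failure is invisible:
false | cat ; echo "exit status: $?"     # prints: exit status: 0

# With pipefail, the rightmost non-zero status becomes the pipeline's:
set -o pipefail
false | cat ; echo "exit status: $?"     # prints: exit status: 1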


The previous examples show outbound file transfers. Inbound exchanges can be performed in much the same way. In Figure 4, curl first downloads a file using SFTP (SSH). Its output is piped into the todsn command, which uses the RDWs to separate the binary records and writes them to a z/OS data set. No separate encryption step is needed because the SSH connection is already encrypted.
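In rough outline, the job step behind Figure 4 might look like this; the allocation parameters, host, remote path, and credentials are illustrative, and todsn is assumed here to accept the same -b and -l ibmrdw options described earlier for fromdsn.

//GETINV   EXEC PROC=COZPROC,ARGS='gwuser@linuxgw'
//INVOICES DD DSN=PROD.INVOICES.DAILY,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5)),RECFM=VB,LRECL=27994
//STDIN    DD *
set -o pipefail
# download over SFTP, then rebuild variable-length records from the RDWs
curl -u pickupid:password sftp://partner.com/outbox/invoices.dat |
  todsn -b -l ibmrdw //DD:INVOICES
/*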


More Information

You can learn more by visiting these Websites:

• “Curl” http://curl.haxx.se/

• “GnuPG” http://gnupg.org/

• “Gzip” http://www.gnu.org/software/gzip/

• “IBM Ported Tools for z/OS: OpenSSH” www.ibm.com/servers/eserver/zseries/zos/unix/openssh/index.html

• “Co:Z Co-Processing Toolkit for z/OS” http://dovetail.com/coz.

Conclusion

The file transfer gateway shown here is just one example of using Linux to extend z/OS; there are many other possibilities. The ability to leverage the flexibility of Linux and its wealth of open source software under the control of z/OS is a topic that hasn’t received much attention, but is ripe for exploration.