Again, it’s been a quiet couple of months in the Linux on System z world. Not a lot of new development from any of the major vendors, and to some extent, that says a lot about the state of Linux in the traditional System z world: We’re there, and everywhere, just as I predicted back in 2000. Take that, naysayers …
A couple of developments in the general Linux world that will affect us in the coming year are worth commenting on, though. First is a dramatic change in the way start-up processing is done. After decades of putting up with the old Berkeley and SVID methods of handling start-up/shutdown via init scripts, Fedora is introducing a new start-up manager called “systemd.” Systemd:
• Provides aggressive parallelization capabilities
• Provides on-demand (socket- and D-Bus-based) activation for starting services
• Keeps track of processes using Linux cgroups; this means every executed process gets its own cgroup, allowing processes to be scheduled and managed
• Supports snapshotting and restoring of the system state
• Maintains mount and automount points
• Implements an elaborate transactional dependency-based service control logic
• Provides controls for each process spawned by systemd; controls are provided for a wide range of system resources and features before the application is initiated (that’s hard to do in the current scheme).
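To make that list concrete, here's a rough sketch of what a native systemd service definition looks like in place of an init script. The service name, paths, and values below are invented for illustration, and the exact set of available directives depends on the systemd version your distribution ships:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example, not a real product
[Unit]
Description=Example enterprise application
# Transactional dependency logic: don't start until the network is up
After=network.target
Requires=network.target

[Service]
Type=forking
ExecStart=/opt/myapp/bin/start.sh
ExecStop=/opt/myapp/bin/stop.sh
# Per-service controls applied before the application is initiated;
# every process this service spawns lands in the unit's own cgroup,
# so systemd can track and manage the whole process tree
LimitNOFILE=8192

[Install]
WantedBy=multi-user.target
```

Compare that with a typical SVID init script: the dependencies, restart behavior, and resource limits are all declared up front instead of being buried in shell logic, which is what makes the aggressive parallelization possible.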
There’s a load of interesting documentation on systemd available at http://0pointer.de/blog/projects/systemd-docs.htm; it’s definitely worth a read.
So, why is this important at the enterprise level? It changes how applications are started. This code is highly likely to be part of the next enterprise distributions from both Red Hat and SUSE, and application developers will need to adapt their code accordingly. Certainly, it’s something you should be asking your application Independent Software Vendors (ISVs) about, so I’d suggest making systemd compatibility a part of the application evaluations you do in the next year or so. There’s a compatibility mode for older init scripts, but you should be telling your vendors that’s not an acceptable long-term answer. There’s too much good stuff in systemd to turn it off.
Second, a little undiscovered gem. For those of you who have CA VM:Tape, there’s a nice (if sparsely documented) interface that lets Linux systems mount and manage tapes registered in the VM:Tape catalog. Linux systems can mount and dismount tapes, with VM:Tape handling tape allocation and interaction with silos and other automation devices. The setup is pretty easy (once you find and decipher the documentation), and it’s very smooth to have VM:Tape deal with sharing drives, especially in environments where tape drives are shared with z/OS systems in other LPARs: VM:Tape and CA-MIM are invoked as necessary to pass drives around, and you never have to deal with DFSMS/VM’s Removable Media Services (RMS) component. The major drawback at the moment is that the documentation is obscure, at best, and it can be tricky to find the right manual.
Hint to CA Technologies: Your current documentation format needs some work; the old format is a lot better for mainframe products. We understand that it’s simpler to have all the documentation in the same format for all products, but you lost a lot in the translation.
Last, thank you all for your notes of support from last month’s column. We had hoped to have many more months with Gunther, but he passed away the morning of May 23. He was a good friend and left a definite paw print on our hearts.