Feb 22 ’10
What’s New With Novell’s SUSE Linux Enterprise Server 11 for System z?: The Rest of the Story
In the October/November 2009 issue, we discussed changes in SUSE Linux Enterprise Server (SLES) 11 to software packaging and software selections, z/VM interoperability, the installer, network configuration, and system management/configuration. This article examines additional tools and functions.
Install and Maintenance Tool Architecture
Yet another Setup Tool (YaST) and the installer received an internal overhaul and an external facelift. The internal work reduced duplicated code and functions. The external facelift provided a more consistent user interface and a completely new Partitioner interface; many users had told Novell that the old Partitioner confused even experienced Linux users. For people accustomed to the old Partitioner, regaining familiarity will take a while, but new users seem to find the new one much simpler and easier to use.
The update stack shipped with SLES10 was completely reworked. It was slow, CPU- and I/O-intensive, and refreshed itself far too often. Mainframe users were better off disabling ZENworks Management Daemon (ZMD) until they were ready to actually install maintenance. Further, YaST and ZMD didn’t always agree on what was installed or available for update, since they each kept their own set of information. These concerns seem to have been addressed in SLES11. The command line interface that replaces the rug command is “zypper.” There’s no daemon running in the background, consuming resources and waking up idle z/VM guests. The significant changes are that YaST and zypper get their information from the same place, and zypper is much easier on CPU.
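For those familiar with rug, the new workflow maps over fairly directly. A sketch of common invocations (run as root; the package name is illustrative):

```shell
# Common zypper invocations (run as root; package name is illustrative).
zypper refresh                # refresh repository metadata on demand
zypper list-updates           # show available updates without applying them
zypper update                 # apply available maintenance
zypper install s390-tools     # install a single package
```

Because there is no background daemon, metadata is refreshed only when asked for, which is what keeps idle z/VM guests idle.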
Here’s another major heads-up for enterprise customers: As shipped, SLES11 is officially supported only on z9 and z10 hardware. It will run on any zSeries and System z box, but won’t be considered a supported configuration for production use unless on a z9, z10, or future-generation system.
EXT3 is now the default file system during installation or when adding new file systems. Reiserfs and XFS are still included and will be fully supported for the life of SLES11.
Oracle Cluster File System Version 2 (OCFS2) is now a Portable Operating System Interface (POSIX)-compliant file system. That means it can be used as a general-purpose file system, not just for Oracle applications. Using it for anything other than some kind of clustering, whether involving Oracle or not, would be fairly pointless, but possible. Clustered Logical Volume Manager 2 (C-LVM2) replaces Enterprise Volume Management System (EVMS) (alas, EVMS!), which is no longer included. OCFS2, C-LVM2, and their enablement in openAIS/Pacemaker have been moved into SUSE Linux Enterprise High Availability Extension (SLE HA) 11 and are no longer part of the SLES11 base.
Dynamic enlargement of a Fibre Channel Logical Unit Number (LUN) is now supported, as is online enlargement of a multi-pathed device. This can save you from having to move a lot of data if it turns out the LUNs were sized too small.
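As a sketch of how such an online resize might be picked up on the Linux side (device and map names are examples; requires root, and the exact steps vary with the multipath configuration):

```shell
# Pick up a grown LUN without a reboot (device and map names are examples).
echo 1 > /sys/block/sdc/device/rescan   # re-read the LUN's new capacity
multipathd -k'resize map mpatha'        # propagate the new size to the map
```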
As noted in the October/November article, this release also includes several technology previews related to file systems: EXT4, eCryptfs, and read-only root file systems. The read-only root file system is especially interesting given the recent IBM Redpaper documenting the use of the technique in rapidly producing cloned systems.
SLES 11 also provides additional hardware support. Server Time Protocol/External Time Reference (STP/ETR) support was shipped with SLES11, but late in testing, Novell discovered a problem that caused system hangs. Since it was too late to incorporate the fix for the hang, Novell sent out a maintenance update that disables the feature. The fix should be incorporated into SLES11 Service Pack 1 and the feature re-enabled.
Machine instruction-specific updates to the GNU Compiler Collection (GCC) exploit the new hardware instructions introduced with IBM’s z10. Additionally, the compiler supports the option to tune the code it generates for specific models of the z9 architecture. This means better performance of such generated code will be seen on z9 hardware. The GCC back-end now supports the Decimal Floating Point (DFP) instructions introduced with the z9 and z10. The z9 executes DFP instructions in millicode, but there’s native hardware support for them in the z10. However, don’t get too excited yet; the changes necessary to the GNU libc (glibc) to support DFP in math functions such as sin, cos, and printing functions weren’t ready in time for release in SLES11 GA. Novell plans to upgrade the glibc support for those features in Service Pack 1. The binutils package was similarly updated to exploit the new hardware instructions and provide DFP support.
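A hedged illustration of the compiler options involved (flags as documented for the s390 GCC back-end; the source file name is an example):

```shell
# Tune for z9 while emitting only z9 instructions, or target z10 to exploit
# its new instructions (the resulting binary then requires a z10 or later).
gcc -O2 -march=z9-109 -mtune=z9-109 -c app.c
gcc -O2 -march=z10    -mtune=z10    -c app.c
```

The distinction matters: -mtune only reorders and schedules for a model, while -march also permits its new instructions, trading away compatibility with older hardware.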
Many companies require a lot of data encryption, which means a good supply of high-quality random numbers. That supply has historically been difficult to maintain, since there are few good sources of entropy in most computers. The latest cryptographic cards from IBM can generate long random numbers quickly, and the support to use that hardware is included in SLES11.
Selective logging of ECKD DASD lets you turn on logging of sense data for only those devices of interest. This reduces the amount of data collected that you must wade through compared to prior versions where logging was either on or off for all devices. This is of most value to Logical Partition (LPAR) systems; VM users have had this capability all along via CP monitor data streams.
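As a minimal sketch of per-device selection (the bus ID is an example, and the attribute and device names are assumptions based on the DASD extended error reporting interface; requires root):

```shell
# Enable sense-data logging for a single DASD, read the collected records,
# then disable logging again. Paths are assumptions; bus ID is an example.
echo 1 > /sys/bus/ccw/devices/0.0.0201/eer_enabled
cat /dev/dasd_eer            # read the reported sense data
echo 0 > /sys/bus/ccw/devices/0.0.0201/eer_enabled
```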
High-performance FICON channels are tolerated, but not yet exploited; exploitation should come with Service Pack 1. HyperPAV (Hyper Parallel Access Volume) support has been added, which should mean easier setup on the Linux side and better overall performance. System administration work should be reduced for both LPAR and z/VM installations, since alias assignment is dynamic rather than static.
Vertical CPU management is an attempt to make Linux more aware of the Non-Uniform Memory Access (NUMA) topology of the z9 and z10. Mainframes have had NUMA-like characteristics for quite a while. This became even more pronounced with the z10, so code has been added to try to minimize those effects by working with PR/SM to dispatch work longer than usual on a particular processor. Processors can be designated high, medium, or low vertical. Low CPUs get hardly any real CPU time, while high CPUs get a full real CPU; medium CPUs get something in between. By default, the older scheme, horizontal, is enabled, but this can be dynamically changed via the /sys file system.
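As a sketch of switching modes at runtime (the sysfs path is an assumption based on the s390 polarization interface and exists only on mainframe kernels; writing it requires root):

```shell
# Query, and optionally switch, the CPU dispatching mode via sysfs
# (0 = horizontal, 1 = vertical). Path is an assumption; s390 kernels only.
DISPATCH=/sys/devices/system/cpu/dispatching
if [ -r "$DISPATCH" ]; then
  cat "$DISPATCH"                 # show the current mode
  # echo 1 > "$DISPATCH"          # switch to vertical (requires root)
else
  echo "no dispatching attribute; not an s390 kernel"
fi
```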
Fibre Channel Protocol
Small Computer System Interface (SCSI) over Fibre Channel Protocol (FCP) is seeing rapidly increasing adoption, particularly as more database workload is moved to Linux on System z. This has made performance data collection and analysis more important than ever, which has been a weak point of this technology compared to FICON. The FCP adapters were modified to provide more performance data, and the zFCP driver was updated to extract it.
Beyond the kernel exploitation of hardware features, user space tools to facilitate the collection of the data the kernel extracts were added to the s390-tools package. All these changes are intended to provide more visibility into the various FCP and SCSI components that affect performance.
Some FCP message cleanup was also done in the zFCP driver. From its inception, the zFCP driver has been rather verbose. Unless you looked at the source code (and even if you did), the meaning of the various messages was unclear and confusing. This drove up technical service costs, as system administrators would open problem reports when they hit issues and assume the messages coming from the driver were related. The goals of the cleanup were to:
- Remove all messages other than the ones relevant to the system administrator
- Move code and status information to traces instead of syslog
- Improve the content of remaining messages to make them more understandable.
Whether the cleanup reached these goals remains to be seen.
Compared to midrange Linux systems, configuring SCSI over FCP devices was a pain. To reduce this somewhat, automatic port discovery was added to the zFCP driver, enabling it to scan the connected Fibre Channel SAN and activate all available and accessible target ports. The caveat is that if your SAN and fabric switch administrators haven't ensured that proper zoning and masking are in place, your Linux systems will see, and potentially access, ports and LUNs they shouldn't. To further reduce the SCSI configuration pain, two LUN discovery user space tools, lszfcp and zfcp_san_disc, were added.
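A quick sketch of the discovery tools in use (the bus ID passed to zfcp_san_disc is an example, and its option syntax may vary by s390-tools release):

```shell
# Inspect the zfcp side of the SCSI stack with the s390-tools helpers.
lszfcp -H                       # list FCP hosts (subchannels)
lszfcp -P                       # list registered target ports
lszfcp -D                       # list configured LUNs
zfcp_san_disc -W -b 0.0.3c00    # discover WWPNs visible on one adapter
```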
The zFCP trace facility was enhanced to help with problem determination. Accessing SCSI devices on a SAN involves a significant number of pieces working together. The enhanced trace facility will provide more insight into which of those pieces might not be working optimally.
For some time, the net-snmp RPM provided by Velocity Software to collect performance information caused the service pack identification (SPident) tool to report that the system wasn't up-to-date. This has been resolved; Novell appears to have worked with Velocity Software to include their Management Information Bases (MIBs) in the net-snmp package shipped with the distribution. The MIBs are now in the snmp-mibs RPM Package Manager (RPM) package for all architectures.
SLES9 had additional kernel patches applied to implement Class-based Kernel Resource Management (CKRM). This gives system administrators more granular control over what processes receive higher-priority access to system resources. When SLES10 was in development, the kernel developers hadn’t accepted the CKRM patches into the official kernel source tree. The decision was made to drop CKRM from SLES10 because of this, leaving a gap for people who had become accustomed to the higher level of control. With SLES11, a different way of achieving similar results was introduced, called control groups and CPU sets. There isn’t a one-to-one feature translation, but control groups and CPU sets are the replacement for CKRM, and have been part of the official kernel source tree for some time.
The name CPU sets is a little misleading because virtual storage/memory management also is part of it. CPU sets are based on control groups, but don’t use all the functions control groups can provide. Besides CPU and memory, control groups also can manage disk and network I/O.
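As a minimal sketch of the cpuset interface (cgroup v1, as shipped in SLES11-era kernels; the mount point and group name are assumptions, and the commands require root):

```shell
# Carve out a cpuset that pins a group of processes to CPUs 0-1 and
# memory node 0, then move the current shell into it.
mkdir -p /cgroup/cpuset
mount -t cgroup -o cpuset none /cgroup/cpuset
mkdir /cgroup/cpuset/dbwork
echo 0-1 > /cgroup/cpuset/dbwork/cpuset.cpus
echo 0   > /cgroup/cpuset/dbwork/cpuset.mems
echo $$  > /cgroup/cpuset/dbwork/tasks
```

Child processes inherit the set, so everything launched from that shell stays on CPUs 0-1, which is the kind of control CKRM users were missing.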
As previously discussed, several FCP-related changes were made to either improve performance or provide more insight into how well or poorly SCSI over FCP is performing.
A new module has been added to YaST that gives system administrators a basic way to check the general security health status of a system. Found under YaST -> Security and Users -> Local Security, the default selection is Security Overview. The intent was to provide Bastille-like functionality with greater ease of use. The overview shows three columns: security setting, status, and security status. The security status column shows green check marks and red X's, and there's help for each setting explaining why it's important.
Security Enhanced Linux (SELinux) is enabled in SLES11. The kernel is built to support SELinux, patches were applied to all common user space packages to work with it, and the necessary supporting libraries were shipped. However, there are limitations: the offering isn't yet officially supported, no SELinux policies are included, and there's no support for the SELinux-specific software packages (e.g., checkpolicy, policycoreutils, selinux-doc).
Quality assurance testing of SLES11 occurred with SELinux disabled, so enabling it without the policies and tooling necessary to manage it will create problems at this stage of support. Another useful observation is that AppArmor and SELinux are mutually exclusive. One or the other must be chosen at boot time via a kernel parameter. The default is AppArmor if no parameter is given.
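The choice is made on the kernel command line. An illustrative excerpt of a zipl.conf parameters line (the root device and other parameters are examples):

```
# /etc/zipl.conf excerpt (illustrative): select SELinux instead of the
# AppArmor default; omit security= to keep AppArmor.
parameters = "root=/dev/dasda1 security=selinux"
```

After editing, zipl must be rerun to rewrite the boot record before the change takes effect.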
Kexec is an interesting new feature mainly targeted at midrange systems, but it has intriguing possibilities for Linux on System z, too. To quote from the man page for kexec: “Kexec is a system call that enables you to load and boot into another kernel from the currently running kernel. Kexec performs the function of the boot loader from within the kernel. The primary difference between a standard system boot and a kexec boot is that the hardware initialization normally performed by the BIOS or firmware (depending on architecture) is not performed during a kexec boot. This has the effect of reducing the time required for a reboot.”
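A hedged sketch of the two-step sequence (kernel and initrd paths are examples; requires root):

```shell
# Load the target kernel and initrd into memory, reusing the current
# command line, then boot into it, skipping firmware initialization.
kexec -l /boot/image --initrd=/boot/initrd \
      --command-line="$(cat /proc/cmdline)"
kexec -e
```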
For System z and z/VM users, this may prove to be an interesting way of delivering an initial Random Access Memory (RAM)-based system where a VM application has populated a device table and done I/O setup, allowing the Linux kernel to further exploit VM’s already extensive knowledge of the virtual hardware environment and become less resource-intensive on boot.
Since Linux for System z installations always occur over a network, whether virtual or real, considerable time has been spent over the last nine years trying to debug network problems. In several shops, this has been hindered by a network security policy that doesn't allow Internet Control Message Protocol (ICMP) or User Datagram Protocol (UDP) packets to cross the network. Although not specifically intended to alleviate this problem, traceroute was enhanced to use TCP SYN packets in addition to the usual ICMP or UDP ECHO packets. Specifying the -T switch on the command switches it into Transmission Control Protocol (TCP) mode, although users should be aware that this behavior may trigger false positives from some network intrusion detection systems.
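A quick illustration (the host and port are examples; -p sets the destination port for the TCP probes):

```shell
# Trace the path using TCP SYN probes to port 443, which firewalls that
# drop UDP and ICMP will often allow through.
traceroute -T -p 443 www.example.com
```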
SUSE Linux Enterprise Mono Extension
In SLES 11, Novell has moved the Mono application development tools to a separately licensed extension, grandfathering in previous licensees for this release. Mono is a .NET application framework that lets you run .NET-based applications, including ASP.NET, on SLES. It’s available for all the architectures that SLES is built for, including the mainframe. The SLE Mono Extension provides the necessary software to develop and run .NET client and server applications across platforms on Linux, Solaris, Mac OS X, Windows, and UNIX. Mono for Linux on System z can provide reliability, performance, and scalability advantages over Windows.
Mono lets users of Microsoft .NET and Linux-based tools develop on a platform of choice and deploy anywhere .NET or Mono are supported. You can target Linux from Visual Studio or use the tool chain for Linux. The run-time is binary-compatible with .NET on Windows. This gives you the flexibility to:
- Migrate Microsoft .NET desktop and server applications to Linux without significant investment in rewriting code
- Target multiple platforms and increase your addressable market
- Leverage existing expertise in computer languages for more efficient development.
As noted, the Mono packages are still available at no charge from the Mono project Website, so it will be interesting to see how this separation plays out.
SUSE Linux Enterprise High Availability Extension
Another packaging change in SLES 11 is the move of the HA tooling that was in SLES 10 into another separate offering, SLE HA 11, a collection of robust, open source clustering technologies for high availability. It includes a cluster-aware file system (OCFS2), a cluster-aware volume manager (C-LVM2), continuous data replication, user-oriented tools, and resource agents.
While this may not seem a desirable step on Novell's part, especially in a distribution ostensibly oriented specifically to enterprise customers, the market will decide whether Novell will continue to separate basic enterprise function into penny packets.
Overall, SLES 11 continues to offer a significant set of tools for supporting an enterprise Linux deployment. It’s pleasing to see the by-id debacle corrected, and the many new enhancements to tooling and packaging, but the creation of the extensions packages is somewhat troublesome; we’ll see how that plays out in the marketplace. Z