Operating Systems

Some cleanup of FCP messages in the zFCP driver was also done. From its inception, the zFCP driver has been rather verbose. Unless you looked at the source code (and even if you did), the meaning of the various messages was unclear and confusing. This drove up technical service costs, as system administrators would open problem reports when they were having issues and assumed the messages coming from the driver were related. The goals of the cleanup were to:

  • Remove all messages other than the ones relevant to the system administrator
  • Move code and status information to traces instead of syslog
  • Improve the content of remaining messages to make them more understandable

Whether the cleanup reached these goals remains to be seen.

Compared to midrange Linux systems, configuring SCSI over FCP devices was a pain. To reduce this somewhat, automatic port discovery was added to the zFCP driver, enabling it to scan the connected Fibre Channel SAN and activate all available and accessible target ports. The caveat is that if your SAN and fabric switch administrators haven’t ensured that proper zoning and masking are in place, your Linux systems will see, and potentially access, ports and LUNs they shouldn’t. To further reduce the SCSI configuration pain, two LUN discovery user space tools, lszfcp and zfcp_san_disc, were added.
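
As a rough sketch of how these tools are used (the device bus ID and WWPN below are made-up placeholders, and option flags may differ by release; check the man pages):

    # List what the zFCP driver already knows about
    lszfcp -H                      # FCP adapters (hosts)
    lszfcp -P                      # remote target ports
    lszfcp -D                      # attached SCSI devices (LUNs)

    # Query the fabric for what is visible in the current zone
    zfcp_san_disc -b 0.0.3d0c -W   # WWPNs of reachable target ports
    zfcp_san_disc -b 0.0.3d0c -p 0x500507630300c562 -L   # LUNs behind one port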

The zFCP trace facility was enhanced to help with problem determination. Accessing SCSI devices on a SAN involves a significant number of pieces working together. The enhanced trace facility will provide more insight into which of those pieces might not be working optimally.
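
The traces themselves are surfaced through the s390 debug feature. Assuming debugfs is mounted at its usual location, they can be inspected roughly like this (the bus ID is a placeholder, and the exact trace-area names vary by kernel version):

    mount -t debugfs none /sys/kernel/debug   # if not already mounted
    ls /sys/kernel/debug/s390dbf/             # one directory per trace area
    cat /sys/kernel/debug/s390dbf/zfcp_0.0.3d0c_hba/hex_ascii   # dump one area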

Performance Management

For some time, the net-snmp RPM Package Manager (RPM) package provided by Velocity Software to collect performance information caused the service pack identification (SPident) tool to report that the system wasn’t up-to-date. This has been resolved: Novell appears to have approached Velocity Software about including their Management Information Bases (MIBs) in the net-snmp package shipped with the distribution. The MIBs are now in the snmp-mibs RPM for all architectures.

SLES9 had additional kernel patches applied to implement Class-based Kernel Resource Management (CKRM), which gave system administrators more granular control over which processes receive higher-priority access to system resources. When SLES10 was in development, the upstream kernel developers had not accepted the CKRM patches into the official kernel source tree, so CKRM was dropped from SLES10, leaving a gap for people who had become accustomed to that level of control. With SLES11, a different way of achieving similar results was introduced: control groups and CPU sets. There isn’t a one-to-one feature translation, but control groups and CPU sets are the replacement for CKRM, and have been part of the official kernel source tree for some time.

The name CPU sets is a little misleading, because virtual storage/memory management is also part of it. CPU sets are built on top of control groups but don’t use all the functions control groups can provide. Besides CPU and memory, control groups can also manage disk and network I/O.
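
A minimal sketch of setting up a CPU set through the control group filesystem follows; the mount point, group name, and CPU/memory-node numbers are arbitrary choices for illustration:

    # Mount the cpuset controller and create a group
    mkdir -p /cgroup/cpuset
    mount -t cgroup -o cpuset none /cgroup/cpuset
    mkdir /cgroup/cpuset/batch

    # Restrict the group to CPUs 2-3 and memory node 0
    echo 2-3 > /cgroup/cpuset/batch/cpuset.cpus
    echo 0 > /cgroup/cpuset/batch/cpuset.mems

    # Move the current shell (and its future children) into the group
    echo $$ > /cgroup/cpuset/batch/tasks

Other controllers attach the same way with a different -o mount option, which is how the disk and network I/O management mentioned above is reached.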

As previously discussed, several FCP-related changes were made to either improve performance or provide more insight into how well or poorly SCSI over FCP is performing.

Security