Operating Systems

Figures 4 and 5 show some of the new install panels. In Figure 4, the member names are specified, as is the choice of a first- or second-level install. Second-level installs are particularly useful for current z/VM users who wish to try out SSI; they don't require physical CTC connections or shared DASD. Specifying the LPAR name (or first-level userid) at install time is new for 6.2. The System_Identifier statement in the SYSTEM CONFIG was updated to allow the member name to be mapped to the LPAR name, eliminating the need to know model numbers and CPU IDs.
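
The updated statement maps a member name to the LPAR it runs in (or, for a second-level system, to the first-level userid). A minimal sketch of the new form; the LPAR and member names here are invented:

   /* Hypothetical SYSTEM CONFIG excerpt                                 */
   System_Identifier LPAR LPAR1   MEMBER1  /* first-level, in LPAR LPAR1 */
   System_Identifier LPAR VMTEST2 MEMBER2  /* second-level, under the    */
                                           /* first-level userid VMTEST2 */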

Next, the DASD for all four members is identified. Both shared and non-shared DASD must be specified here. If different device numbers are used to refer to the common disks, that is addressed on the next screen.

Figure 5 shows the next install screen for first-level installs. Up to two CTC devices between each pair of members may be specified at install time; others may be added to the SYSTEM CONFIG later. With the second-level install option, virtual CTCs are used, and install will create a file containing the first-level userids' directory entries and PROFILE EXECs.
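
Once installed, the ISFC links that run over those CTCs show up in the SYSTEM CONFIG as ACTIVATE ISLINK statements. A minimal sketch, with invented device numbers and member names:

   /* Hypothetical excerpt from MEMBER1's qualified section  */
   MEMBER1: BEGIN
      ACTIVATE ISLINK 5000 5001 NODE MEMBER2  /* two CTCs to MEMBER2 */
      ACTIVATE ISLINK 5010 5011 NODE MEMBER3  /* two CTCs to MEMBER3 */
   MEMBER1: END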

For users coming from existing z/VM systems, IBM recommends performing a non-SSI installation to upgrade to 6.2 and then converting to SSI. CP Planning and Administration offers several “use-case scenarios” in Chapters 28 to 33 that detail how to convert various types of systems to SSI clusters.

Service

Once systems are converted to SSI clusters, the savings continue. In an SSI cluster, a single set of minidisks holds the service repository for a release, so applying service is a snap. Recommended Service Upgrades (RSUs) are applied to the shared 620 disks; then PUT2PROD may be issued on each member at the administrator's convenience to place the new service level into production. As service and new releases come out, they will be backward-compatible: every member in an SSI cluster could be running a different service level, and SSI communications and LGR would still function smoothly. This gives system administrators great flexibility in scheduling when service is put into production on each member. They can wait for a convenient downtime, or use LGR to move vital servers before the maintenance is applied.
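
As a rough sketch of that flow, issued from a service userid such as MAINT620 (the RSU envelope name, guest name, and member name below are invented):

   service all 6201rsu                       (apply the RSU to the shared disks, once)
   vmrelocate move user WEBSRV1 to MEMBER2   (optionally move a vital guest first)
   put2prod                                  (run on each member, when convenient)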

System Configuration File and User Directory

As mentioned previously, some administration files, such as SYSTEM CONFIG and USER DIRECT, are now clusterwide. These reside on the VMCOM1 shared DASD. The System_Identifier statements begin the SYSTEM CONFIG. From there, each statement is either member-specific or shared. Statements such as System_Residence are enclosed in member qualifiers, while others, such as the PRODUCT statements and perhaps some RDEV and VSWITCH statements, may be common to all members. The CP_OWNED and USER_VOLUME_LIST statements are split between common and member-specific, depending on the type of DASD. Chapter 25 of CP Planning and Administration provides many additional examples of how these statements would look.
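
A hypothetical fragment showing the split; the cluster, volume, and member names are invented:

   /* Common to all members */
   SSI CLUSTER1 PDR_Volume VMCOM1 ,
      Slot 1 MEMBER1 ,
      Slot 2 MEMBER2
   CP_Owned Slot 5 VMCOM1

   /* Member-specific, set off by qualifiers */
   MEMBER1: CP_Owned Slot 1 M01RES
   MEMBER2: CP_Owned Slot 1 M02RES
   MEMBER1: BEGIN
      System_Residence ,
         Checkpoint Volid M01RES From CYL 21 For 9 ,
         Warmstart  Volid M01RES From CYL 30 For 9
   MEMBER1: END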

The USER DIRECT has many updates for SSI, too. Guests are now divided into two categories: multiconfiguration and single-configuration. Single-configuration virtual machines may be logged on to only one member at any time and are identified by the USER keyword. Multiconfiguration virtual machines may be logged on to multiple members simultaneously. Their directory definitions are in two sections. One part, under the IDENTITY keyword, contains statements common to the guest across the cluster, including the userid, password, and all privileges and authorizations. The other part, under the SUBCONFIG keyword, contains statements that apply only when the guest is logged on to a particular member, such as distinct read-write minidisks for each instance of the multiconfiguration virtual machine. The two parts are linked together via BUILD statements (see Figure 6).
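
A sketch of the two-part layout, along the lines of what Figure 6 shows; the volume labels and extents here are invented:

   IDENTITY TCPIP XXXXXXXX 128M 256M ABG
      BUILD ON MEMBER1 USING SUBCONFIG TCPIP-1
      BUILD ON MEMBER2 USING SUBCONFIG TCPIP-2
      LINK TCPMAINT 0592 0592 RR

   SUBCONFIG TCPIP-1
      MDISK 0191 3390 1000 5 M01W01 MR

   SUBCONFIG TCPIP-2
      MDISK 0191 3390 1000 5 M02W01 MR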
