Managing zEnterprise

Last year, IBM introduced the new zEnterprise with great fanfare, touting it as a “… revolutionary new design … (that) gives enterprises the ability to unify and centrally manage multi-tier applications ...” (You can view the announcement at www-03.ibm.com/systems/z/news/announcement/zenterprise.html.)

The zEnterprise is really an innovative approach with an evolutionary result. That’s actually a good thing, because revolutionary change is usually followed by massive social, political, and economic upheaval. Nobody wants any of those in the data center, which is, by nature, conservative and mindful of the overarching need for availability and reliability. So, what kind of next-generation management is needed to maximize the zEnterprise’s added value to IT organizations and, ultimately, to the business?

Before we can talk about managing zEnterprise, we need to understand the IT problems it’s intended to solve, then examine the architectural elements that make up the environment and distinguish their purposes. It should then become clear that these architectural elements, applied with proper management, can add up to more than the sum of their parts.

Note: For the purposes of this article, the terms zEnterprise, zEnterprise System, and zEnterprise Ensemble are used interchangeably, meaning a set of system resources that are managed under a single zEnterprise Unified Resource Manager (zManager) umbrella.

The Complexity Challenge

In the beginning, all computing was mainframe computing. During the ’60s, the words “computer” and “mainframe” were essentially interchangeable. Over time, technological advances have created new markets and opportunities for computers of various shapes, sizes, and costs. That variety of choices has led to a competitive market with several clear segments, though the segments don’t necessarily have clear-cut boundaries (see Figure 1).

Businesses across the board have generally adopted an “all of the above” approach to solving problems. When a problem (aka an “opportunity”) arises, businesses will assess many different solutions along several different dimensions, with the most important being cost and risk. In a perfect world, these criteria would be expressed in objective terms, but the decision-making process encompasses a mix of fact, opinion, mythology, sheer force of will, and time constraints. Often, second- and third-order effects of decisions aren’t considered, usually because they aren’t conceived of at the time. How do you figure out what you don’t know?

Every environmental variable changes over time, and the business wants to squeeze every penny’s worth of value out of each IT investment. IT equipment, software, and networks are usually complex; changes to existing systems must account for interdependencies and linkages.

Data centers have become aligned around a multi-tier architecture where the best (or some value of “best”) technology is used for each tier. These tiers are loosely organized around Web serving, application serving, database serving, and specialty operations such as data mining or Business Intelligence (BI). These ad hoc specialty operations ultimately become part of the mainline, formalized business services and IT landscape (see Figure 2).
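To make the tiers concrete, here’s a minimal, purely illustrative sketch of a single request flowing through the web, application, and database tiers just described. The function names and data are hypothetical stand-ins for this article, not part of any zEnterprise or vendor API:

```python
# Illustrative only: each function stands in for one tier of the
# multi-tier architecture. In a real data center, each tier would
# run on whatever platform is "best" for its workload.

def database_tier(query: str) -> list:
    """Stand-in for the database-serving tier: returns matching rows."""
    return [f"row matching {query!r}"]

def application_tier(request: str) -> dict:
    """Stand-in for the application-serving tier: applies business logic."""
    rows = database_tier(f"key = {request}")
    return {"request": request, "results": rows}

def web_tier(url_path: str) -> str:
    """Stand-in for the web-serving tier: renders the response."""
    payload = application_tier(url_path.strip("/"))
    return f"<html><body>{payload}</body></html>"

if __name__ == "__main__":
    # A single request traverses all three tiers in order.
    print(web_tier("/orders/42"))
```

Even in this toy form, the point holds: each tier only knows about the tier directly below it, which is exactly why cross-tier management and problem determination become hard as the tiers land on different platforms.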

One of those second-order effects alluded to earlier is that not all aspects of managing the environment were necessarily considered when the infrastructure was being built. In general, traditional IT responsibilities seem to fall along platform (hardware and operating system), network management, or application delivery lines (see Figure 3). But, as technicians, we often forget that the business doesn’t care about what’s humming under the covers. They have a business to run, and the IT infrastructure is there to serve business needs. Most of the time, IT hums along without a hitch. It all holds together amazingly well, but issues do arise when something goes wrong.
