Feb 3 ’12

Why Cross-Platform Tools Matter

by John Barnard in Mainframe Executive

Unified, enterprisewide management tools—also referred to as cross-platform tools—are helping IT organizations finally cross the chasm between isolated technology areas. This divide exists not only between mainframe and open systems, but also between the silos within open systems and within the IBM System z environment. The shift to cross-platform tools helps break down the silos in IT and improves efficiency by letting primary business and IT processes flow across the enterprise.

Getting Started

Cross-platform tools are having the most significant impact in several key areas, including event management, application performance management, transaction monitoring, capacity management, and workload automation.

According to BMC Software’s 2011 survey of mainframe customers, having unified management tools for event management is a top priority. Exceptions, events, and alerts can arise from anywhere in the enterprise. The challenge is in relating them to specific business services, processes, and applications. A cross-platform tool must be able to bring exceptions into a central location, where they can be tagged, analyzed, correlated, and tied to the applications they affect.
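The centralization described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the event sources and service names are hypothetical.

```python
from collections import defaultdict

# Hypothetical mapping from event sources (any platform) to the
# business services they support.
SERVICE_MAP = {
    "zos-cics-prod": "order-processing",
    "linux-web-01": "order-processing",
    "windows-batch-07": "nightly-reporting",
}

def correlate(events):
    """Group raw events from any platform under the business service they affect."""
    by_service = defaultdict(list)
    for event in events:  # each event: {"source": ..., "severity": ..., "message": ...}
        service = SERVICE_MAP.get(event["source"], "unclassified")
        by_service[service].append(event)
    return dict(by_service)

alerts = [
    {"source": "zos-cics-prod", "severity": "WARN", "message": "response time high"},
    {"source": "linux-web-01", "severity": "CRIT", "message": "connection pool exhausted"},
]
grouped = correlate(alerts)
# Both alerts land under "order-processing", so an operator sees one
# business-service problem rather than two unrelated platform alerts.
```

The value is in the mapping: a mainframe CICS warning and a Linux web alert surface as a single business-service issue instead of two siloed tickets.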

Application performance management and transaction monitoring are other areas that benefit from cross-platform tools. Business processes flow across the platforms, technology stacks, and silos that exist in many enterprises. That’s why it's important to understand how transactions flow, how the business processes around those transactions function, and how these processes can impact revenue. Cross-platform tools are also important for capacity management, helping IT understand the impact of business growth as measured by transactions or application flows.
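Understanding how a transaction flows can be as simple as following one correlation ID across tiers and summing the time spent in each. A minimal sketch, with hypothetical tier names and timings:

```python
# Hypothetical timing spans for one business transaction ("T1001")
# as it crosses platform boundaries.
spans = [
    {"txn": "T1001", "tier": "web (Linux)",     "ms": 12},
    {"txn": "T1001", "tier": "app (WebSphere)", "ms": 48},
    {"txn": "T1001", "tier": "CICS (z/OS)",     "ms": 35},
    {"txn": "T1001", "tier": "DB2 (z/OS)",      "ms": 110},
]

def breakdown(spans, txn_id):
    """Per-tier timings for one transaction, plus the end-to-end total."""
    tiers = {s["tier"]: s["ms"] for s in spans if s["txn"] == txn_id}
    return tiers, sum(tiers.values())

tiers, total = breakdown(spans, "T1001")
# total == 205 ms; the DB2 tier dominates, which points the investigation
# at the back end rather than at the web front end.
```

Without the cross-platform view, each silo sees only its own 12, 48, 35, or 110 milliseconds and none sees the 205 ms the business user actually experiences.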

Finally, workload automation is also critical for managing asynchronous and synchronous processes throughout the enterprise from a single place where IT operations can manage those applications. With enterprise workload automation, IT gains the visibility to see that related processes are running on System z with z/OS, Linux on System z, Windows, distributed UNIX and Linux, and various other applications and open platforms. Enterprise workload automation can present that platform-agnostic view of workflows.
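The platform-agnostic view can be made concrete: each job records where it runs, but the execution order depends only on declared dependencies. A minimal sketch with hypothetical job names:

```python
# A hypothetical cross-platform workflow: the scheduler cares about the
# "needs" edges, not about which platform each job runs on.
jobs = {
    "extract-db2":    {"platform": "z/OS",              "needs": []},
    "transform":      {"platform": "Linux on System z", "needs": ["extract-db2"]},
    "load-warehouse": {"platform": "UNIX",              "needs": ["transform"]},
    "email-report":   {"platform": "Windows",           "needs": ["load-warehouse"]},
}

def run_order(jobs):
    """Return a valid execution order via a simple depth-first topological sort."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in jobs[name]["needs"]:
            visit(dep)
        order.append(name)
    for name in jobs:
        visit(name)
    return order

print(run_order(jobs))
# -> ['extract-db2', 'transform', 'load-warehouse', 'email-report']
```

One schedule spans four platforms; operations sees a single workflow rather than four disconnected batch queues.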

Improving Efficiency and Reducing Costs

These cross-platform tools can dramatically improve efficiency at the subject-matter expertise level to the extent they can help associate specific IT problems with business problems. It doesn’t help much at the enterprise level if disparate tools report or discover problems only within their own silos. The user needs to know: What are we evaluating? What do these performance and availability problems relate to?

Cross-platform tools have value only insofar as they lower costs. Without tools that can isolate problems across the enterprise and tie them to the business, you can spend an inordinate amount of time and energy on problem determination. Cross-platform tools accelerate identification of the problem at the application and enterprise level. Small problems can appear to be huge without the appropriate optics in place. With the appropriate optics, small problems remain small because they can be isolated and identified as irrelevant to the business or to a specific application.

Here’s an example of how enterprise workload automation can make a real difference in a seemingly small problem. An airline experienced several problems during its nightly runs. One occurred when a batch reporting job failed while a UNIX server simultaneously had an outage. The large problem, from one perspective, was the UNIX server being down. IT operations spent considerable time getting it repaired and back online. But because the job appeared to be just a batch reporting job, its function wasn’t considered important.

Yet, in this case, the small problem was actually much more important than the server being down. The batch job was responsible for delivering weather reports for the airline across the global scope of its operations. Weather reports are essential for airlines because they determine how much fuel must be put into the airplanes before they take off. If the weather will be bad, more fuel must be on board because more air time may be required. Without awareness of weather conditions in the various areas where this airline operated, all planes had to take off with full tanks.

Since you can't land with much fuel in the tank, some pilots had to dump fuel over the ocean. This wasted fuel and incurred extra fuel costs, not to mention the environmental impact. So, here, a small problem was actually large. That’s why it's essential to understand how work processes in an IT organization tie into a particular business. That understanding provides great value—financial value in this case.

Considering Vision and Expanding Coverage

Applications have become increasingly complex. The tooling—whether it's capacity or workload automation, or performance or transaction monitoring—needs to accommodate all forms and aspects of all applications, wherever they exist.

So, how do you best address a specific type of problem, taking into account the complexity of the infrastructure? One way is to recognize and understand the vision a particular vendor has for the tools that are available. Is it a set of products that are loosely integrated to provide the appearance of broad or enterprisewide coverage? Or is there a vision, backed by products, that lets you join a cross-platform initiative with the understanding that the vendor can carry that strategy forward over time, even as technology changes? It’s important to look at the delivery of features and functions and at how a particular solution set and product fit into the vendor’s vision for the technology.

Because processes, applications, and business services flow across platforms, the tools should provide as much coverage across those stacks as possible. Deep knowledge of every platform isn’t necessary; for the purposes of working across platforms, breadth of coverage should be considered first.

The tools should also offer an agnostic view of IT as it relates to the business. Following a business service management approach, the tools should provide a top-level view of the behavior of business services and applications as they exist in the infrastructure. The underlying technologies and the IT infrastructure are aspects of IT operations that require support, so the vendor should focus on management as a platform in itself.

Because of the complexities of the IT and business environments, no perfect unified tool exists today that can answer every problem you may have, predict the future, or diagnose the past with 100 percent certainty. So it's important to look for vendors with a complete vision. Ask these questions: What is the vendor’s vision? How has this vendor begun to deliver on that vision with the products and solutions offered?

Overcoming Issues As Your Data Center Evolves

New workloads are constantly being created, and existing ones can evolve in ways that were difficult to manage in the past. Data centers must also evolve to accommodate new and increased business requirements and new workloads. This transformation over time calls for a data center automation view of IT that accommodates those requirements.

In BMC’s survey of mainframe customers, cost optimization and reduction have been top priorities for IT organizations since the survey began in 2006. They’re required to reduce their costs and become more efficient and aligned with the businesses they support. Consequently, data centers are becoming more proactive. These priorities ultimately result in more automation. As cross-platform tools become agnostic, they can help with data center evolution by fostering the notion of efficiency across the platforms the IT organization needs to continue supporting the business.

One of the issues with platform agnosticism and cross-platform initiatives is that organizations aren’t yet completely set up to handle single products or single solution sets across those boundaries. Typically, one IT group handles the mainframe. That group may have a team that handles DB2 or IMS, or that manages WebSphere and application servers, while other teams handle Windows servers and so on. In effect, the organizations themselves may be isolated, and the tooling they use is most often also isolated because of the subject-matter expertise concentrated in those areas.

Recognize that, higher up in the IT organization, some of the challenges are self-imposed, and addressing them requires at least some thought about the boundary problems that can exist between IT teams. Solving those problems lets the IT tools themselves be used more effectively. Here’s an example related to the deployment of Linux on System z at one company:

The CTO needed a decision tree for new applications. When a new application is being developed, proposed, or contemplated, a decision matrix is used. One of the questions in the matrix is, “Will this application use a back-end database; e.g., DB2 or IMS?” If the answer is “Yes,” then, by definition, that new application will run on Linux on System z because it’s close to the back-end databases.
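The decision rule above is simple enough to sketch directly. The question and the System z placement come from the example; the function name and the “otherwise” branch are hypothetical, since the article only specifies the database case.

```python
def place_application(uses_backend_database: bool) -> str:
    """Apply the CTO's placement rule for a proposed new application."""
    if uses_backend_database:  # e.g., the application talks to DB2 or IMS
        return "Linux on System z"  # keep it close to the back-end databases
    # The article doesn't say where other applications land; assume the
    # existing distributed environment as the default.
    return "distributed (Windows/UNIX/Linux)"

print(place_application(True))
# -> Linux on System z
```

Encoding the matrix this way makes the placement decision repeatable and auditable instead of being renegotiated for every project.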

IT overcame some of the organizational challenges of managing Linux on System z by letting the distributed staff and help desk continue using their existing tooling rather than the tooling of the mainframe or Linux on System z environment. An accommodation was made. Perhaps the CTO would have preferred a single toolset from the mainframe that would include Linux on System z, but to avoid the organizational and political problems that would result, he compromised. The application side gave a little, the IT infrastructure side gave a little, and the effort was successful.

Managing Risk

Any change that happens in the data center involves a certain amount of risk. The key is to minimize the risk while maximizing support to the business. That’s easier said than done, but you can minimize some risk by effectively planning for and understanding the changes. Capacity management, for example, offers predictive capabilities simply by observing the systems it manages.
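The predictive side of capacity management can be illustrated with a trivial trend fit: observe utilization over time and extrapolate to the point where capacity runs out. A minimal sketch with hypothetical monthly numbers, not a substitute for a real capacity-planning model:

```python
def linear_fit(ys):
    """Least-squares slope and intercept for evenly spaced observations."""
    n = len(ys)
    xs = range(n)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def months_until(ys, threshold):
    """Extrapolate the observed trend to when utilization crosses threshold."""
    slope, intercept = linear_fit(ys)
    if slope <= 0:
        return None  # flat or shrinking trend; threshold never reached
    return (threshold - intercept) / slope

utilization = [52, 55, 59, 61, 66, 68]  # hypothetical percent CPU busy, by month
print(round(months_until(utilization, 90), 1))
```

Even a crude projection like this turns "observing IT operations" into an early warning: the organization can plan an upgrade months before the threshold is hit, rather than reacting to an outage.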

Does it make sense to adopt cross-platform tools in advance of the emerging need to do so? Yes. When you think about the hybrid data center, specifically as it relates to the mainframe, it’s clear that an evolution is occurring with the zEnterprise platform and its hybrid computing model. It hasn’t reached fruition, but it is emerging in the hybrid data center, which is really the confluence of mainframe technology with UNIX and Linux on what are known as zEnterprise BladeCenter Extension (zBX) devices. It’s important to evaluate newer platform technologies in advance of your decision to actually deploy them.

You should understand when change is occurring or imminent. If you can get ahead of the problem, you can begin to adapt before change actually occurs. You can minimize risk and further increase efficiencies by using cross-platform tools.

Reducing risk in the cloud has much to do with being platform-agnostic. If you look at a cloud as a platform, you can ask: Will platform-agnostic tools deployed today alleviate some of the risk and stress of implementing cloud and cloud services? The answer should be, “Yes.” Look at management as the platform. As data centers evolve, even into the cloud, the platform-agnostic tools you’ve already put in place can make the transition less risky and, therefore, provide quicker time to value.

Looking Ahead

As the hybrid data center matures, it will be easier to pick the best platform for the type of work you're running. Workloads and platforms historically have been selected because of their suitability for particular types of jobs. Many workloads were built on platforms that were best-of-breed at the time but no longer are. The hybrid computing model tolerates the differences in applications that cross platform boundaries. This model focuses on having the capability—which includes faster I/O, faster speed, fewer moving parts in the network, and so on—to create a platform out of the set of hybrid platforms that exist in the enterprise world.

Applications span workloads and workloads span platforms. In the future, the hybrid computing model will increasingly present itself as a platform for heterogeneous applications and will, at each step along the way, generate efficiencies from them.

Everything will become faster as the hybrid model evolves. There will be more complexity, more capacity, more speed. The challenges in IT and in business will continue and our goal will be to create efficiencies that outpace and overcome the growing complexities. Cross-platform management tools will help you meet this challenge.