When we talk to customers, they describe the need to balance innovation and strategic initiatives with ongoing, short-term IT demands as one of their biggest challenges. To help address this ongoing challenge, we advise them to focus on lowering their operational costs and use the savings to fund their strategic projects.

For years, the computer industry has been attempting to control growing operational costs by:

• Consolidating servers and storage (reducing the number of systems to manage)
• Virtualizing servers (to get more utilization out of existing hardware)
• Reducing energy costs (by shrinking the data center footprint)
• Automating systems management tasks (for increased operational efficiency)

In many data centers, systems management accounts for a significant portion of all operational costs. Therefore, simplifying management, increasing efficiency, and reducing the number of people needed to manage information systems offer the greatest payback. A look at the systems management tools on the market today shows that much has been done to automate various management functions. There are solutions that:

• Automate virtualization and provisioning
• Monitor resources and automatically issue alerts when problems arise
• Automatically track the flow of cross-platform applications
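To make the second capability concrete, here is a minimal sketch of threshold-based monitoring and alerting, not tied to any specific product; the metric names and threshold values are hypothetical examples:

```python
# Hypothetical thresholds: alert when a metric exceeds its limit.
THRESHOLDS = {"cpu_pct": 90.0, "disk_pct": 85.0}

def check_samples(samples):
    """Return alert strings for any metric exceeding its threshold.

    samples: dict mapping metric name -> latest observed value.
    """
    alerts = []
    for metric, value in samples.items():
        limit = THRESHOLDS.get(metric)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {metric}={value:.1f} exceeds {limit:.1f}")
    return alerts

# Example: CPU over its limit triggers one alert; disk does not.
print(check_samples({"cpu_pct": 95.0, "disk_pct": 40.0}))
```

Real monitoring suites layer scheduling, escalation, and notification on top, but the core loop is this comparison of sampled metrics against policy limits.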

We see continued automation as key both to enabling the dynamic data center of the future and to further reducing its management costs. And when manual tasks are required, a dynamic data center empowers users with the information they need at their fingertips to accelerate service delivery and problem resolution.

A new approach has evolved that focuses on reducing the time required to troubleshoot system problems and lowering the learning curve for developing new systems management skills (both of which translate into lower labor costs). This approach combines rich visualization with analytics, integration, and collaboration capabilities to simplify and streamline systems management processes and break down barriers between IT silos. If computers are electronic brains, why not combine them with human expertise to empower system administrators to perform root-cause analysis and problem resolution in a fraction of the time? Using system data and trend analysis, problems can be predicted and prevented. And when issues do occur, historical data and a comprehensive knowledge base can help resolve them more quickly. In addition, useful metrics can be gathered to further tune systems for better performance. Are we getting close to the day when systems will actually tune and optimize themselves based on policy-set goals? We think so, and we think this is essential to fully realizing the value of dynamic data centers.

As more integration and automation are introduced into systems management, resources are freed to focus on more strategic IT initiatives. The whole idea of the data center of the future is to remove internal system silos and instead empower IT to dynamically move workloads onto the systems whose characteristics can most efficiently serve them. By running workloads on the most efficient systems, enterprises can significantly reduce the total cost of application ownership.
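A minimal sketch of this matching of workloads to systems, assuming a greedy policy that picks the cheapest eligible system per workload (the system names, capability labels, and cost figures below are hypothetical):

```python
# Sketch: assign each workload to the eligible system with the lowest
# estimated cost per unit of work.

def place_workloads(workloads, systems):
    """workloads: dict name -> required capability (e.g. "batch").
    systems: dict name -> (set of capabilities, cost per unit of work).
    Returns dict mapping each placeable workload to its chosen system."""
    placement = {}
    for wl, need in workloads.items():
        # Collect (cost, system) pairs for systems that can run this workload.
        eligible = [(cost, sysname)
                    for sysname, (caps, cost) in systems.items()
                    if need in caps]
        if eligible:
            placement[wl] = min(eligible)[1]  # cheapest eligible system
    return placement

# Hypothetical heterogeneous environment: a batch job lands on the
# mainframe, a CPU-bound web tier on the distributed x86 system.
systems = {"mainframe": ({"io", "batch"}, 2.0), "x86": ({"cpu"}, 1.0)}
workloads = {"billing": "batch", "web": "cpu"}
print(place_workloads(workloads, systems))
```

Production schedulers weigh many more dimensions (licensing, data gravity, SLAs), but the principle is the same: encode system characteristics and let policy, not silo boundaries, decide where work runs.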

In short, a heterogeneous mainframe and distributed systems environment can make an enterprise more “agile” in supporting strategic initiatives such as Big Data, mobile and cloud computing. As a result of building a dynamic data center, IT executives will find that their highly tuned, hyper-efficient information systems will be able to respond to changing market conditions more quickly, enabling rapid adjustments to meet competitive pressure (or enabling cloud service-enabled information systems to be used to create new competitive pressures). Mainframes will continue to be highly instrumental in helping build these dynamic data centers of the future.