IT Management

Ever since open systems platforms began offering performance levels approaching those of lower-end mainframes more than 12 years ago, we have been involved in numerous projects in which mainframe and midrange users of various proprietary platforms moved some or all of their applications to open systems. Yes, the price/performance value has been there, but at the cost of somewhat lower reliability. Since then, open systems platforms have become more robust and now offer performance that equals all but the largest mainframe configurations. However, that continuing reliability question, combined with an improving mainframe price/performance profile, has slowed the movement of applications off the mainframe, particularly core business-critical applications. Arguably, most of the applications that were appropriate candidates for moving off the mainframe have already been moved. In fact, we have seen a couple of cases of applications moving back to the mainframe.

Until recently, organizations faced an all-or-nothing proposition when moving applications to open systems. It simply did not make sense to move part of an application off the mainframe, niche exceptions notwithstanding. However, advances in data sharing technology, including Gigabit Ethernet and dual-channel connections to both mainframe and open systems platforms, now make it time to reappraise this situation.

In this article, I will argue that it may now make economic sense to move specific subsets of an application off the mainframe where it did not in the past. This includes moving select parts of business-critical applications to open systems while leaving the balance on the mainframe.

Exactly how technically practical and economically attractive can this be? With the advent of job control systems that allow open systems jobs to kick off after the completion of mainframe jobs, batch programs ported from the mainframe can execute largely as they do now, just on the open platform. Indeed, we have recently seen sites replace 10 to 20 percent, and sometimes more, of their processing load with Intel or RISC platforms, paying much less for the same throughput while maintaining all the historic reliability benefits of a mainframe application.
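
To make the mechanics concrete, here is a minimal sketch of the trigger pattern such job control arrangements typically rely on: the mainframe job's final step deposits a small completion marker (via FTP or a shared directory), and the open systems side waits for it before launching the ported batch program. The directory, marker name, and launched command below are illustrative assumptions, not any particular scheduler's API.

```java
import java.io.IOException;
import java.nio.file.*;

/**
 * Sketch of a cross-platform batch trigger: wait for a completion
 * marker written by the upstream mainframe job, then launch the
 * ported batch step on the open systems platform. The path and
 * command here are illustrative assumptions.
 */
public class BatchTrigger {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path inbox = Paths.get("/data/triggers");          // directory the mainframe job (or FTP) writes into
        Path marker = inbox.resolve("PAYROLL.DAILY.done"); // hypothetical completion marker

        try (WatchService watch = FileSystems.getDefault().newWatchService()) {
            inbox.register(watch, StandardWatchEventKinds.ENTRY_CREATE);
            // Re-check the marker itself each pass, in case it arrived before we started watching.
            while (!Files.exists(marker)) {
                WatchKey key = watch.take();               // block until something is created in the directory
                key.pollEvents();                          // drain the events; the loop condition does the real test
                key.reset();
            }
        }

        // Marker present: run the ported batch program with the same inputs it had on the mainframe.
        Process job = new ProcessBuilder("/opt/batch/bin/payroll_extract", "--date=today")
                .inheritIO()
                .start();
        System.exit(job.waitFor());                        // propagate the return code to the scheduler
    }
}
```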

With the new workload licensing terms and “pay for what you use” outsourcing initiatives from IBM, this can result in savings of millions of dollars per year, starting from the day of implementation. Payback can come in a few months, with a net cash savings enjoyed in the current year. At one client site where we recently did exactly this, the stark choice was between mainframe offloading and staff layoffs. The expected $1 million savings during calendar 2003 will be used to pay the salaries of a number of critical personnel, a very satisfying outcome for all of us involved.

Application Platform Decisions

Enterprises face major problems in efficiently distributing applications across the most architecturally appropriate platforms. Applications that require high-bandwidth access to mainframe data must still reside on the mainframe or demand expensive connectivity to achieve the necessary throughput from the mainframe. Distributed platforms typically carry stand-alone applications, applications integrated only with other distributed applications, or those requiring relatively modest volumes of mainframe data. Optimal application integration remains limited by performance, cost, or both.

Ideally, applications should be placed where they make the most sense architecturally. Business-critical applications that require 24x365 availability and absolute data integrity arguably should still be on the mainframe platform, as should heavy I/O batch applications. Applications that provide decision support, ad hoc query, Business Intelligence (BI), Customer Relationship Management (CRM), or similar non-mission-critical facilities, as well as transactional applications that can be offline for a few minutes now and then, can take advantage of the superior cost-effectiveness of open systems platforms. The question that needs to be answered is when and how this should be done in order to be assured of a positive return on investment without impacting either reliability or business-critical processing.

Direct Data Connect

Direct data connect products, such as IBM’s DB2 Connect and its competitors, allow programmers to use the mainframe as the database server for distributed modules. These products are based on IBM’s Distributed Relational Database Architecture (DRDA) and form an appealing alternative: keep the data in one place and access it from whatever platform requires it. The mainframe becomes a database server to distributed users.
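
As a simple illustration of what this looks like from the distributed side, a module can reach mainframe DB2 through a standard JDBC connection once DB2 Connect (or another DRDA gateway) is configured. This is a hedged sketch: the host name, port, location name, credentials, and table are placeholder assumptions, and the actual values would come from your DB2 Connect configuration.

```java
import java.sql.*;

/**
 * Sketch of a distributed module using the mainframe as its database
 * server via DB2 Connect / DRDA. Assumes the IBM DB2 JDBC driver is
 * on the classpath; connection details and the table queried are
 * illustrative assumptions.
 */
public class MainframeQuery {
    public static void main(String[] args) throws SQLException {
        // Standard IBM DB2 JDBC URL; port 446 is the conventional DRDA port,
        // and "DB2PLEX" stands in for the host DB2 location name.
        String url = "jdbc:db2://zhost.example.com:446/DB2PLEX";

        try (Connection con = DriverManager.getConnection(url, "appuser", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT ORDER_NO, ORDER_TOTAL FROM PROD.ORDERS WHERE ORDER_DATE = CURRENT DATE")) {
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // The distributed program reads mainframe data as if it were local.
                    System.out.printf("%s  %s%n", rs.getString(1), rs.getBigDecimal(2));
                }
            }
        }
    }
}
```

The appeal of this arrangement is that the distributed module carries no copy of the data and no replication logic; the gateway translates ordinary SQL calls into DRDA flows to the host.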
