DB2 & IMS

As businesses try to do more with less and maximize their return on hardware and software investments, optimizing mainframe infrastructure is key. It can offer immediate benefits in performance and revenue, especially to those facing increasing transaction volumes and tight batch-processing windows. Making the most of the Millions of Instructions Per Second (MIPS) a mainframe delivers can save companies millions of dollars a year. Where mission-critical applications process hundreds of thousands of transactions per hour, maximizing MIPS is imperative.

Companies can take several approaches to optimize their mainframe environments, increase capacity, and reduce costs. This article focuses on the use of in-memory tables and table-driven design to address data access inefficiencies and maintenance issues.

In-Memory Tables Overview

In-memory table management complements the features of a DBMS and adds programming power and performance to virtually any application running on z/OS. Production systems often suffer from poor performance or costly maintenance requirements; retrofitting them with in-memory tables is straightforward, reduces costs, and makes them more responsive to business change. The greatest benefits occur when transaction volumes are large or the processing is complex.

An in-memory table management system that lets designers externalize rules from application code is a powerful advantage. Many applications benefit when program behavior can be managed by the people who use the system, without changes to the code itself.
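As a minimal illustration of that table-driven style, the hypothetical Java sketch below keeps a discount rule in a reference table rather than in hard-coded if/else logic; the table name, fields, and values are invented for the example. When the rows change, the program's behavior changes with no code change.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a discount rule externalized into a reference table
// instead of being hard-coded as branching logic in the application.
public class RuleTableExample {

    // One row of the externalized rule table (region, customer tier, discount %).
    record DiscountRule(String region, String tier, double discountPct) {}

    // In production these rows would be loaded from a file, a DBMS, or an
    // in-memory table manager, and maintained by business users, not programmers.
    private final List<DiscountRule> rules = new ArrayList<>();

    public RuleTableExample() {
        rules.add(new DiscountRule("EAST", "GOLD", 12.5));
        rules.add(new DiscountRule("EAST", "SILVER", 7.0));
        rules.add(new DiscountRule("WEST", "GOLD", 10.0));
    }

    // Program behavior follows the table contents; changing a row changes the result.
    public double discountFor(String region, String tier) {
        return rules.stream()
                .filter(r -> r.region().equals(region) && r.tier().equals(tier))
                .map(DiscountRule::discountPct)
                .findFirst()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        RuleTableExample table = new RuleTableExample();
        System.out.println(table.discountFor("EAST", "GOLD")); // prints 12.5
    }
}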

Improving Performance

A daily database update process may flawlessly perform its automated task, but if it takes 25 hours to execute, it’s useless. This is a real concern for large organizations with rapidly increasing transaction volumes. In practice, appropriate use of memory-resident tables has cut the elapsed time of otherwise well-tuned processes from 12 hours to 40 minutes, or from three hours to five minutes.

Implementing in-memory table access is often a straightforward task that involves replacing tabular data in an external file with corresponding main memory tables of identical design. The purpose is to minimize I/O by buffering the entire table in memory and to dramatically shorten the instruction path for each access. The same logic extends to replacing tables for any file organization or DBMS, provided the data qualifies as reference data or temporary data.
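A minimal sketch of that idea, in Java rather than a mainframe language and assuming an invented comma-delimited file layout: the reference file is read once at startup, and every subsequent access is a pure in-memory lookup with no I/O.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: load an entire reference file once into a keyed
// in-memory map so later lookups require no I/O at all.
public class InMemoryReferenceTable {

    private final Map<String, String> table = new HashMap<>();

    // Assumes a simple "key,value" record layout; a real implementation would
    // preserve the same record design as the original file or DBMS table.
    public InMemoryReferenceTable(Path referenceFile) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(referenceFile)) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",", 2);
                if (fields.length == 2) {
                    table.put(fields[0], fields[1]);
                }
            }
        }
    }

    // Each access is a hash lookup in main memory: no read call, no buffer
    // management, and a far shorter instruction path than a file or DBMS access.
    public String lookup(String key) {
        return table.get(key);
    }
}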

This approach requires no functionality beyond what standard file or DBMS processing already provides, and it doesn’t force application programmers accustomed to legacy access patterns to change their approach. It simply requires a one-to-one replacement of each file or DBMS access with the equivalent in-memory-table access call.
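A hedged before-and-after sketch of that one-to-one replacement, again in Java with invented table and column names: the keyed DBMS read shown in the comment is swapped for an in-memory lookup of the same shape, while the surrounding program logic stays untouched.

// Before: every currency lookup is a keyed DBMS read (names are illustrative):
//   PreparedStatement ps = conn.prepareStatement(
//       "SELECT RATE FROM CURRENCY_RATES WHERE CODE = ?");
//   ps.setString(1, code);
//   ResultSet rs = ps.executeQuery();
//   double rate = rs.next() ? rs.getDouble("RATE") : 0.0;

// After: the same keyed read is answered from a table loaded once at startup.
import java.util.Map;

public class CurrencyRates {

    private final Map<String, Double> rates;

    public CurrencyRates(Map<String, Double> preloadedRates) {
        this.rates = preloadedRates; // populated once from the DBMS or a file
    }

    // Drop-in equivalent of the keyed DBMS read above.
    public double rateFor(String code) {
        return rates.getOrDefault(code, 0.0);
    }
}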
