By their nature, some reference tables must be available in sets: data relationships span several tables, and entire reference table sets must be retained for transaction reversals and audit requirements. Properly designed in-memory table management supports reference tables that are updated in related sets. z/OS transactions running in a non-stop environment serving data to Web servers must be able to run to completion with an existing set of in-memory tables while newly started transactions proceed with a new set—all simultaneously and without interruption or delay.
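The versioning behavior described above can be sketched in miniature: in-flight transactions hold a reference to the table set that was current when they started, a new set is published atomically, and older sets are retained. This is a minimal illustration only; the class and method names are hypothetical, not a real table manager's API.

```python
import threading

class TableSetManager:
    """Sketch of versioned reference-table sets (hypothetical API).

    A transaction captures the current set once at start and keeps it;
    swapping in a new set affects only transactions started afterward.
    Old versions are retained, as reversals and audits require."""

    def __init__(self, initial_tables):
        self._lock = threading.Lock()
        self._versions = [initial_tables]  # retained for audit/reversal

    def current(self):
        # Called once when a transaction starts; the returned set is
        # unaffected by later swaps.
        return self._versions[-1]

    def swap_in(self, new_tables):
        # Publish a new related set atomically; transactions still
        # running against the old set continue without interruption.
        with self._lock:
            self._versions.append(new_tables)

mgr = TableSetManager({"rates": {"USD": 1.00}, "codes": {"A": "Active"}})
txn_view = mgr.current()                      # a long-running transaction
mgr.swap_in({"rates": {"USD": 1.05}, "codes": {"A": "Active"}})
assert txn_view["rates"]["USD"] == 1.00       # old set still intact
assert mgr.current()["rates"]["USD"] == 1.05  # new transactions see the new set
```

The key design point is that a swap never mutates a published set in place; it only changes which set newly started transactions pick up.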
Temporary Data Class
Temporary data—the smallest class of the total amount of data used by application programs—is the essence of process-related data. Examples include:
- In-memory arrays (defined in application programs)
- Data passed between steps of a transaction
- Online transaction work areas
- Online buffer sorting
- Online inter-transaction storage
- Batch-working storage buffers
- Dynamic alternate data views
- Batch data reduction and sorting
Temporary tables can be re-created should a Logical Partition (LPAR) or an application fail, so the highest application performance is possible when all temporary tables are placed in the memory of an LPAR or in the memory allocated to the application. Most application developers struggle with program arrays to accomplish the searching and sorting needed for acceptable application performance. The alternative, using a DBMS for temporary tables, hampers performance and is largely inappropriate.
Applications that make temporary use of in-memory, sortable, and indexable table objects can often add value. Examples include program trading, telecommunications billing, consolidated statement production, tax processing, and much more. One in-memory table used by an investment bank enabled parallel processing steps and cut an application's nightly batch run-time from 10 hours to two.
An in-memory table manager has a rich set of information-building functions the application uses to dynamically define, populate, organize, and manipulate information in the memory of an application or in the memory of an LPAR shared by all applications. An in-memory table manager supports memory management when thousands of transactions are simultaneously creating and collapsing multiple table objects with multiple indexes.
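The define/populate/organize/search life cycle described above can be sketched as a tiny in-memory table with a dynamically created index. This is an illustrative toy, not any vendor's table-manager API; the class and column names are assumptions.

```python
from bisect import bisect_left

class InMemoryTable:
    """Sketch of a dynamically defined, indexable in-memory table."""

    def __init__(self, columns):
        self.columns = columns
        self.rows = []
        self.indexes = {}  # column name -> sorted list of (key, row_id)

    def insert(self, row):
        self.rows.append(row)
        # Keep any existing indexes current as rows arrive.
        for col, idx in self.indexes.items():
            key = (row[self.columns.index(col)], len(self.rows) - 1)
            idx.insert(bisect_left(idx, key), key)

    def create_index(self, col):
        # Organize: build a sorted index over an existing column.
        pos = self.columns.index(col)
        self.indexes[col] = sorted((r[pos], i) for i, r in enumerate(self.rows))

    def search(self, col, value):
        # Binary-search the index instead of scanning every row.
        idx = self.indexes[col]
        i = bisect_left(idx, (value, -1))
        matches = []
        while i < len(idx) and idx[i][0] == value:
            matches.append(self.rows[idx[i][1]])
            i += 1
        return matches

t = InMemoryTable(["acct", "amount"])
t.insert(("A100", 25)); t.insert(("B200", 40)); t.insert(("A100", 15))
t.create_index("acct")
assert t.search("acct", "A100") == [("A100", 25), ("A100", 15)]
```

A real table manager would also handle concurrent creation and collapse of many such objects across transactions; the sketch shows only the single-table mechanics.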
Making small changes for much-improved performance is a common goal, but the primary motivation for re-engineering usually focuses on flexible application logic and reduced maintenance. High performance is a prerequisite for flexible, table-driven design. However, the most enduring benefits of in-memory tables occur when they control process flow and process decision-making. This is the discipline of table-driven design. Although DBMS tables can be used for implementing table-driven designs, processing time can be seriously compromised by using DBMS tables for this purpose.
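Table-driven control of process flow, as described above, can be illustrated with a small dispatch table of rules and handlers. The rules and thresholds here are invented for illustration; in practice the table would be loaded from shared in-memory storage rather than hard-coded, so behavior changes by editing table rows, not program logic.

```python
# Hypothetical order-routing handlers (illustration only).
def approve(order):  return "approved"
def review(order):   return "manual-review"
def reject(order):   return "rejected"

# (predicate, handler) rows evaluated in order: the table, not the
# code, determines the process flow.
RULES = [
    (lambda o: o["amount"] <= 1_000,                    approve),
    (lambda o: o["amount"] <= 10_000 and o["trusted"],  approve),
    (lambda o: o["amount"] <= 10_000,                   review),
    (lambda o: True,                                    reject),   # catch-all
]

def dispatch(order):
    # Walk the table; the first matching row decides the outcome.
    for predicate, handler in RULES:
        if predicate(order):
            return handler(order)

assert dispatch({"amount": 500,    "trusted": False}) == "approved"
assert dispatch({"amount": 5_000,  "trusted": False}) == "manual-review"
assert dispatch({"amount": 50_000, "trusted": True})  == "rejected"
```

Because every decision passes through the table, performance of the table lookup sits on the critical path of every transaction, which is why in-memory tables rather than DBMS tables suit this role.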
When mainframe applications are modernized, they must be able to adapt to changing conditions and be inexpensive and easy to maintain. Preferably, these improvements should be made while minimizing risk and modernization cost, something that can be accomplished using in-memory tables.
In-memory tables enable: