Over the last several years, there has been tremendous growth in the use of online banking, smartphones, tablets and other remote devices. In addition, there has been astronomical growth in the development of applications that can access business information via the Internet. This has led to an increasing demand for online transaction processing (OLTP) data via remote access.
Before the advent of smartphones and tablets, most remote data requests were generated using interactive screen-oriented transactions. Today, customers can use smartphones or tablets to access account information, make purchases or simply browse websites much as customers once window shopped. Many Fortune 500 companies now report remote access OLTP transaction rates in excess of 50 percent, with a few in excess of 75 percent and, in one case, 90 percent (as reported in DB2 expert Robert Catterall’s blog post; see the “References” section). These rates are nearly double those reported several years ago. In this day and age, what business doesn’t allow you to buy its products or access its services via the Internet?
Many organizations are finding themselves in a remote access growth dilemma because the increase in daily online transactions results in greater utilization of central processor (CP) hardware and mainframe software, which increases the total cost of ownership (TCO) of the mainframe.
DB2 10 and 11 for z/OS provide an opportunity to re-engineer high-volume, remote access transactions with native SQL stored procedures, thus providing a service-oriented approach to lowering the TCO of the mainframe.
IT Heritage of Remote Access Transactions
If you looked at the core IT systems of many Fortune 500 companies today, you would be surprised by their IT heritage. By IT heritage, we mean the year those core systems were first developed and the hardware and software used to put them into production. This holds across basic U.S. industries such as banking, insurance, manufacturing, the food industry and government.
Many of today’s remote access online transactions reflect the IT technology that was available when the application was first developed. The application may have been updated to use new IT innovations at various intervals along its journey to today’s relational environment. In many cases, the motivation for re-engineering has been to improve application capability rather than simply to save money. Just take a look at your major legacy core systems and answer these questions:
• Was your current online system or application originally a batch system?
• What type of file structure was originally used? Flat files? A network database management system (DBMS)? A hierarchical DBMS?
• What was the input source? Batch? Terminals? If terminals, what type?
• What language was it originally written in? Assembler or a high-level language?
• What language and type of DBMS is primarily used today for data access?
• Have these applications been retrofitted to run under relational DBMSs?
• Does your application access one record at a time, or has it been re-engineered to fully exploit the benefits of relational technology and relational set theory?
• Do these applications access data from other relational systems via Distributed Relational Database Architecture (DRDA)/Distributed Data Facility (DDF)?
By understanding the answers to these questions and the current value of your application to your business, combined with transaction CP consumption rates, you can create a prioritized application list for re-engineering.
Relational Database Access Connections via DRDA/DDF
DB2 uses DDF to allow an application program that’s connected to one DB2 system to access data at a remote DB2 system, or any other relational DBMS that supports DRDA.
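As a sketch of what such remote access looks like in practice, the statements below assume a remote DB2 location named CHICAGO has been defined to DDF in the communications database; the location, table and column names are illustrative assumptions, not from the article.

```sql
-- Assumes a remote location CHICAGO is defined to DDF;
-- all object names here are illustrative only.

-- Option 1: explicit connection to the remote system via DRDA
CONNECT TO CHICAGO;
SELECT ACCT_NO, BALANCE
  FROM BANKDB.ACCOUNTS
 WHERE ACCT_NO = '1234567';
CONNECT RESET;

-- Option 2: a three-part name resolves the remote location implicitly
SELECT ACCT_NO, BALANCE
  FROM CHICAGO.BANKDB.ACCOUNTS
 WHERE ACCT_NO = '1234567';
```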
Let’s look at just one major relational database feature, native SQL stored procedures, that may provide a very attractive return on investment (ROI) incentive to re-engineer some of your remote access DRDA/DDF transactions.
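To make the discussion concrete, here is a minimal sketch of a native SQL stored procedure; the procedure, table and column names are assumptions for illustration only. Because the procedure body is written entirely in SQL PL and runs inside DB2 with no external load module, a single CALL from a remote client replaces multiple network round trips for the individual SQL statements.

```sql
-- A minimal native SQL stored procedure (DB2 10/11 for z/OS).
-- Procedure, table and column names are illustrative assumptions.
CREATE PROCEDURE BANKDB.GET_BALANCE
       (IN  P_ACCT_NO CHAR(7),
        OUT P_BALANCE DECIMAL(15,2))
  LANGUAGE SQL          -- native SQL PL: no external address space needed
BEGIN
  SELECT BALANCE
    INTO P_BALANCE
    FROM BANKDB.ACCOUNTS
   WHERE ACCT_NO = P_ACCT_NO;
END
```

A remote DRDA client would then issue CALL BANKDB.GET_BALANCE(?, ?) through DDF, which is where the re-engineering opportunity described here applies.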