Mar 14 ’14

Native SQL Stored Procedures: Reduce System z Hardware/Software Cost and Improve Efficiency

by Nate Murphy, Elwin Harrison in Enterprise Tech Journal

Over the last several years, there has been tremendous growth in the use of online banking, smartphones, tablets and other remote devices. In addition, there has been astronomical growth in the development of applications that can access business information via the Internet. This has led to an increasing demand for online transaction processing (OLTP) data via remote access.

Before the advent of smartphones and tablets, most remote data requests were generated using interactive screen-oriented transactions. Today, customers can use smartphones or tablets to access account information, make purchases or just browse Websites much like customers once window shopped. Many Fortune 500 companies now report remote access OLTP transaction rates in excess of 50 percent, a few in excess of 75 percent and, in one case, 90 percent (as reported in DB2 expert Robert Catterall’s blog post; see the “References” section)—roughly double the rates reported several years ago. In this day and age, what business doesn’t let you buy its products or access its services via the Internet?

Many organizations are finding themselves in a remote access growth dilemma because the increase in daily online transactions results in greater utilization of central processor (CP) hardware and mainframe software, which increases the total cost of ownership (TCO) of the mainframe.

DB2 10 and 11 for z/OS provide an opportunity to re-engineer high-volume remote access transactions with native SQL stored procedures, thus providing a service-oriented approach to lowering the TCO of the mainframe.

IT Heritage of Remote Access Transactions

If you looked at the core IT systems of many Fortune 500 companies today, you would be surprised by their IT heritage. By IT heritage, we mean the year those core systems were first developed and the hardware and software used to put them into production. This is true across basic U.S. industries such as banking, insurance, manufacturing, the food industry and government.

Many of today’s remote access online transactions reflect the IT technology that was available when the application was first developed. The application may have been updated to use new IT innovations at various points along its journey to today’s relational environment. In many cases, the motivation for that re-engineering was to improve application capability rather than simply to save money. Just take a look at your major legacy core systems and answer these questions:

• Was your current online system or application originally a batch system?
• What type of file structure was originally used? Flat files? A network database management system (DBMS)? A hierarchical DBMS?
• What was the input source? Batch? Terminals? If terminals, what type?
• What language were they originally written in? Assembler or a high-level language?
• What language and type of DBMS is primarily used today for data access?
• Have these applications been retrofitted to run under relational DBMSs?                                 
• Does your application access one record at a time, or has it been re-engineered to fully exploit the benefits of relational technology and relational set theory?
• Do these applications access data from other relational systems via Distributed Relational Database Architecture (DRDA)/Distributed Data Facility (DDF)?

By understanding the answers to these questions and the current value of your application to your business, combined with transaction CP consumption rates, you can create a prioritized application list for re-engineering. 

Relational Database Access Connections via DRDA/DDF

DB2 uses DDF to allow an application program that’s connected to one DB2 system to access data at a remote DB2 system, or any other relational DBMS that supports DRDA.

Let’s look at just one major relational database feature, native SQL stored procedures, that may provide a very attractive return on investment (ROI) incentive to re-engineer some of your remote access DRDA/DDF transactions.

A DB2 stored procedure consists of encapsulated logic, stored on the database server, that can be invoked from within your application. This is one of the most underutilized features in the relational world today. It has taken DB2 stored procedures years and several releases of DB2 to reach full maturity (see Figure 1). Native SQL stored procedures will set you on the “Yellow Brick Road of Success.” For those who don’t recognize the reference, it’s a nod to the late Bob Yelavich (the Godfather of CICS), who was a perfectionist in everything he said or wrote.

This is how Linda Claussen of Themis Education Services Inc. described the creation, structure and execution of native SQL stored procedures in her August 2012 IBM Data Magazine article titled “DB2 Native SQL Procedures: The Future of Computing?” (see the “References” section):

“With DB2 V9 and V10 new function mode, when you create a native SQL procedure, its procedural statements are converted to a native representation that is stored in the DB2 catalog and directory. As with other SQL statements, this representation is bound into a DB2 package. The parameter list and procedure options are stored in the database catalog tables, as in the prior releases. When you CALL a native SQL procedure, DB2 loads the native representation (package) from the directory and the DB2 engine executes the procedure. No workload management (WLM) address space is required! Everything is stored in the package, under DB2 control. Native SQL procedures are simply packages that include runtime structures for the SQL statements to be executed. When you invoke a native SQL procedure, DB2 finds and loads the package and executes the statements (see Figure 2).”

Linda has long taught native SQL stored procedures for Themis Education Services, and it’s easy to understand her enthusiasm. Native SQL stored procedures can be more CPU efficient than external procedures that run through WLM-managed address spaces. You avoid waiting to be scheduled by WLM and avoid crossing from the WLM-managed address space to the DBM1 address space for each individual SQL call. The main drawback is that COBOL is currently more CPU efficient than SQL Procedural Language (SQL PL) for procedural logic; unless a procedure contains an enormous amount of procedural code, stick with native SQL stored procedures.
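
To make the mechanics concrete, here is a minimal sketch of a native SQL procedure. The GET_ACCT_BALANCE name and the ACCOUNT table and its columns are invented for illustration; the point is that the CREATE PROCEDURE statement specifies LANGUAGE SQL with no FENCED or EXTERNAL clause, so DB2 binds the body directly into a package, with no external load module and no WLM-managed address space involved:

CREATE PROCEDURE GET_ACCT_BALANCE
    (IN  P_ACCT_ID INTEGER,
     OUT P_BALANCE DECIMAL(15,2))
  LANGUAGE SQL
  -- No FENCED or EXTERNAL clause: DB2 creates this as a native SQL
  -- procedure and binds the body into a DB2 package at CREATE time
  BEGIN
    SELECT ACCT_BALANCE
      INTO P_BALANCE
      FROM ACCOUNT
     WHERE ACCT_ID = P_ACCT_ID;
  END

A remote client then simply issues CALL GET_ACCT_BALANCE(?, ?) through DRDA/DDF; because the call arrives through DDF, it runs under an enclave SRB, which is what makes the zIIP redirect discussed later possible. (If you create the procedure through SPUFI or a similar tool, remember to use a statement terminator other than the semicolon, since the procedure body itself contains semicolons.)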

The Benefits of Native SQL Stored Procedures

Developing native SQL stored procedures wherever possible can provide many benefits, including: 

• Reduced network traffic
• Increased application scalability
• Simplified application development
• Consistent performance, resulting in better data access control and access path stability
• Improved security
• Lower CP usage
• Lower mainframe software cost.

The IT industry has a tendency to approach the implementation of new DB2 database features with caution, especially when it involves modifying production applications that are meeting current service level agreements (SLAs). The old saying goes, “If it ain’t broke, don’t fix it.” Unfortunately, followers of that saying have created an environment that underutilizes the features and functions of System z mainframe hardware, software and operating systems. It also doesn’t help lower the TCO of the mainframe.

Therefore, many of today’s OLTP transactions were written or retrofitted to relational systems without the current benefits and operational efficiency functions of DB2 10 and 11 native SQL stored procedures.

IBM first introduced external stored procedures in DB2 Version 7, which were embedded in high-level languages and ran under the task control block (TCB) of the language, such as COBOL. TCB workloads aren’t IBM z/OS Integrated Information Processor (zIIP)-eligible; therefore, neither are external stored procedures. Native SQL stored procedures were first introduced in DB2 9. IBM declared part of the native SQL stored procedure workload zIIP redirect-eligible if invoked via remote access DRDA/DDF because it runs under an enclave service request block (SRB).

What Are zIIPs?

In 2006, IBM introduced zIIPs, which are specialty engine (SE) microprocessors designed to allow selected types of workloads to be redirected from the general-purpose CP to execute on the lower cost zIIP processor. Typically, zIIP processors are priced at 25 percent of the cost of a CP. They are available on IBM zEnterprise, System z10 and System z9 servers. The zIIP processors were designed to free up general computing capacity and lower the overall total cost of computing for select data and transaction processing workloads for business intelligence (BI), enterprise resource planning (ERP), customer relationship management (CRM) and select network encryption workloads on the mainframe. CP workloads that are redirected to execute on the zIIP processor aren’t counted toward monthly mainframe software usage charges.

The process of defining zIIP processor-eligible workloads has been an evolutionary journey totally controlled by IBM. Several years ago, it was fairly uncommon for a customer’s zIIP workload to exceed 25 percent of their zIIP processor capacity. With the increase in zIIP-eligible workloads, it’s not uncommon to find customers exceeding 50 percent of their zIIP processor capacity. DB2 native SQL stored procedures are one type of DB2 for z/OS workload that has benefited from zIIP processor redirect (see Figure 3). Until recently, customers’ zIIP processor capacity wasn’t allowed to exceed the capacity of their CPs. With the introduction of the zEnterprise BC12 and EC12 machines, IBM increased the zIIP-to-CP capacity ratio to 2:1, which means you can have two zIIP specialty engines for every CP.


Finding zIIP Gold in DRDA/DDF Remote Access Transactions

External SQL stored procedures generate very few zIIP processor redirect benefits. If you’re currently executing high-volume external stored procedures via remote access DRDA/DDF, you could save significant CP resource by converting them to native SQL stored procedures. Because IBM has deemed this workload zIIP processor-eligible, up to 60 percent of a native SQL stored procedure’s CPU execution time can be redirected to the less expensive zIIP specialty engine in DB2 10 (see Figure 3). In practice, 55 percent zIIP redirect is common with DB2 10 new function mode.

How much CP resource could you save by converting external stored procedures accessed from a remote DRDA/DDF location to native SQL stored procedures? Only you can answer that question, but it’s fairly quick and simple to do. It just requires turning on the SMF accounting trace class 2 records for a short interval during a peak online transaction period.

You can trace the DDF transactions that invoke external stored procedures using your favorite DB2 performance monitoring tool. The DB2 monitor will show you the amount of general-purpose-engine CPU time consumed, on average, by external stored procedures invoked via your DDF transactions. Use your daily transaction rates to estimate the potential general-purpose CPU savings if your DDF external stored procedure workload were converted to native SQL stored procedures in DB2 10 (general-purpose CPU time x 60 percent). The daily numbers can then be extrapolated to project potential monthly and yearly CPU savings.
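
To make the arithmetic concrete, here is a purely hypothetical illustration; the figures are invented, not measurements. Suppose your monitor shows an average of 0.002 seconds of general-purpose CPU per DDF external stored procedure call and you process 4,000,000 such calls per day. That works out to 8,000 CP seconds per day; at the 60 percent DB2 10 redirect figure, roughly 4,800 CP seconds per day, or about 144,000 CP seconds in a 30-day month, could move to zIIP engines. The same estimate expressed as a query you can adapt (SYSIBM.SYSDUMMY1 is the standard DB2 dummy table):

-- Hypothetical inputs: 0.002 CP seconds per call, 4,000,000 calls per day,
-- 60 percent zIIP redirect, 30-day month
SELECT (0.002 * 4000000)              AS DAILY_DDF_PROC_CP_SECS,
       (0.002 * 4000000) * 0.60       AS DAILY_ZIIP_ELIGIBLE_SECS,
       (0.002 * 4000000) * 0.60 * 30  AS MONTHLY_ZIIP_ELIGIBLE_SECS
  FROM SYSIBM.SYSDUMMY1;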

Robert Catterall wrote an extremely detailed blog post describing this process (see the “References” section).

How to Harvest the zIIP Processor Gold

Now that you’ve calculated how much potential gold there is in converting from external to native SQL stored procedures, it’s time to develop a low-risk, high-ROI re-engineering plan to convert the stored procedures. Remember the 80/20 rule, which says 80 percent of the benefits will come from 20 percent of the transactions. Irrespective of how many external stored procedure transactions you re-engineer, here are some suggestions:

• Automate the conversion of external stored procedures to native wherever possible
• Automate the identification of native SQL stored procedure candidates in legacy DDF transactions
• Rewrite external stored procedure DDF transactions that contain small amounts of high-level language code in SQL PL
• Wherever possible, write all new DDF transactions as native SQL stored procedures
• Train programming staff in SQL PL.

There are numerous benefits to developing repeatable processing code, such as Assembler macros, including:

• Improved code quality
• Reduced debugging time
• Clear function understanding
• Improved performance
• Improved testing.

One example of the use of Assembler macros is the development of the most complex air traffic control system in the world for the National Airspace System (NAS). Developed in the late ’60s, NAS went into production in 1971. It still controls most of the high altitude air traffic across the U.S., as was evident on 9/11.

High-quality code conversion can be generated automatically by recognizing code patterns in one language and translating them to another language. Developing repeatable processing code and using code pattern techniques produce the most consistent, highest quality code. To develop the best quality SQL PL code, provide your best people with the best training available.

IBM developed SQL PL as a set of commands that extend the use of SQL in the IBM DB2 database system. It provides procedural programmability in addition to the querying commands of SQL.

A native SQL procedure is a procedure whose body is written entirely in SQL, using the SQL procedural language. A native SQL procedure is created by issuing a single SQL statement, CREATE PROCEDURE. Native SQL procedures don’t require any other program preparation, such as precompiling, compiling or link-editing source code. Native SQL procedures are executed as SQL statements that are bound into a DB2 package, and they don’t have an associated external application program. The following represent the types of statements that can be included in a DB2 native SQL stored procedure body:

• Most SQL statements can be coded in an SQL stored procedure. Some SQL statements are valid in a compound statement but aren’t valid if the SQL is the only statement in the procedure body.
• Assignment statements, such as SET, SELECT INTO and FETCH, can be used to assign values to output parameters or SQL variables. An SQL variable is defined and used only within the body of an SQL stored procedure.
• CASE statements can be used to select an execution path, or even a specific cursor to be opened for processing, based on the evaluation of one or more conditions.
• IF statements can be used just like the CASE statement.
• The GOTO and LEAVE statements can be used to skip over sections of SQL and transfer program control out of a loop or a block of code.
• The LOOP, FOR and WHILE statements execute a single statement or a group of statements multiple times. They each have unique features that determine how they perform.
• The RETURN statement can be used to return a status in the form of an integer value to the invoking application.
• The CALL statement can be used to call another SQL stored procedure.
• Compound statements can be coded that contain one or more of any of the other SQL procedural language statements. Additionally, compound statements can contain SQL variable declarations, condition handlers and cursor declarations. Compound statements can be nested.

When coding compound statements, you must code the component statements in the following specific order (a brief sketch after this list illustrates the ordering):

1. SQL variable declarations
2. Condition declarations
3. Cursor declarations
4. Handler declarations
5. Procedure body statements (CASE, IF, LOOP, REPEAT, WHILE and other SQL statements).
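
As a sketch of that ordering, consider the following hypothetical procedure; the names are invented, and the EMP table is assumed to follow the familiar DB2 sample layout (EMPNO, SALARY, BONUS):

CREATE PROCEDURE GET_EMP_PAY
    (IN  P_EMPNO  CHAR(6),
     OUT P_SALARY DECIMAL(9,2),
     OUT P_FOUND  INTEGER)
  LANGUAGE SQL
  DYNAMIC RESULT SETS 1
  BEGIN
    -- 1. SQL variable declarations
    DECLARE V_SALARY DECIMAL(9,2) DEFAULT 0;
    -- 2. Condition declarations
    DECLARE EMP_NOT_FOUND CONDITION FOR SQLSTATE '02000';
    -- 3. Cursor declarations (WITH RETURN hands the result set back to the caller)
    DECLARE C_PAY CURSOR WITH RETURN FOR
      SELECT EMPNO, SALARY, BONUS
        FROM EMP
       WHERE EMPNO = P_EMPNO;
    -- 4. Handler declarations
    DECLARE CONTINUE HANDLER FOR EMP_NOT_FOUND SET P_FOUND = 0;
    -- 5. Procedure body statements
    SET P_FOUND = 1;
    SELECT SALARY INTO V_SALARY FROM EMP WHERE EMPNO = P_EMPNO;
    SET P_SALARY = V_SALARY;
    IF P_FOUND = 1 THEN
      OPEN C_PAY;
    END IF;
  END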

Additional details can be found in Craig Mullins’ book DB2 Developer's Guide: A Solutions-Oriented Approach to Learning the Foundation and Capabilities of DB2 for z/OS (6th Edition).

The Benefits of Procedural SQL

The procedural flow control statements are among the most useful procedural extensions to SQL. The flow of procedural SQL is controlled by typical programming constructs you can add alongside standard SQL statements. These constructs enable programmers to do the following (illustrated in the sketch after this list):

• Create SQL statements that execute within a loop
• Group SQL statements into executable blocks
• Test for specific conditions and perform one set of SQL statements when the condition is true, another set when the condition is false (IF...ELSE)
• Branch to other areas within the procedural code.
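
The following hypothetical procedure sketches these constructs in SQL PL; the business rule and thresholds are invented purely for illustration. It projects an account balance forward, with a WHILE loop providing the iteration and an IF...ELSE choosing between two interest rates:

CREATE PROCEDURE PROJECT_BALANCE
    (IN  P_START  DECIMAL(15,2),
     IN  P_RATE   DECIMAL(5,4),   -- annual interest rate, e.g. 0.0350
     IN  P_YEARS  INTEGER,
     OUT P_RESULT DECIMAL(15,2))
  LANGUAGE SQL
  BEGIN
    DECLARE V_YEAR INTEGER DEFAULT 0;

    SET P_RESULT = P_START;
    -- WHILE loop: one iteration per projected year
    WHILE V_YEAR < P_YEARS DO
      -- IF/ELSE: apply a small bonus rate once the balance passes a threshold
      IF P_RESULT >= 100000 THEN
        SET P_RESULT = P_RESULT * (1 + P_RATE + 0.0025);
      ELSE
        SET P_RESULT = P_RESULT * (1 + P_RATE);
      END IF;
      SET V_YEAR = V_YEAR + 1;
    END WHILE;
  END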

The procedural commands also allow you to create a more flexible SQL application development environment. Often major applications can be written completely in SQL PL. As more applications are developed in native SQL stored procedures, developers will become more proficient in the use of the SQL PL features.

Common Complaints About DB2 SQL PL

Some developers don’t like to use SQL PL because of current limitations in the SQL procedural language. They’re typically making the comparison to other high-level languages that were first developed decades ago, such as COBOL (1959) and PL/1 (the ’60s). Among the functions developers would like to have are the ability to include external files and a PERFORM capability. As with any new or emerging programming language, these capabilities will come with time. The existing high-level languages, such as COBOL and PL/1, initially didn’t have many of the features developers now think must be there to write efficient applications. Every release of DB2 since the introduction of SQL PL has included SQL PL enhancements. There are ways to work around current limitations, but that’s a topic for another day.

Sample SQL Stored Procedures

To better understand SQL PL, take a look at the example SQL stored procedure in Figure 4. This SQL stored procedure declares a cursor on the EMP table and fetches the rows from the cursor.
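
As a rough sketch of that kind of procedure (not the actual Figure 4), the example below assumes the familiar DB2 sample EMP table with WORKDEPT and SALARY columns. It declares a cursor on EMP, fetches the rows in a loop and totals the salaries for one department:

CREATE PROCEDURE SUM_DEPT_SALARIES
    (IN  P_DEPT      CHAR(3),
     OUT P_TOTAL_SAL DECIMAL(15,2))
  LANGUAGE SQL
  BEGIN
    DECLARE V_SALARY DECIMAL(9,2);
    DECLARE V_DONE   INTEGER DEFAULT 0;
    -- Cursor over the EMP table for one department
    DECLARE C_EMP CURSOR FOR
      SELECT SALARY FROM EMP WHERE WORKDEPT = P_DEPT;
    -- When the cursor runs out of rows, set the loop-exit flag
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET V_DONE = 1;

    SET P_TOTAL_SAL = 0;
    OPEN C_EMP;
    FETCH C_EMP INTO V_SALARY;
    WHILE V_DONE = 0 DO
      SET P_TOTAL_SAL = P_TOTAL_SAL + V_SALARY;
      FETCH C_EMP INTO V_SALARY;
    END WHILE;
    CLOSE C_EMP;
  END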

Creating native SQL stored procedures is quite simple. All that’s required is to write the SQL and submit it to DB2. The process used to create an SQL procedure is made even easier if you use a tool, such as IBM Data Studio, to guide you through the steps of specifying the source statements for the SQL procedure, defining the SQL procedure to DB2 and deploying the SQL procedure for execution.

Summary

Just as early gold prospectors found gold in the streams at the bottom of hills and mountains, DB2 10 and 11 offer many new opportunities to claim zIIP processor redirect gold. Remote access DRDA/DDF native SQL stored procedures are just one of those streams that contain a high content of high-quality zIIP processor redirect gold nuggets.

You have to determine the maximum CPU savings that can be derived from converting external to native SQL stored procedures and its impact on leveling off your LPAR’s four-hour rolling average and, with it, your mountain of monthly mainframe software charges. The IBM InfoSphere Optim Workload Replay tool (previously marketed as IBM Optim Query Capture and Replay) can be used to validate that the newly converted native SQL stored procedures will run and perform successfully in your production environment. It offers a solution that helps enterprises deploy infrastructure changes with confidence, on time and within budget. InfoSphere Optim Workload Replay for z/OS is designed to deliver realistic and repeatable workload testing, helping to minimize risk in production environments and reduce the time and effort involved in creating realistic enterprisewide testing.

You have zIIP processor redirect gold dust just waiting to be reclaimed. Happy prospecting!

References

• DB2 expert Robert Catterall’s blog post at http://robertsdb2blog.blogspot.com
• IBM Data Magazine article titled “DB2 Native SQL Procedures: The Future of Computing?” by Linda Claussen, August 2012, http://ibmdatamag.com/2012/08/db2-native-sql-procedures-the-future-of-computing
• Robert Catterall blog titled, “DB2 for z/OS: Get Your DDF-Related Dispatching Priorities Right,” Aug. 9, 2013, http://robertsdb2blog.blogspot.com/
• DB2 Developer's Guide: A Solutions-Oriented Approach to Learning the Foundation and Capabilities of DB2 for z/OS (6th Edition) by Craig Mullins (Kindle Locations 17863-17876).