As the table becomes more populated, data access becomes less efficient, affecting response time. NPGTHRSH, when set, tells DB2 to try index access rather than a tablespace scan for any table with fewer than the specified number of pages. A suggested starting value is 10: DB2 will attempt index access for all tables with 10 or fewer pages, while still using standard cost-based optimization for all objects greater than 10 pages. There’s also a special setting for this keyword, -1, which will all but disable DB2’s cost-based optimization, so that may not be your best choice.
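NPGTHRSH is coded in the ZPARM assembly job (DSNTIJUZ) on the DSN6SPRM macro. A minimal, illustrative fragment — other DSN6SPRM keywords are omitted, and 10 is only the suggested starting value:

```
*  Illustrative excerpt from a DSNTIJUZ ZPARM assembly job;
*  other DSN6SPRM keywords omitted.
         DSN6SPRM NPGTHRSH=10,       favor index access below 10 pages
```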

A negative aspect of using this keyword is that it’s systemwide: DB2 attempts index access for all tables with low or invalid statistics. DB2 V8 introduces an alternative with the new CREATE TABLE keyword, VOLATILE, which could reduce the use of NPGTHRSH in some instances because it favors index access, when an index is available, only for the tables defined with the keyword. With V8, changes to NPGTHRSH can also be activated using the -SET SYSPARM command.
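A hedged sketch of the V8 DDL — the table, column, database, and tablespace names here are hypothetical:

```sql
-- Hypothetical work table; VOLATILE tells the optimizer to prefer
-- index access for this table even when its statistics are low or stale
CREATE TABLE ORDER_WORK
      (ORDER_NO  INTEGER NOT NULL,
       STATUS    CHAR(1))
  IN DBWORK.TSWORK
  VOLATILE;
```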

There are parameters that control warning messages for non-scrollable cursors, how IN-list processing occurs, rolling up accounting records for parallel tasks, controlling measured usage pricing, star join processing, outer join processing, and more. Which opaque keywords are available for change depends on what version of DB2 you’re running.

The Really Ugly

Just because a parameter exists doesn’t mean you have to tune it. Often, these hidden parameters were added as serviceability aids for DB2 Level 2 support. They’re available to solve specific DB2 issues under the guidance of support personnel and aren’t for the general public to play with. They can come and go from release to release. In fact, they may even disappear during the life of a single release.

Often, when you mention hidden ZPARMs, just like a Pavlovian response, you’ll almost invariably have someone inquire about the hidden EXPLAIN tables—that mystical set of tables that, if you could only access them, would answer all your SQL performance questions. Of course, that’s not going to happen. You still can’t formally access them, and if you could, those hidden tables wouldn’t be the panacea they’re perceived to be. 

The SQL EXPLAIN statement was first introduced in March 1986, with the availability of DB2 V1.2. It gave DBAs and programmers their first official glimpse at what the optimizer was up to (or more correctly, how access path selection was working). 
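The externalized side of that glimpse works like this — a sketch assuming a PLAN_TABLE already created under your authorization ID and the V8 sample table DSN8810.EMP:

```sql
-- Capture the access path for one statement
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT LASTNAME
    FROM DSN8810.EMP
   WHERE EMPNO = '000010';

-- Then inspect the optimizer's choices
SELECT QUERYNO, METHOD, ACCESSTYPE, ACCESSNAME, MATCHCOLS
  FROM PLAN_TABLE
 WHERE QUERYNO = 100;
```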

It didn’t take long for folks to learn there was a lot more information about the processing of an SQL statement that wasn’t available through the externalized EXPLAIN tables. This was considered “secret” information being written to a set of “hidden tables” that DB2 used internally. You could enable these tables by “turning on” a hidden ZPARM and creating several tables with no externalized Data Definition Language (DDL). So the quest was on to “discover” how to gain access to these mysterious hidden EXPLAIN tables and the hidden ZPARM that controlled their access. Everyone seemed to know they existed, but the hidden tables proved elusive. To this day, it’s still a challenge to locate them, as well it should be.

It’s probably no surprise that this article doesn’t address how to access the hidden EXPLAIN tables or modify the hidden ZPARM that enables access to them. Rather, it asks: “Why do you really want to access these tables and, if you could, should you?” There’s never been any real advantage to having access to the “hidden” EXPLAIN tables.

Do they really contain information that will be useful in solving all your tough SQL performance issues? Probably not. These tables are a service aid for DB2 Level 2 support and were never meant to be externalized. Deciphering what’s in them isn’t as straightforward as one might hope. In fact, using these hidden tables could easily take you down the wrong path, wasting valuable time looking at areas that aren’t involved in the performance issue you’re attempting to remedy. If you still think you need to solve this mystery, check out the new V8 Visual Explain. Much of the information you’re seeking may already be available to you.

Data Sharing Considerations

Then, of course, there are the concerns that arise from data sharing. In a data-sharing environment, each member has its own DSNZPARM module. This is necessary because each member runs its own address spaces independently, and each set of address spaces is probably running on a different processor. It’s also beneficial because it allows each member to be configured uniquely. You should be careful to keep each member’s ZPARM source separate, possibly by using a different naming convention for each data-sharing member.


You should take advantage of RACF, or some equivalent product, to secure everything associated with your DSNZPARM module. The load library containing the load module should always be write-protected in all environments to ensure the DSNZPARM load isn’t accidentally deleted or overlaid. Starting a DB2 subsystem could be challenging without a DSNZPARM module. 
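As a sketch of that protection — the profile and group names here are hypothetical — the RACF side might look like:

```
/* Illustrative RACF commands; dataset profile and group names are hypothetical */
ADDSD    'DB2A.SDSNEXIT.**' UACC(READ)               /* read-only by default   */
PERMIT   'DB2A.SDSNEXIT.**' ID(DB2SYSPG) ACCESS(ALTER)
SETROPTS GENERIC(DATASET) REFRESH
```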

Don’t forget about the DSNZPARM source, the macros, and their current value settings. The source needs to be protected: control who can modify it, keep multiple versions, and coordinate changes to the source used to create the load module so one person’s changes don’t unintentionally supersede another’s. You should also get into the habit of using BROWSE when you’re simply “looking” at the source; that could prevent a keyword from accidentally being changed. In addition, you should restrict the -SET SYSPARM command for control over when a DSNZPARM module is switched. By default, the ability to issue this command is part of the privilege set belonging to SYSOPR, SYSCTRL, and SYSADM.
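For reference, the command’s three forms — -DB2A is a hypothetical command prefix and DSNZPRMB a hypothetical module name:

```
-DB2A SET SYSPARM LOAD(DSNZPRMB)   load a named DSNZPARM module
-DB2A SET SYSPARM RELOAD           reload the module last used
-DB2A SET SYSPARM STARTUP          revert to the module used at startup
```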


Take time to become familiar with the effects of changing the parameters in DSNZPARM, learn the use of the -SET SYSPARM command, and always keep a copy of the DB2 Installation Guide nearby. With every release of DB2, and with certain PTFs, new DSNZPARM parameters are introduced, existing parameters are changed, and some are removed, so make sure you’re staying current with maintenance. Follow these simple practices, and DSNZPARM will always be “good” to you and will never show you its “ugly” side.
