Oct 5 ’10

How DB2 for z/OS Application Requirements Affect Database Design

by Editor in z/Journal

For most DBAs, the question, "Do application requirements affect the database design?" has an obvious answer: Of course!

Regrettably, important requirements are often forgotten or ignored during data modeling and database design. This article discusses common post-implementation issues that can be avoided by giving those requirements proper consideration during the design process.

Database Design Practices

Database design flows from business rules, application requirements, and your data modeling standards such as normalization and referential integrity enforcement.

What about a set of tables implemented to augment a current batch system? These will probably be fully normalized per current database design standards. Is this a new database in an enterprise data warehouse? Most likely, the design will include fact, dimension, and key tables with star join access paths. Will this new data support a mission-critical application with tens of thousands of users? Perhaps there will be some denormalization of the design for performance reasons.

DBAs tend to design databases based primarily on functionality and performance. Will the database support the business rules of the application? Will it perform based on the Service-Level Agreement (SLA)?

This prioritization is backward. Other issues that are just as important, if not more so, should be addressed during database design. Factors such as recoverability and data availability are either missed or given a lower priority.

After implementation, when recovery and availability issues arise, DBAs and application support staff find themselves in the unenviable position of considering a database re-design—usually accompanied by an extended application outage.

By prioritizing these issues in order of their importance during the database design phase, we can greatly reduce the need for re-design after implementation.

Following are some common post-implementation issues broken into sections corresponding to the laws of database administration (see "The Laws of Database Administration" by Lockwood Lyon, which appeared in the October/November 2006 issue of z/Journal, at www.mainframezone.com/applications-and-databases/the-laws-of-database-administration). Database design practices that respect the first law (data must be recoverable) come first, followed by the remaining laws in priority order.

Design for Recoverability

Recoverability means the ability to recover a specific subset of application data to a specified point in time. Reasons for such a recovery vary from large-scale disasters (such as loss of a data center) to undoing incorrect data updates caused by an application.

In the case of disaster recovery, the DBA must take into consideration that total recovery time includes physical data/media recovery, availability of the operating system, tape and recovery resources (if applicable), and recovery of the DB2 subsystem or data sharing group. Once DB2 is available, any in-flight utilities may need to be terminated and in-doubt threads resolved. In a data sharing environment, failed members may hold retained locks. Only after these items are addressed can the DBA turn to the remaining application data recovery issues.
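
To illustrate, these post-restart checks are commonly handled with DB2 commands along the following lines (the utility ID and correlation ID shown are placeholders, not values from any real system):

   -DISPLAY UTILITY(*)                                  (list utilities left in flight by the failure)
   -TERM UTILITY(utility-id)                            (terminate a stopped utility so its objects are usable)
   -DISPLAY THREAD(*) TYPE(INDOUBT)                     (identify in-doubt threads awaiting resolution)
   -RECOVER INDOUBT ACTION(COMMIT) ID(correlation-id)   (manually resolve an in-doubt thread)
   -DISPLAY GROUP                                       (in data sharing, check for failed members holding retained locks)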

Database design must account for the required application data recovery service level for all scenarios.

Are there new tables in the design? Many of these must be added to recovery scripts. Are tables being expanded, or will there be an increase in data or transaction volume? Perhaps this will affect the total recovery time; the DBA may need to revisit existing recovery procedures. Figure 1 lists some things to consider during database design that may affect the DBA’s ability to recover the application’s data.
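
As a minimal sketch (the database, table space, and log point values are hypothetical), each new table space typically needs a full image copy statement in the backup jobs and a matching entry in the recovery jobs:

   COPY TABLESPACE DBPAYRL.TSEMPL
        COPYDDN(SYSCOPY) FULL YES SHRLEVEL REFERENCE

   RECOVER TABLESPACE DBPAYRL.TSEMPL
           TOLOGPOINT X'00000551BE7D'

Note that a point-in-time recovery such as TOLOGPOINT also requires recovering or rebuilding the associated indexes, which adds to total recovery time.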

Design for Availability

After recovery, data availability is the next highest priority. Here, the DBA is concerned with minimizing locking and contention issues and providing current data to applications as quickly as possible.

There are several scenarios in which an application requires high data availability: those that call for high-speed data update and retrieval (perhaps in a 24x7 environment), those that include a data archiving or data retention process, and those that require coordinating DB2 utility execution with concurrent SQL access.

In each of these cases, there are database design choices that may alleviate future contention problems. The most common design solutions involve combinations of horizontal and vertical data partitioning with a method of assigning clustering keys. Such a solution helps avoid data hot spots, which are areas in memory and on DASD where multiple I/Os from multiple transactions may cause contention or locking issues.

Horizontal partitioning is the separation of table rows into two or more physical data sets. It may be accompanied by a similar physical partitioning of some indexes. DB2 partitioned tables let the designer choose a row storage strategy based on the partitioning. Horizontal partitioning will affect the database backup and recovery methods and elapsed times. Figure 2 lists some strategies and their usefulness.
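
A minimal sketch of range partitioning follows; the table, column, and limit values are hypothetical, and the table space definition is omitted:

   CREATE TABLE ORDER_HIST
     (ORDER_ID     INTEGER       NOT NULL,
      ORDER_DATE   DATE          NOT NULL,
      CUST_ID      INTEGER       NOT NULL,
      ORDER_TOTAL  DECIMAL(11,2) NOT NULL)
   PARTITION BY RANGE (ORDER_DATE)
     (PARTITION 1 ENDING AT ('2009-12-31'),
      PARTITION 2 ENDING AT ('2010-06-30'),
      PARTITION 3 ENDING AT ('2010-12-31'));

Each partition can then be copied, recovered, or reorganized independently, which is what shortens the backup and recovery windows discussed earlier.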

Vertical partitioning involves splitting a table vertically—designating some columns to remain in the table while others are moved to another table or tables. In this way, a single table is denormalized into two or more tables, each containing a subset of columns from the original. This is usually accompanied by replicating the primary key of the original table across all the resulting tables. To re-create an original row, the tables must be joined on their primary key columns. The DBA usually chooses vertical partitioning for performance reasons; Figure 3 lists additional decision-making factors.
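
A minimal sketch of vertical partitioning, with hypothetical names (the unique indexes that enforce the primary keys are omitted); the primary key is replicated in both tables, and the original row is re-created with a join:

   CREATE TABLE EMP_CORE
     (EMP_ID    INTEGER     NOT NULL PRIMARY KEY,
      LAST_NAME VARCHAR(40) NOT NULL,
      DEPT_ID   CHAR(3)     NOT NULL);

   CREATE TABLE EMP_DETAIL
     (EMP_ID    INTEGER       NOT NULL PRIMARY KEY,
      RESUME    VARCHAR(4000),
      PHOTO_REF VARCHAR(254));

   SELECT C.EMP_ID, C.LAST_NAME, D.RESUME
     FROM EMP_CORE C
     INNER JOIN EMP_DETAIL D ON D.EMP_ID = C.EMP_ID;

Infrequently accessed, wide columns such as RESUME can then live in their own table space, keeping the heavily read EMP_CORE rows short.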

Key clustering is a technique the DBA uses to ensure inserted rows don't create hot spots in the data. One common way hot spots arise is when new rows are assigned a key based on a sequentially ascending value (such as an identity column or a sequence) and the table is clustered on that key. The result is that newly inserted rows are added at the physical end of the table.

Assuming the DBA implements page-level locking, a row insert requires an exclusive lock on the page where the row is placed, and the lock is held until the transaction issues a commit. Until that commit, SQL statements from other transactions can't access rows on that page.

If transaction volume is low enough, or commits happen immediately after inserts complete, this may not be a problem. For a high-volume, critical business application, however, neither condition may hold. Instead, the DBA has several options for spreading inserts across the pageset to minimize hot spots.

One possibility is to cluster the table based on a column other than the key. Indeed, this is typical for many applications since clustering is best used in instances where SQL statements access many rows, allowing DB2 to use prefetch to increase performance.

Another possibility is to cluster by a surrogate key chosen semi-randomly. Some choices include:

You can also use these methods of key value creation to assign rows to partitions, assuming the table's horizontal partitioning is based on the key column.

The DBA must coordinate such key clustering techniques with index choice because physical clustering of the table is dependent on designating one of the indexes as the clustering index. Since adding a surrogate key to a table to cluster it necessarily means creating a clustering index for that key, this may affect the total number of indexes for the table; it could increase recovery time or decrease SQL insert performance.
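
As one hedged sketch of such a design (all names and values are hypothetical), a small random "spread" column can serve as both the partitioning column and the leading column of the clustering index, so concurrent inserts land on different partitions and pages rather than at the physical end of the table:

   CREATE TABLE TXN_LOG
     (SPREAD_KEY SMALLINT  NOT NULL,   -- semi-random value (1-100) assigned at insert
      TXN_ID     BIGINT    NOT NULL,
      TXN_TS     TIMESTAMP NOT NULL,
      TXN_DATA   VARCHAR(200))
   PARTITION BY RANGE (SPREAD_KEY)
     (PARTITION 1 ENDING AT (25),
      PARTITION 2 ENDING AT (50),
      PARTITION 3 ENDING AT (75),
      PARTITION 4 ENDING AT (100));

   CREATE INDEX TXN_IX1 ON TXN_LOG (SPREAD_KEY, TXN_ID) CLUSTER;

   -- the application assigns the spread value at insert time, for example:
   INSERT INTO TXN_LOG (SPREAD_KEY, TXN_ID, TXN_TS, TXN_DATA)
     VALUES (INT(RAND() * 100) + 1, ?, CURRENT TIMESTAMP, ?);

The trade-off, as noted above, is an extra index and a clustering order that no longer matches any natural retrieval sequence.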

The last issue confronting the DBA in the area of data availability is coordinating SQL access and transaction commits with utility execution, especially the REORG utility. REORG with the SHRLEVEL CHANGE option allows concurrent execution with SQL insert, update, and delete statements. However, at the end of its execution, the utility enters the SWITCH phase. In this phase, the utility attempts to drain all table partitions involved in the REORG. (A drain is the act of acquiring a locked resource by quiescing access to that object.) Since concurrent SQL statements hold claims and locks on the tables, the REORG utility needs these released to complete the SWITCH phase. If the utility can't acquire the drain within a specified amount of time, it fails.

The result is that long-running transactions, or transactions that delay commits, can negatively impact REORG execution. The DBA may need to take this into account during database design. Apart from ensuring that long-running transactions don't occur, the most common mitigation is implementing enough horizontal partitions that each partition being reorganized has only a small footprint. Another alternative is to use third-party tools or utilities that offer more efficient switching options.
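
As a hedged illustration (the object name, mapping table, and values are arbitrary), the drain behavior of an online REORG can be bounded with the DRAIN_WAIT, RETRY, and RETRY_DELAY options, so a missed drain is retried rather than simply failing; MAP_TB1 stands for a previously created mapping table:

   REORG TABLESPACE DBPAYRL.TSEMPL PART 3
         SHRLEVEL CHANGE MAPPINGTABLE MAP_TB1
         DRAIN_WAIT 30 RETRY 6 RETRY_DELAY 120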

Excessive horizontal partitioning may affect recovery processes and also increase the number of open data sets. This may increase the number of logical and physical data set closes and opens DB2 performs to avoid hitting the maximum number of open data sets (specified by the DSMAX system parameter).

Design for Security

Security concerns are divided into three levels: prevention, detection, and correction. We usually prefer to prevent security breaches; if this can’t be done thoroughly, then we require methods of detecting and eventually correcting them.

DB2 security mechanisms, such as authorization IDs and the ability to grant read and write authorities at the table or column level, are usually sufficient to limit application access to authorized users. Therefore, prevention and correction of unauthorized access are best handled by external security packages or a security administrator.
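
For instance (the authorization IDs, table, and columns are hypothetical), table-level and column-level privileges might be granted as follows:

   GRANT SELECT ON TABLE PAYROLL.EMPLOYEE TO PAYAPP1;
   GRANT UPDATE (SALARY, BONUS) ON TABLE PAYROLL.EMPLOYEE TO PAYADM1;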

Database designs involving security considerations typically address the second level—detection—by implementing a change history accumulation process. The DBA may include table columns for audit trail purposes such as LastUpdateDate and UpdateByUser. Another common design might include an audit history table, which would contain rows of before and after images of changed data.
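
A minimal sketch of that second approach, with hypothetical names: an audit history table populated by an after-update trigger that records the before and after images of a sensitive column:

   CREATE TABLE EMP_AUDIT
     (EMP_ID     INTEGER      NOT NULL,
      CHANGE_TS  TIMESTAMP    NOT NULL,
      CHANGED_BY VARCHAR(128) NOT NULL,
      OLD_SALARY DECIMAL(9,2),
      NEW_SALARY DECIMAL(9,2));

   CREATE TRIGGER EMP_AUD1
     AFTER UPDATE OF SALARY ON PAYROLL.EMPLOYEE
     REFERENCING OLD AS O NEW AS N
     FOR EACH ROW MODE DB2SQL
     INSERT INTO EMP_AUDIT
       VALUES (N.EMP_ID, CURRENT TIMESTAMP, CURRENT SQLID,
               O.SALARY, N.SALARY);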

More elaborate schemes of gathering change history include:

Vendor software provides another advantage, as these products are more difficult to circumvent than most database design strategies. If your environment has strict compliance requirements or is subject to a certification or governing body, then standard vendor solutions may provide the best answer.

Design for Performance

Many database design alternatives that enhance application performance are well-known and have appeared in previous articles. The most common arise when dealing with resource constraints; the DBA documents current constraints, analyzes possible trade-offs, and makes design decisions that take the constraints into account while ensuring compliance with the first three laws (recoverability, data availability, and security).

For example, assume that I/O throughput and application elapsed time are constrained. A possible design change to alleviate the constraints is to add indexes that support search predicates and join predicates, perhaps even providing index-only access. The DBA would first ensure that adding new indexes wouldn’t compromise data recoverability (additional indexes add to recovery time) or negatively affect data availability (the new indexes may create hot spots).
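
For instance (the table, columns, and index name are hypothetical), if a frequently run query filters on CUST_ID and ORDER_DATE and returns only ORDER_TOTAL, an index whose key contains all three columns can satisfy it with index-only access:

   SELECT ORDER_TOTAL
     FROM ORDER_HIST
    WHERE CUST_ID = ?
      AND ORDER_DATE BETWEEN ? AND ?;

   CREATE INDEX ORDHIST_IX2
     ON ORDER_HIST (CUST_ID, ORDER_DATE, ORDER_TOTAL);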

Summary and Best Practices

The following is a list of best practices for database design when considering application requirements:

DBAs and data modelers traditionally considered performance issues when developing database designs. They were influenced by several factors, including limited access path choices, few high-performance hardware or software options, and unsophisticated SQL optimizers. Today, hardware, software, and database management systems have matured; databases can potentially grow to multi-terabyte size, and enterprises are implementing critical applications whose recovery and data availability now assume much more importance. Database designers must consider these factors. If they don’t, they risk implementing databases that are unrecoverable or that require expensive and time-consuming redesign.