Minimizing Change in Your DB2 for z/OS Database

For updates to fixed-length rows, and to rows with VARCHARs where the update doesn't change the row length, DB2 logs from the first byte changed to the last byte changed. If the row has a VARCHAR and an update could change the row length, you can minimize logging by placing the VARCHAR before the regularly updated columns. For fixed-length rows, you can minimize logging by placing the regularly updated columns next to each other. Either way, you reduce the amount of data logged, and for a transaction that runs a million times a day doing updates, this design can prove beneficial.
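To make the idea concrete, here is a sketch of the kind of high-volume update transaction being described; the table and column names are invented for illustration and aren't taken from the figures:

    UPDATE ACCOUNT
       SET ACCT_BALANCE   = ACCT_BALANCE + :amount,   -- regularly updated
           LAST_UPDATE_TS = CURRENT TIMESTAMP,        -- regularly updated
           LAST_UPDATE_ID = :userid                   -- regularly updated
     WHERE ACCT_ID = :acctid;

If the three SET columns sit next to each other in the row, the span from the first byte changed to the last byte changed covers only those columns; if they're scattered across the row, every column between them is logged as well.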

Figure 1 shows how the table was originally designed and Figure 2 shows the table redesigned for minimal logging. In the original design, the frequently updated columns are mixed in with columns that never change, so the high-volume transaction that updates the account balance ends up logging almost the entire row: the timestamp and last-update user ID columns also change, and DB2 logs everything from the first byte changed to the last.

In Figure 2, we've moved the three columns that always change next to each other so that the high-volume update transaction logs the minimal amount of data.
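Since the figures aren't reproduced here, the following sketch approximates the idea; the table names, column names and data types are assumptions. ACCOUNT_ORIG scatters the constantly updated columns, while ACCOUNT_NEW groups them together:

    -- Figure 1 style: constantly updated columns mixed with static ones
    CREATE TABLE ACCOUNT_ORIG
      (ACCT_ID        INTEGER       NOT NULL,
       ACCT_BALANCE   DECIMAL(15,2) NOT NULL,   -- updated constantly
       CUST_NAME      VARCHAR(60)   NOT NULL,   -- rarely changes
       LAST_UPDATE_TS TIMESTAMP     NOT NULL,   -- updated constantly
       BRANCH_ID      INTEGER       NOT NULL,   -- rarely changes
       LAST_UPDATE_ID CHAR(8)       NOT NULL);  -- updated constantly

    -- Figure 2 style: the three constantly updated columns grouped together,
    -- with the VARCHAR placed ahead of them
    CREATE TABLE ACCOUNT_NEW
      (ACCT_ID        INTEGER       NOT NULL,
       CUST_NAME      VARCHAR(60)   NOT NULL,
       BRANCH_ID      INTEGER       NOT NULL,
       ACCT_BALANCE   DECIMAL(15,2) NOT NULL,
       LAST_UPDATE_TS TIMESTAMP     NOT NULL,
       LAST_UPDATE_ID CHAR(8)       NOT NULL);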

Be careful with compression, because DB2 treats compressed rows as varying length and assumes the length changes on update. In that case, logging occurs from the first changed byte to the end of the row, so you may want to consider placing highly updated columns toward the end of the row.
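On DB2 for z/OS, compression is enabled on the table space; a minimal sketch, with placeholder database and table space names:

    -- Hypothetical compressed table space: rows stored here are treated as
    -- varying length, so an update is logged from the first changed byte
    -- to the end of the row
    CREATE TABLESPACE ACCTTS IN ACCTDB
      COMPRESS YES;

A highly updated table created in this table space would then benefit from having its most frequently updated columns defined last.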

Splitting High-Update Tables

Another way to minimize change and reduce logging and maintenance costs is to split tables that contain both static columns and heavily updated columns into two separate tables. This type of design works for high-volume tables and requires enough analysis to determine which columns are updated and how often. It works best when some columns are always static and others always change. Moving the columns that always change into their own table can reduce logging as well as the amount of data that needs to be REORGed.

Figure 3 shows how analysis of changes to a high-volume table resulted in splitting the table into a high-update table and a low-update table. The high-update table could then be given its own free space settings and REORG schedule, while the low-update table received less aggressive free space and REORG schedule settings.
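A sketch of that kind of split, with invented table and column names, might look like the following; the volatile columns move to their own table that shares the key:

    -- Low-update table: static, descriptive columns
    CREATE TABLE ACCOUNT_STATIC
      (ACCT_ID    INTEGER     NOT NULL,
       CUST_NAME  VARCHAR(60) NOT NULL,
       BRANCH_ID  INTEGER     NOT NULL,
       OPEN_DATE  DATE        NOT NULL);

    -- High-update table: the constantly changing columns, keyed the same way.
    -- Placing it in its own table space lets it have its own free space
    -- (PCTFREE/FREEPAGE) settings and a more aggressive REORG schedule.
    CREATE TABLE ACCOUNT_ACTIVITY
      (ACCT_ID        INTEGER       NOT NULL,
       ACCT_BALANCE   DECIMAL(15,2) NOT NULL,
       LAST_UPDATE_TS TIMESTAMP     NOT NULL,
       LAST_UPDATE_ID CHAR(8)       NOT NULL);

Queries that need both sets of columns join the two tables on ACCT_ID, which is the trade-off for the reduced logging and maintenance on the static table.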

How DB2 Avoids Change and How to Take Advantage of It

DB2 knows when you're actually changing data. It compares the values of the columns being SET in an UPDATE statement with the current values of the row being updated. If an update doesn't actually change the value of a column, DB2 won't physically update that column and, if possible, won't log it either. We can look at the projected update frequencies of the columns in our tables and adjust the column ordering to take advantage of this DB2 efficiency. So, we want to take the column ordering discussed earlier to a more granular level by making sure the less frequently updated columns come before the more frequently updated columns. Figure 4 shows our efficient column design to minimize logging and how it takes advantage of the update frequency of the columns.
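Figure 4 isn't reproduced here, but the idea can be sketched as follows; the column names and frequency annotations are assumptions, with columns ordered from least to most frequently updated:

    -- Columns ordered by how often their values actually change, least to most,
    -- so the first byte that really changes tends to fall late in the row
    CREATE TABLE ACCOUNT_BY_FREQ
      (ACCT_ID        INTEGER       NOT NULL,   -- never updated
       ACCT_STATUS    CHAR(1)       NOT NULL,   -- updated occasionally
       CREDIT_LIMIT   DECIMAL(15,2) NOT NULL,   -- updated occasionally
       ACCT_BALANCE   DECIMAL(15,2) NOT NULL,   -- updated by most transactions
       LAST_UPDATE_TS TIMESTAMP     NOT NULL,   -- updated by every transaction
       LAST_UPDATE_ID CHAR(8)       NOT NULL);  -- updated by every transaction

If an UPDATE sets ACCT_STATUS to the value it already holds, DB2 detects that the column hasn't changed and, where possible, avoids logging it; the logged range then starts at the first column whose value really did change.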

Dealing With Blind Updates
