Design for Recoverability
Recoverability means the ability to recover a specific subset of application data to a specified point in time. The reasons for such a recovery range from large-scale disasters (such as the loss of a data center) to undoing incorrect data updates made by an application.
In the case of disaster recovery, the DBA must take into consideration that total recovery time includes physical data/media recovery; availability of the operating system, tape, and recovery resources (if applicable); and DB2 subsystem or data sharing group recovery. Once DB2 is available, any in-flight utilities may need to be terminated and in-doubt threads resolved. In a data sharing environment, failed members may hold retained locks. Once these items are addressed, the DBA can turn to the remaining application data recovery issues.
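As a sketch, the post-restart cleanup steps above map to DB2 for z/OS operator commands along these lines (the utility ID and LUWID token shown are hypothetical placeholders; actual values come from the DISPLAY output):

```
-- Find and terminate in-flight utilities (utility ID is hypothetical)
-DISPLAY UTILITY(*)
-TERM UTILITY(RECTBS01)

-- Identify and resolve in-doubt threads (LUWID token is illustrative)
-DISPLAY THREAD(*) TYPE(INDOUBT)
-RECOVER INDOUBT ACTION(COMMIT) LUWID(1)

-- In data sharing, check the status of group members, including any
-- failed members that may still hold retained locks
-DISPLAY GROUP
```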
Database design must account for the required application data recovery service level for all scenarios.
Are there new tables in the design? If so, they must be added to the recovery scripts. Are tables being expanded, or will there be an increase in data or transaction volume? This may affect total recovery time, and the DBA may need to revisit existing recovery procedures. Figure 1 lists some things to consider during database design that may affect the DBA's ability to recover the application's data.
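For each new table space, the recovery scripts typically gain a matching image copy step and a point-in-time recovery step. A minimal sketch of the DB2 utility statements involved (the database, table space, and log point shown are hypothetical):

```sql
-- Take a full image copy of the new table space (names are hypothetical)
COPY TABLESPACE APPDB.NEWTS
     COPYDDN(SYSCOPY)
     FULL YES SHRLEVEL REFERENCE

-- Recover the table space to a prior point in time
-- (the log point value is illustrative)
RECOVER TABLESPACE APPDB.NEWTS
        TOLOGPOINT X'00000000000012345678'
```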
Design for Availability
After recovery, data availability is the next highest priority. Here, the DBA is concerned with minimizing locking and contention issues and providing current data to applications as quickly as possible.
There are several scenarios in which an application requires high data availability: those that call for high-speed data update and retrieval (perhaps in a 24x7 environment), those that include a data archiving or data retention process, and those that require coordinating DB2 utility execution with SQL access.
In each of these cases, there are database design choices that may alleviate future contention problems. The most common design solutions combine horizontal and vertical data partitioning with a deliberate choice of clustering keys. Such a solution helps avoid data hot spots: areas in memory and on DASD where multiple I/Os from multiple transactions may cause contention or locking issues.
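As one illustration, horizontal partitioning with a clustering index that spreads insert activity across partitions might look like the following DB2 DDL sketch (all table, column, and index names and the partition ranges are hypothetical):

```sql
-- Horizontal partitioning by region spreads inserts across partitions
-- instead of concentrating them at the end of a single data set.
CREATE TABLE APP.ACCOUNT_TXN
      (REGION_ID    SMALLINT     NOT NULL,
       ACCOUNT_ID   INTEGER      NOT NULL,
       TXN_TS       TIMESTAMP    NOT NULL,
       TXN_DETAIL   VARCHAR(200))
  PARTITION BY RANGE (REGION_ID)
      (PARTITION 1 ENDING AT (10),
       PARTITION 2 ENDING AT (20),
       PARTITION 3 ENDING AT (30),
       PARTITION 4 ENDING AT (MAXVALUE));

-- Clustering on (REGION_ID, ACCOUNT_ID) keeps each region's rows
-- together while avoiding one ascending-key hot spot for all inserts.
CREATE INDEX APP.IX_ACCT_TXN
    ON APP.ACCOUNT_TXN (REGION_ID, ACCOUNT_ID)
    CLUSTER;
```

A side benefit for recoverability is that partitions can be copied and recovered independently, which can shorten the recovery window for the most active data.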