
Backup and Recovery with IBM DB2 for Linux, UNIX, and Windows in an SAP Environment
Part 2: Backup Best Practices
Version 1.0

SAP DB2 for LUW Platform Team

Revision date: 3.3.2011
Authors: Sven-Uwe Kusche, Malte Schünemann, Thomas Matthä, Friedemann Albrecht

DB2 for LUW Backup Best Practices

About This Paper


A database backup is one of the most important maintenance procedures for a database. Because backup jobs can impact the performance of the system that is being backed up, they usually run when the regular database workload is minimal. If you improve your backup performance and use a proper backup strategy, you can reduce the backup duration and thereby increase your overall business productivity.

This whitepaper describes the following:
- Backup internals
- Ways to tune backup performance
- Basic backup strategies
- Redirected RESTORE operations


About the Authors


Sven-Uwe Kusche
Sven-Uwe is a certified SAP Technology and OS/DB Migrations Consultant and has been working for IBM since 1997. Before joining the SAP DB6 Center of Excellence in 1999, he was involved in numerous migration projects. Sven-Uwe is a graduate engineer (Informatics) of HTM Mittweida, Germany.

Malte Schünemann
Malte is a certified DB2 Administrator and has been with IBM since 1993. He has been working in SAP DB2 development support since 1997. Malte studied physics at TH Karlsruhe, Germany, and at ETH Zurich, Switzerland.

Friedemann Albrecht
Friedemann has been working for IBM since 1991, since 1993 as an SAP Technology Consultant. He is a certified SAP Technology and OS/DB Migrations Consultant. He gathered work experience in numerous SAP Basis and development projects and joined the SAP DB6 Center of Excellence in 2006. Friedemann is a graduate engineer (Mechanical Engineering) of Technical University Chemnitz, Germany, and a specialist engineer (Production Process Control) of Otto-von-Guericke University Magdeburg, Germany.

Thomas Matthä
Thomas is a certified SAP Technology Consultant and DB2 Administrator and has been working for IBM since 2001. Before joining the SAP DB2/UDB platform team in 2006, he was responsible for the integration of Informix databases with SAP. Thomas studied Electrical Engineering at TU Karlsruhe, Germany.


Table of Contents
ABOUT THIS PAPER
ABOUT THE AUTHORS
1 INTRODUCTION
2 DB2 BACKUP TECHNICAL OVERVIEW
  2.1 Configuration
  2.2 Monitoring
  2.3 Backup Image Verification
  2.4 Instance Configuration and Registry
  2.5 Aborting an Online Backup
3 BACKUP AT PHYSICAL LEVEL
  3.1 Required Files for a Physical Backup
  3.2 Flash Copy Backups
  3.3 Creating a DB2 Backup Image from a Physical Copy
4 BACKUP TUNING
  4.1 File System Caching Considerations
  4.2 Extent Size Considerations
  4.3 Influence of the High-Water Mark (HWM)
  4.4 Tablespace Distribution
  4.5 Prefetcher Configuration
  4.6 Database Size Considerations
  4.7 Backup Tuning Example
  4.8 Backup Optimization Recommendations
5 BASIC BACKUP STRATEGIES AND BEST PRACTICES
  5.1 Avoiding Data Loss and Ensuring Recoverability
  5.2 Backup Scheduling and Concurrency Considerations
  5.3 Types of Data to Be Backed Up
    5.3.1 Transaction Log Files
    5.3.2 Configuration Information
    5.3.3 Recovery History Information
    5.3.4 Tablespace Data
6 SPECIAL BACKUP AND RESTORE SCENARIOS
  6.1 Adaption of the Database Manager and the Database Configuration Parameters after a Restore to a (Smaller) Test System
    6.1.1 Database Configuration Refresh
    6.1.2 Buffer Pools
    6.1.3 Configuration Setting for Memory
    6.1.4 Log Files
  6.2 Redirected RESTORE
    6.2.1 Creating a Script for Redirected RESTORE Using SAP Tool brdb6brt
    6.2.2 Common Scenarios for the Use of brdb6brt
    6.2.3 Example
    6.2.4 Creating a Script for Redirected RESTORE from a Backup Image
    6.2.5 Adapting the Generated Redirected RESTORE Script
    6.2.6 Performing the Restore


1 Introduction
This chapter highlights some DB2 architectural components with respect to the backup and recovery architecture. It explains the DB2 objects that are relevant for backups and the different backup methods, such as offline and online backup. Furthermore, it describes the backup process flow, how to influence the interaction of the processes involved, the backup architecture, and the process of DB2 log archiving. Based on this information, you can develop your backup and recovery strategy.

This white paper is part of the IBM Backup and Recovery White Paper Collection. For more information about backup and recovery, see A Practical Guide to DB2 LUW Backup and Recovery in SAP Environments, Part 1: Backup and Recovery Overview.

2 DB2 Backup Technical Overview


DB2 database backups are performed under the control of the DB2 database manager (also known as the DB2 instance). Due to this architecture, the native DB2 backup utility reads only the content of the containers of the specified tablespaces up to the high-water mark, plus the configuration information (metadata such as container locations, container sizes, and database configuration values), and writes this data into a DB2-specific backup image. Note that the DB2 backup utility does not copy database container files to a target directory or target device.

The following is an example of a backup to disk:

- UNIX and Linux: Each backup is created in the backup directory specified in the backup command and is identified by a unique name that consists of several elements separated by periods:
  <Backup_Directory>/<DB_alias>.<Type>.<Inst_name>.NODE<nnnn>.CATN<mmmm>.<timestamp>.<Seq_num>
  Example: NL4.0.DB2NL4.NODE0000.CATN0000.20090822120112.001
- Windows: The backup image is created in the backup directory specified in the backup command. The backup creates its own subdirectory path in this directory, for example:
  <Backup_Directory>\<DB_alias>\<Type>\<Inst_name>\NODE<nnnn>\CATN<mmmm>\<timestamp>.<Seq_num>
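As a small illustration of the UNIX/Linux naming convention, the following sketch composes an image name from its components and splits one back apart. All values are taken from the example above; this is our own helper snippet, not a DB2 tool.

```shell
# Compose a UNIX/Linux backup image name from its components
# (values taken from the example above).
db_alias=NL4         # database alias
type=0               # backup type
inst=DB2NL4          # instance name
node=NODE0000        # database partition number
catn=CATN0000        # catalog partition number
ts=20090822120112    # backup timestamp
seq=001              # sequence number
image="${db_alias}.${type}.${inst}.${node}.${catn}.${ts}.${seq}"
echo "$image"

# Split an image name found on disk back into its components at the
# periods, e.g. to recover the backup timestamp (field 6):
old_ifs=$IFS
IFS=.
set -- $image
IFS=$old_ifs
stamp=$6
echo "timestamp=$stamp"
```

Splitting the name this way is handy in housekeeping scripts, for example to delete images older than a given timestamp.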

A corresponding log entry is written to DB2's backup and recovery history file. To decrease the size of the backup image, you can use the COMPRESS parameter.

The backup architecture works as follows: The db2agent engine dispatchable unit (EDU) communicates with and controls two types of special EDUs (db2med and db2bm) that are started with the backup:
- The db2med EDUs are media controllers. They transfer data from backup buffers that are full. The number of db2med EDUs depends on the number of backup devices or sessions.
- The db2bm EDUs are buffer manipulators. They move data from tablespaces to backup buffers (not buffer pools). One buffer manipulator works on one tablespace at a time. The largest tablespaces are backed up first. To read data from containers, the db2bm EDUs use prefetchers.

The backup processing has been improved multiple times over the DB2 releases and Fix Packs as follows:


Traditional behavior (all earlier DB2 versions):
- One db2bm EDU works on one tablespace at a time.
- One prefetcher reads from one container of a stripe set; several containers can be read in parallel.
- One prefetcher reads one extent at a time.
- If in use, file system caching (read-ahead) improves read performance.
- Latch contention can occur during online backup. As a result, prefetchers on the second and subsequent container files might often have to wait.

DB2 9.1 FP7 and higher, DB2 9.5 FP4 and higher:
- One db2bm EDU works on one tablespace at a time.
- During online backup, one prefetcher reads a set of consecutive extents (horizontally) that are distributed over the containers of a stripe set (a so-called mini range); several prefetchers can work for one db2bm EDU at a time.
- Prefetchers still execute one-extent I/O at a time.
- File system caching (read-ahead) does not contribute to backup performance.
- Latch contention is significantly reduced. As a result, all prefetchers can work simultaneously. Tests show a significant performance increase.
- The backup performance increase, however, can only be achieved with stripe sets that do not contain more than 32 containers.

DB2 9.1 FP10 and higher, DB2 9.5 FP6 and higher, DB2 9.7 FP1 and higher:
- DB2 detects the number of containers per stripe set. For tablespaces with stripe sets of more than 32 containers, DB2 switches back to the traditional (pre 9.1 FP7 / 9.5 FP4) way of backup read processing.
- One db2bm EDU works on one stripe set of a tablespace at a time. One prefetcher reads from one container of a stripe set; several containers can be read in parallel. Each prefetcher reads multiple extents at a time.
- No latch contention occurs during online backup because the prefetchers work independently for separate db2bm EDUs. This way, the highest read performance of online backup can be achieved.
- If there is only one container in the tablespace or per stripe set, DB2 switches back to the traditional (pre 9.1 FP7 / 9.5 FP4) way of backup read processing.

Generally, we recommend that you upgrade to the latest version of DB2 to take advantage of the most recent improvements.

2.1 Configuration

The configuration of the database manager and the database determines the following resources that are related to backup performance:


- Utility heap: The database configuration (DB CFG) parameter UTIL_HEAP_SZ governs the amount of memory available for backup, restore, and load operations. Note that you cannot run any of these utilities on the same data at the same time.
- Configured number of prefetchers: Data read requests for DB2 backups are served by the DB2 prefetchers, as configured through the database parameter NUM_IOSERVERS.
- Throttling: A database backup can have a significant performance impact on concurrent application processes. Using utility throttling, you can regulate the impact of database utilities on the database during production periods. The database manager configuration (DBM CFG) parameter UTIL_IMPACT_LIM restricts the impact of a utility on the database to a certain percentage depending on the parameter value. For example, a value of 10 allows a backup process to impact the system workload by no more than 10 percent. If you use utility throttling, you need to take the following into account:
  - Backup throttling only occurs if you assign a priority value between 1 and 100 to the UTIL_IMPACT_PRIORITY parameter of the BACKUP DATABASE command or if you apply a priority to an already running backup as follows:
    db2 set UTIL_IMPACT_PRIORITY for <utility_id> to <priority>
    (For the utility ID, check the output of db2 list utilities show detail.)
  - Minimizing the impact of a utility by throttling has the following disadvantage: utility throttling causes utilities, and thereby backup activities, to run longer.

As of DB2 Version 8.2, the backup utility is enabled for self-tuning. If not specified, DB2 automatically chooses the number of buffers, the buffer size, and the parallelism depending on the number of processors, the database configuration (DB CFG), and the configured utility heap memory (the DB CFG parameter UTIL_HEAP_SZ). If you want to configure the number of buffers, the buffer size, and the parallelism manually, consider the following:

- Parallelism: The parameter PARALLELISM indicates the number of tablespaces that are backed up in parallel, starting with the largest tablespace. In particular, if a single tablespace consumes a large part of the total database size, this tablespace can dominate the runtime of the backup. Do not configure the PARALLELISM parameter to be higher than the number of tablespaces in your database. If you configure a higher parallelism than NUM_IOSERVERS, the backup utility cannot benefit from the specified degree of parallelism.
- Backup buffers: The backup buffers collect the data to be written to the backup image. The number and size of the backup buffers are provided by the command line parameters WITH <number of buffers> BUFFERS and BUFFER <buffer-size>. To determine the number of buffers, you can use the following formula:
  #backup buffers = #output devices + parallelism + 2
  The backup buffer size should be a multiple of the largest extent size. The SAP default extent size is 2 pages.

The DB2 diagnostic log file contains information about parallelism, backup and recovery heap (BAR heap), buffer size, and number of buffers as of the following Fix Packs:
- DB2 9.1 FP7 and higher
- DB2 9.5 FP4 and higher
- DB2 9.7 FP0 and higher
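To make the buffer formula concrete, the following sketch computes the suggested number of backup buffers for a hypothetical backup to two output devices with PARALLELISM 4. The figures are invented for illustration only.

```shell
# #backup buffers = #output devices + parallelism + 2
output_devices=2   # hypothetical: two backup devices/sessions
parallelism=4      # hypothetical PARALLELISM setting
buffers=$((output_devices + parallelism + 2))
echo "WITH ${buffers} BUFFERS"
```

With these example values, the formula suggests specifying WITH 8 BUFFERS on the BACKUP DATABASE command.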

With earlier Fix Pack levels, the db2diag.log shows the following information with DIAGLEVEL 4 only.


2009-04-07-10.49.21.597395+120 E253547G483        LEVEL: Info
PID     : 7307                 TID  : 1195371424  PROC : db2sysc 0
INSTANCE: db2w50               NODE : 000         DB   : W50
APPHDL  : 0-12                 APPID: *LOCAL.db2w50.090407084918
AUTHID  : DB2W50
EDUID   : 35                   EDUNAME: db2agent (W50) 0
FUNCTION: DB2 UDB, database utilities, sqluxGetDegreeParallelism, probe:507
DATA #1 : <preformatted>
Autonomic backup/restore - using parallelism = 16.
Autonomic BAR - heap consumption. Targetting (90%) - 8973 of 9970 pages.
Autonomic backup - tuning enabled. Using buffer size = 553, number = 16.

2.2 Monitoring

To monitor the backup process, use the following command:
db2 list utilities show detail

pcibm16:db2w50> db2 list utilities show detail
ID                              = 489
Type                            = BACKUP
Database Name                   = W50
Partition Number                = 0
Description                     = online db
Start Time                      = 08/11/2009 09:38:22.280464
Throttling:
   Priority                     = Unthrottled
Progress Monitoring:
   Estimated Percentage Complete = 14
   Total Work                   = 8034538965 bytes
   Completed Work               = 1093299133 bytes
   Start Time                   = 08/11/2009 09:38:22.386954
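The progress counters in the sample output relate to each other in a simple way: the completion percentage is essentially Completed Work divided by Total Work. The following sketch recomputes it from the values shown above; the built-in estimate can differ slightly because DB2 updates it continuously while the counters advance.

```shell
total_work=8034538965      # bytes, from the sample output above
completed_work=1093299133  # bytes, from the sample output above
pct=$((completed_work * 100 / total_work))
echo "${pct}% complete"
```

Recomputing the percentage like this in a monitoring script lets you track backup progress over time, for example by polling db2 list utilities show detail in a loop.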

In addition, you can check the tablespace status during an online backup. This does not work during an offline backup because it requires a connection to the database. Only the following states are normal during a backup: Normal and Backup in Progress. To check the tablespace status, use the following SQL statement:

db2 "select TBSP_ID, substr(TBSP_NAME,1,14) as TBSP_NAME, TBSP_PAGE_TOP as HWM_IN_PAGES, substr(TBSP_STATE,1,18) as STATUS from SYSIBMADM.TBSP_UTILIZATION"

The status Backup in Progress means that this tablespace is currently being backed up or is still in the backup queue. The order of tablespace backups is determined by their size, beginning with the largest tablespace.


You can obtain equivalent output using the DBA Cockpit. To do so, call transaction DBACOCKPIT and choose Space -> Tablespaces in the navigation frame of the DBA Cockpit.

2.3 Backup Image Verification

To check the integrity of a backup image and to determine whether the image can be restored, you can use the db2ckbkp tool. The db2ckbkp tool can also be used to display the metadata that is stored in the backup header:

db2ckbkp -H <backup image>
...
Server Database Name      -- W50
Timestamp                 -- 20081203121747
Database Partition Number -- 0
Instance                  -- db2w50
Backup Mode               -- 0
Includes Logs             -- 0
Compression               -- 0
Backup Type               -- 0
Backup Gran.              -- 0
Status Flags              -- 1
...

The output of db2ckbkp shows the characteristics of the backup image:
Backup Mode:          0=offline, 1=online
Include Logs:         0=exclude logs, 1=include logs
Compression:          0=no compression, 1=compression
Backup Type:          0=full, 3=tablespace
Backup Gran[ularity]: 0=normal, 16=incremental, 48=delta
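The numeric header fields can be decoded mechanically in scripts. The following helper functions are our own illustration (not part of db2ckbkp) and simply map the documented values to readable text:

```shell
# Map the numeric db2ckbkp header fields to readable text
# (value meanings as listed above).
decode_mode() { case "$1" in 0) echo offline ;; 1) echo online ;; esac; }
decode_logs() { case "$1" in 0) echo "exclude logs" ;; 1) echo "include logs" ;; esac; }
decode_type() { case "$1" in 0) echo full ;; 3) echo tablespace ;; esac; }
decode_gran() { case "$1" in 0) echo normal ;; 16) echo incremental ;; 48) echo delta ;; esac; }

# The sample header (Backup Mode 0, Backup Type 0, Backup Gran. 0)
# therefore describes an offline, full, normal backup:
echo "$(decode_mode 0), $(decode_type 0), $(decode_gran 0)"
```

Such helpers are useful when post-processing db2ckbkp -H output in backup verification scripts.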


You can run db2ckbkp for backup images on disk and on tape. It does not work for images located in external storage systems, for example, Tivoli Storage Manager (TSM) or Legato. Use the VERIFY option of the db2adutl command to check the integrity of a backup image stored in TSM.

2.4 Instance Configuration and Registry

If a database RESTORE is performed on a target system using a backup created on a source system, the database manager configuration is not included in the backup. To synchronize the database manager (DBM) configuration parameters between the source and the target system, you can use the tools db2cfexp and db2cfimp: You export the DBM configuration from the source system using db2cfexp and then import the settings on the target system using db2cfimp. These tools export or import the following configuration information into or from a configuration profile:
- Database information
- Node information
- Database manager configuration settings
- Registry settings

To export the database manager configuration, enter the following command on the source system (as user db2<dbsid>):
db2cfexp <filename> BACKUP

At this point, you can make changes to the exported database manager configuration. If you are moving to different hardware, you might have to adapt several settings, for example, the DBM CFG parameter INSTANCE_MEMORY, the number of agents, or several paths such as LOGPATH.

To import the exported database manager configuration parameters, enter the following command on the target system (as user db2<dbsid>):
db2cfimp <filename> BACKUP

After importing the database manager configuration to the target instance, you may want to adapt the database manager configuration parameters to your needs before you start the database manager. Some parameters in the exported configuration might cause problems after they have been imported to the target system. In particular, you should check the following parameters:
- SVCENAME: This database manager configuration parameter usually contains a service name (for example, sapdb2<dbsid>) that is resolved to a port number via the services file. Check whether the respective entry exists in the services file on your target system, or update the parameter to an existing service name.
- INSTANCE_MEMORY (DB2 9.5 and higher only): Generally, you have to set INSTANCE_MEMORY to an appropriate value on your target system, depending on your hardware capabilities. For details, see chapter 6.1 (Adaption of the Database Manager and the Database Configuration Parameters after a Restore to a (Smaller) Test System).

2.5 Aborting an Online Backup

To stop an online backup, you first have to retrieve the application handle number of the backup. To do so, use the following DB2 command:

db2 list applications show detail

Alternatively, you can retrieve the application handle number by calling SAP transaction DBACOCKPIT and choosing Performance -> Applications in the navigation frame of the DBA Cockpit.


On the command line and in the DBA Cockpit, you can see an application with the description Backing Up a Database or the request type Backup (as of DB2 9.7). To stop the online backup, note the application handle number. To abort the application, enter the following command on the command line: db2 "force application (<application-handle-number>)"


3 Backup at Physical Level


In the previous chapter of this document, we described the parameters and architecture of the DB2 backup utility. It is, however, also possible to back up your database by taking a physical copy of the database files. In this case, you have to consider the following:

- You need to ensure that the data is not changed during the copy process to guarantee data consistency.
- You need to ensure the completeness of the data to be copied. This means that all data required for the database to be operational needs to be copied.

You must prevent the database manager from updating any files by either taking the database offline or setting the database to write suspend mode. If you do not prevent the database from writing to the files, the physical copy will be useless.

To set write suspend mode for a database, use the following command through an existing database connection:
db2 set write suspend for database

This command causes the database to block all write access to database files until you resume I/O. After finishing the physical copy process, you have to re-enable the database write processes by issuing the following command:
db2 set write resume for database

Both commands act on the database partition that you are connected to. To suspend I/O on all database partitions in a partitioned database, use the db2_all utility with the following syntax:
db2_all "db2 set write {suspend | resume} for database"

While in write suspend mode, the database does not perform any writes to any database files, including the DB2 log files. As a result, you will observe that all current work is paused at commit time. In addition, snapshots against a database that is in write suspend mode do not return any data, and any attempt to increase a tablespace container fails.

Once you have created a physical copy of your database, you can use it as a backup image for later recovery, or you can create a DB2 backup image from the copy. You can create a physical copy of your database at file level, for example, by using operating system copy commands such as cp, or by using advanced snapshot or flash copy technologies of your file system or storage subsystem. Only the second option will outperform a DB2 backup.
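The suspend/copy/resume sequence described above can be sketched as a script. In this sketch the db2 command is stubbed out with an echo so that it is a dry run only, and the copy target path is hypothetical; remove the stub and fill in the real paths reported by SYSIBMADM.DBPATHS to run it for real.

```shell
# Dry-run sketch of a file-level backup with suspended I/O.
# 'db2' is stubbed with echo here so the sketch has no side effects.
db2() { echo "would run: db2 $*"; }

db2 set write suspend for database   # block all writes, including log files
# ... copy every path reported by SYSIBMADM.DBPATHS here, e.g.:
# cp -rp /db2/DEV /backup/DEV        # hypothetical source and target paths
db2 set write resume for database    # re-enable writes
```

Keeping the copy step between suspend and resume as short as possible limits the time during which commits are paused, which is why snapshot and flash copy technologies are preferable to plain cp for large databases.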

3.1 Required Files for a Physical Backup

For a valid backup, the following data is required:
- Content of the database directory
- Content of the log path
- All DMS tablespace containers
- All SMS container paths, including content
- Content of the automatic storage paths

As of DB2 V9.1, you can use the administrative view SYSIBMADM.DBPATHS to obtain a complete list of all paths and files required for a physical backup.

db2 "select dbpartitionnum as part, substr(type,1,18) as type, substr(path,1,64) as path from sysibmadm.dbpaths"

PART TYPE               PATH
---- ------------------ ----------------------------------------------------------------
   0 LOGPATH            /db2/DEV/log_dir/NODE0000/
   0 DB_STORAGE_PATH    /db2/DEV/sapdata1/
   0 TBSP_CONTAINER     /db2/DEV/sapdata4/NODE0000/DEV#FACTI.container003
   0 TBSP_CONTAINER     /db2/DEV/sapdata4/NODE0000/DEV#FACTD.container003
   0 TBSP_DIRECTORY     /db2/DEV/sapdata4/NODE0000/temp16/PSAPTEMP16.directory003/
   0 DBPATH             /db2/DEV/db2dev/NODE0000/SQL00001/
   0 LOCAL_DB_DIRECTORY /db2/DEV/db2dev/NODE0000/sqldbdir/
   1 LOGPATH            /db2/DEV/log_dir/NODE0001/
   1 DB_STORAGE_PATH    /db2/DEV/sapdata1/
   1 TBSP_CONTAINER     /db2/DEV/sapdata4/NODE0001/DEV#FACTI.container003
   1 TBSP_CONTAINER     /db2/DEV/sapdata4/NODE0001/DEV#FACTD.container003
   1 TBSP_DIRECTORY     /db2/DEV/sapdata4/NODE0001/temp16/PSAPTEMP16.directory003/
   1 LOCAL_DB_DIRECTORY /db2/DEV/db2dev/NODE0001/sqldbdir/
   1 DBPATH             /db2/DEV/db2dev/NODE0001/SQL00001/

  416 record(s) selected.

3.2 Flash Copy Backups

There are disk storage systems and file systems that allow you to take instantaneous copies of the file systems that contain the database files. Some of these storage subsystems use mirrored disks (that is, pairs of disks) and allow you to split this mirror to obtain a copy of your database files. Others allow you to take a very fast snapshot of the database files, which can then be copied to separate disks later. To back up your database with such devices, you perform the following actions:
1. Set the database to write suspend.
2. Take a complete mirror copy or snapshot of the file systems.
3. Resume I/O on your database.

This procedure is commonly referred to as backup via flash copy or backup via split mirror. Throughout this paper, we use the term flash copy for this procedure. As of DB2 9.5, you can also use DB2 Advanced Copy Services (ACS) to take flash copy backups using DB2 commands. DB2 ACS fully automates the complete flash copy procedure. In addition, it writes any flash copy backup to the DB2 history file and supports automated recovery from a flash copy backup. For more information about DB2 ACS, refer to the following documentation:
- DB2 V9.5: http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.admin.ha.doc/doc/c0052870.html
- DB2 9.7: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.ha.doc/doc/t0052887.html

3.3 Creating a DB2 Backup Image from a Physical Copy

Once you have taken a file system copy or flash copy of your database, you can create a DB2 backup image from the copy. In the following procedure, we use the term source machine for the machine where the database files were copied from and the term backup machine for the machine where the copied files are mounted to create the DB2 backup image.

First, execute the following preparation steps on the backup machine:
- Install the DB2 software on the backup machine with the same major version as on the source machine. The same Fix Pack level is not required; however, a recent code level is beneficial.


- Make the physical copy available on the backup machine in the same locations as on the source machine. If you performed a flash copy backup, mount the file systems to mount points on the backup machine with the same paths as on the source machine.
- Create an instance-owning user and a DB2 instance.

To reach a consistent state for the database, perform the following steps on the backup machine:
1. Catalog the database. In an SAP environment, you usually use the SAP system ID as database name. Therefore, use the following command:
   db2 catalog database <sapsid> on /db2/db2<sapsid>
2. Start the database instance:
   db2start
3. Bring the database into a consistent point in state Rollforward Pending using the following command:
   db2inidb <dbsid> as standby
4. Run your backup command on the split-off database that is mounted to your backup server. Note that the database is in status Rollforward Pending. To create a valid backup image that can be used for recovery with log files from the source database, you must keep the split-off database in this state.

If you want to use the source machine as your backup machine, you need to use different mount points. In this case, use the RELOCATE USING option to adjust your copied database files to the new mount points.

The following list provides access to supplementary information:
- For information about flash copy, see the following article, which provides a good overview of the features available with suspended I/O: http://www.ibm.com/developerworks/data/library/techarticle/0204quazi/0204quazi.html
- The following document (IBM ESS; see the article Using IBM TotalStorage Enterprise Storage Server (ESS) with the IBM DB2 UDB Enterprise Edition V7.2 and SAP R/3) provides a good example of a complete split mirror procedure: ftp://ftp.software.ibm.com/software/dw/dm/db2/0210nomani/0210nomani.pdf


4 Backup Tuning
There are various options to tune the backup process. The backup performance is influenced by factors such as the amount and the distribution of the data that is being backed up, as well as by the method of how the data is accessed. You can modify some of these factors by changing the configuration of your database or your operating system. Other factors are governed by the layout of the tablespaces or of the file systems that host the database.

To tune the backup process, you need to understand how it works: For operational processing as well as for backups, data is retrieved by the prefetcher processes. The buffer manipulators collect the data in the backup buffers. The buffers are then written to the backup device. In this way, the size of the backup buffers is the key size unit that influences the backup runtime for a database with a given layout. The backup target device can be a tape device, a storage system, or a file at file system level.

For data retrieval, there are two levels of parallelism:
- Tablespace parallelism: This is the number of tablespaces that are backed up at the same time. The command line parameter PARALLELISM determines this value.
- Data retrieval: This is the number of prefetchers that feed a backup buffer. This value derives from the number of containers of the tablespace that is currently being backed up and the number of configured prefetchers.

Because the prefetchers serve online transactions as well as backup requests, an improper configuration can lead to contention for the prefetchers during online backups. It might therefore be meaningful to increase the number of prefetchers if you run online backups during heavy system activity.

4.1 File System Caching Considerations

The traditional approach used in all operating systems is to buffer the data that is read from or written to disk via the file system layer in memory. This memory buffer is called the file system cache. The file system decides according to internal rules when the data in the file system cache is synchronized with the disk. The file system also ensures basic data integrity through a locking mechanism that serializes data modification at file level.

DB2 has its own caching in memory through the buffer pools. The file system cache adds little value but results in double caching, which consumes extra memory and adds processing overhead. Therefore, it is typically beneficial to avoid file system caching for data that is buffered in the DB2 buffer pools. In addition, DB2 already ensures the integrity of its tablespace container files. The locking mechanism of the file system does not add value but causes additional contention. Such contention typically becomes noticeable as a performance slowdown when a container exceeds a size of 15 to 20 GB. It is therefore advisable to turn off file system locking if possible.

On the other hand, file system caching can have a positive influence on backup performance. File system caching works on larger portions of data, which are typically larger than the DB2 extent size. The backup process then benefits from the read-ahead mechanism of the file system cache. This performance benefit for the backup process needs to be weighed against the additional memory cost and locking overhead.

There are two different flavors of operating DB2 on a file system without file system caching:
- Turning off both file system caching and locking is generally referred to as Concurrent I/O (CIO). If CIO is available and supported, we recommend this setting. It positively influences the performance of your online transaction processing and avoids overhead from double caching and file system locking.
- If only file system caching is turned off but locking is retained, this is known as Direct I/O (DIO). For databases with a high number of concurrent processes, as in SAP environments, there is relatively little benefit in using DIO as opposed to file system caching.

Refer to the following documentation to find out whether your file system supports CIO or DIO:
DB2 V9.1: http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.admin.doc/doc/t0023622.htm
DB2 V9.5: http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.admin.dbobj.doc/doc/c0051304.html
DB2 9.7: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=/com.ibm.db2.luw.admin.dbobj.doc/doc/c0051304.html

Theoretically, you can activate CIO or DIO for a complete file system using a mount option. If you configure CIO or DIO via the mount option, the DB2 settings do not have any effect. We strongly recommend that you use only DB2 means to configure caching at tablespace level. You configure file system caching at DB2 level per tablespace using the CREATE TABLESPACE or ALTER TABLESPACE command:

{CREATE | ALTER} TABLESPACE ... [NO] FILE SYSTEM CACHING ...

Note: For the new settings to take effect, all connections must be terminated.

You can determine the current setting of the file system caching attribute from the tablespace snapshot as follows:

db2 get snapshot for tablespaces on <dbname>

            Tablespace Snapshot
...
 Tablespace name                     = SYSCATSPACE
 Tablespace ID                       = 0
...
 File system caching                 = No
...

The same information is available if you use the snapshot table function SYSIBMADM.SNAPTBSP. The desired information is displayed in the FS_CACHING column. Possible values are 0, 1, and 3:

Value  Meaning
0      Use file system caching
1      Do not use file system caching
3      Use the default for the file system being used.
       This value also means that the file system caching attribute has not
       been changed since tablespace creation. For the tablespace, there is
       no such command as "return to default".

You can use the following SQL query: db2 "select tbsp_name, fs_caching, dbpartitionnum from sysibmadm.snaptbsp"


TBSP_NAME                                FS_CACHING DBPARTITIONNUM
---------------------------------------- ---------- --------------
SYSCATSPACE                              1          0
TEMPSPACE1                               0          0
USERSPACE1                               1          0
TEMPSPACE1                               0          1
USERSPACE1                               1          1

  5 record(s) selected.
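If you want to spot quickly which tablespaces still use file system caching, you can filter the query result with a small shell pipeline. The following is a hedged sketch: the sample data below is a hard-coded stand-in for the output of the SQL query above (for example, captured with db2 -x), not live data.

```shell
# Stand-in for:
#   db2 -x "select tbsp_name, fs_caching, dbpartitionnum from sysibmadm.snaptbsp"
# (hard-coded sample; on a real system, capture the command output instead)
snaptbsp_sample='SYSCATSPACE 1 0
TEMPSPACE1 0 0
USERSPACE1 1 0
TEMPSPACE1 0 1
USERSPACE1 1 1'

# FS_CACHING = 0 means file system caching is ON for that tablespace
cached_tbsp=$(printf '%s\n' "$snaptbsp_sample" | awk '$2 == 0 { print $1 }' | sort -u)
echo "$cached_tbsp"
```

For the sample data above, the pipeline reports TEMPSPACE1 as the only tablespace that still uses file system caching.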

4.2 Extent Size Considerations

The tablespace extent size influences backup performance in two ways.

Firstly, the db2 backup utility reads data in extents. Larger extents result in larger I/O blocks, which can increase backup performance. On the other hand, a larger extent size increases space consumption and might result in wasted space, in particular in tablespaces with many empty or small tables. Larger I/O blocks can also be achieved by turning file system caching on (at the expense of additional memory consumption and possible locking contention).

Secondly, larger extent sizes reduce the amount of fragmentation. If your tablespace mostly consists of small groups of extents with data, separated by small groups of free extents, the tablespace is said to be fragmented. Such fragmentation can affect the performance of an online backup, because the backup processes one contiguous block of used pages in the tablespace at a time and then moves on to the next block. As the tablespace data might have changed in between, the list of contiguous blocks has to be rebuilt each time, which can increase the backup runtime. Fragmentation can occur after an operation has returned entirely free extents to the tablespace. Such operations are REORG, TRUNCATE TABLE with DROP STORAGE (DB2 9.7 and higher), and DROP TABLE. Fragmentation from DROP TABLE can also occur if you convert many empty tables in a tablespace to virtual tables. If you are using DB2 9.7 with reclaimable storage tablespaces, DB2 removes fragmentation holes and compacts the tablespace on an ALTER TABLESPACE REDUCE operation.

Generally, you should stay with the SAP standard extent size 2 unless you have clear evidence that a larger extent size can help you increase backup performance significantly. You can find more information about extent sizes in SAP Note 1493932.

4.3 Influence of the High-Water Mark (HWM)

The high-water mark (HWM) is the highest extent used in a tablespace. For every tablespace, the backup processes all pages below the high-water mark, but it only saves the extents that contain data. The HWM therefore has a direct influence on the backup runtime and the number of extents processed. Use SYSIBMADM.SNAPTBSP_PART to obtain information about the HWM, as in the following example:

SELECT SNAPSHOT_TIMESTAMP, TBSP_NAME, TBSP_PAGE_TOP, DBPARTITIONNUM FROM SYSIBMADM.SNAPTBSP_PART

If you have free space in your tablespaces below the HWM, you should consider reducing the HWM as far as possible to reduce the backup runtime. Prior to DB2 9.7, there is no reliable way to reduce the HWM other than a complete move of all data of the tablespace to a new tablespace. You can use the DB6CONV tool to move tables online or offline from one tablespace to another. For more information, see SAP Note 362325. As of DB2 9.7, you can use reclaimable storage to compact your tablespaces and return unused storage to the system. Reclaiming storage is an online operation; it does not impact the availability of
data to users. For automatic storage tablespaces, an automatic reduction of the high-water mark is included when you reduce the tablespace size. For DMS tablespaces, use ALTER TABLESPACE LOWER HIGH WATER MARK. You can check the reclaimable storage attribute as follows:

db2 "SELECT varchar(tbsp_name, 20) as tbsp_name, reclaimable_space_enabled from table(MON_GET_TABLESPACE('',-2)) as t"

In the output, the tablespace name and the attribute are displayed. An attribute value of 1 means that the tablespace uses reclaimable storage:

TBSP_NAME            RECLAIMABLE_SPACE_ENABLED
-------------------- -------------------------
SYSCATSPACE                                  1
PSAPTEMP16                                   0
SYSTOOLSTMPSPACE                             0
SYSTOOLSPACE                                 1
DEV#DDICD                                    1
DEV#DDICI                                    1
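The ALTER TABLESPACE statements for lowering the high-water mark can also be scripted. The following is a hedged sketch using a dry-run wrapper that only prints the db2 commands; the tablespace name PSAPDAT is an invented example, and you would remove the echo to run this against a real instance.

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "$@"; }

TBSP=PSAPDAT   # illustrative tablespace name, not from the paper

ops=$(
  # Automatic storage tablespace: REDUCE also lowers the high-water mark
  run db2 "ALTER TABLESPACE $TBSP REDUCE MAX"
  # DMS tablespace: lower the high-water mark explicitly
  run db2 "ALTER TABLESPACE $TBSP LOWER HIGH WATER MARK"
)
echo "$ops"
```

Keeping the dry-run wrapper in place lets you review the generated statements before running them during a maintenance window.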

4.4 Tablespace Distribution

The distribution of the overall amount of data in the database among the tablespaces influences the backup runtime. The backup processes multiple tablespaces at a time, but every single tablespace is read sequentially. An equal distribution of data among the tablespaces therefore helps to speed up your backup, especially with DB2 versions up to and including DB2 9.7 FP0. If your tablespaces differ greatly in size, the largest tablespace will likely determine the overall backup runtime. To increase backup performance, you can redistribute your data using R3load to achieve an equal data distribution. You can use the DB6CONV tool to move tables online or offline from one tablespace to another. For more information, see SAP Note 362325.


4.5 Prefetcher Configuration

During normal workload, prefetchers (also called I/O servers) are used to perform asynchronous reads from disk into the buffer pool. Online backups, however, also use prefetchers to read the data from the tablespace containers. That means that, during an online backup, regular applications and the backup compete for the prefetchers. To prevent a performance bottleneck, you can tune the number of prefetchers manually by increasing the database configuration parameter NUM_IOSERVERS.

Rule of thumb: NUM_IOSERVERS = (maximum number of tablespace containers over all tablespaces) x 2

For more information, see the respective information in the IBM DB2 Information Center for your database release:
DB2 V9.1: http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.admin.doc/doc/c0011965.htm
DB2 V9.5: http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.admin.dbobj.doc/doc/c0011965.html
DB2 9.7: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=/com.ibm.db2.luw.admin.dbobj.doc/doc/c0011965.html
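The rule of thumb above can be sketched as a small script. The container counts below are invented sample values; on a real system you would derive them, for example, from the output of db2 list tablespace containers for each tablespace.

```shell
# Invented container counts per tablespace, one number per line
# (on a live system, derive these from the container listings instead).
containers_per_tbsp='4
8
2
6'

# Rule of thumb: NUM_IOSERVERS = (max containers over all tablespaces) x 2
max_containers=$(printf '%s\n' "$containers_per_tbsp" | sort -n | tail -1)
num_ioservers=$((max_containers * 2))
echo "NUM_IOSERVERS = $num_ioservers"

# To apply the value (not executed here; needs a real instance):
# db2 update db cfg for <dbsid> using NUM_IOSERVERS $num_ioservers
```

For the sample counts above, the maximum is 8 containers, so the script suggests NUM_IOSERVERS = 16.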

4.6 Database Size Considerations

The easiest way to reduce the backup runtime is to reduce the database size. You can achieve a reduced database size through one of the following measures:


- Conversion of empty tables to virtual tables: Deferred table creation means that empty tables are replaced by virtual tables. A virtual table is stored as a special view that contains the table structure information. In this way, you avoid the allocation of initial space for an empty table. For more information about deferred table creation, see SAP Note 1151343.
- DB2 compression: DB2 provides several compression techniques that allow you to reduce the disk space allocation for tables (DB2 9.1 and higher), indexes (DB2 9.7), and LOBs (DB2 9.7). For typical SAP databases, you can expect an overall compression ratio of 50-70%.
- Archiving: If you regularly archive data, you reduce the space needed in your database. You can use SAP archiving to archive older data and keep your database small.
- SAP NetWeaver BW nearline storage: As of SAP NetWeaver 7.01 SP6, you can use a separate DB2 database as a nearline storage device for your SAP NetWeaver BW system. Similar to archiving, this keeps your active database small by moving data from the BW database to a separate DB2 database, but the data remains fully accessible for online queries.

If you apply one or more of the database size reduction techniques above you should afterwards compact your tablespaces and reduce the HWM as described in chapter 4.3.

4.7 Backup Tuning Example

The following is an example of a real-life customer backup tuning.

Source system (DB2 V9.5)
  Size:                   1.6 TB
  Compression:            On (compressed after load, no possibility to reduce the high-water mark)
  Tablespaces:            29
  Largest tablespace:     900 GB
  Online backup runtime:  8 h

The system had a size of 1.6 TB with 29 tablespaces. The largest tablespace had a size of 900 GB, and the next smaller tablespace was 300 GB. This means that the tablespace distribution was not optimal. The fragmentation of the tablespaces represented an additional problem: Row compression had been performed after the load, leaving many empty gaps behind. This led to a lot of free space in the tablespaces and a high-water mark that could not be lowered.

To reduce the backup runtime, we performed the following steps: To determine a useful maximum tablespace size, we checked the system for the largest table. Afterwards, we created a new tablespace layout with 48 tablespaces, of which the largest had a size of 71 GB. The next step was the export of the system with R3load. During this step, we changed the tablespace layout in preparation for the import. During the import, we used automatic dictionary creation (ADC, available as of DB2 V9.5) to compress the data during the load and to prevent data fragmentation. The resulting database size was 1.1 TB. The compression rate was lower than the one on the source database, but there was no fragmentation and no free space in the tablespaces.


The next online backup tests showed a runtime reduction to 3 hours and 10 min:

Target system (DB2 V9.5)
  Size:                 1.1 TB
  Compression:          On (compressed during load, high-water mark equal to tablespace size)
  Tablespaces:          48
  Largest tablespace:   71 GB
  Backup time:          3 h 10 min

We then performed two additional tests:
1. During the backup to the file system, we mounted the target file system without file system caching. This reduced the runtime from 3 h 10 min to 2 h 17 min.
2. During full daily workload, we tested different values of NUM_IOSERVERS. With the automatically calculated value of 8, the backup runtime was 3 h 56 min. Increasing NUM_IOSERVERS to 20 reduced the backup runtime slightly to 3 h 6 min.

Test 1: File system caching off on target device (file system)
  Backup time: 2 h 17 min

Test 2: Backup test with parallel workload
  Backup time with NUM_IOSERVERS = AUTOMATIC (8):  3 h 56 min
  Backup time with NUM_IOSERVERS = 20:             3 h 06 min

4.8 Backup Optimization Recommendations

To achieve optimal online backup performance, we recommend the following:
- Keep your database as small as possible through the use of DB2 compression, SAP virtual tables, and frequent archiving.
- Use DB2 9.7 reclaimable storage to compact your tablespaces and lower the tablespace high-water mark.
- Disable file system caching if you are using a CIO-capable file system.
- Increase the number of prefetchers (NUM_IOSERVERS) if you run online backups during normal system operation hours.


5 Basic Backup Strategies and Best Practices

This chapter provides an overview of possible database backup scenarios and strategies in an SAP system environment. Only aspects of database-dependent data are covered. Other important areas such as the backup of SAP executables, SAP archived data, or operating system data are not part of this section. However, these should be part of an overall backup strategy. Data recovery considerations are not the focus of this paper and are therefore only mentioned marginally.

The strategies described here are best practices gained from customer system environments. All hints and considerations given here are general guidelines that you have to evaluate for your specific system environment. You have to review and test a backup strategy on a regular basis. Especially with an increasing database size, backup strategy decisions might have to be reconsidered.

Make sure that you pay special attention to backup considerations in DPF database environments. Since multi-partition databases consist of multiple parts that each have their own particular data containers, log files, database configuration, and recovery history file, you have to consider each partition individually by default. If there are specific DPF aspects to be taken into account or if there are DPF solutions available, we mention them accordingly.

5.1 Avoiding Data Loss and Ensuring Recoverability

One of the most important issues is how database-related data is stored on file systems and storage subsystems. Whatever can be done to avoid the loss or corruption of data should be done. Prevention is always better than a difficult and expensive recovery action. In general, the key factor to avoid data loss is the redundancy of storage. To provide redundant storage for certain types of DB2-related data, there are several technical solutions, for example:
- Mirroring at file system level (for example, performed by a logical volume manager)
- Several levels of redundancy in storage subsystems (represented by certain RAID techniques)
- Double storage of files by means of the database itself (such as the mirroring of the online log directory or double log file archiving)

All kinds of DB2-related data should be stored with some form of redundancy. Mirroring provided by the file system or by the storage system is generally faster than mirroring using software means. If software-provided mirroring is used, you should place the original and the mirrored directory on different file systems or different storage locations. Not only container files and log files should be stored redundantly but also the instance directory (/db2/<SID>/db2<sid>/) including the underlying database directory (/db2/<SID>/db2<sid>/NODEnnnn/SQL00001/) and the home directory of the DB2 admin user (/db2/db2<sid>/).

Data separation helps to guarantee the recoverability of the database. Separation means storing the tablespace data apart from transaction log information and backup image data, as well as storing online log files apart from archived (offline) log files. The reason behind this is that the DB2 restart recovery depends on existing online logs, the restore of an online backup cannot be recovered without archived log files, and all previous backups need archived logs to be rolled forward to a current point in time. The decision on which types of data to store on which separate storage media or storage locations depends on several factors, for example, on the required I/O performance, data safety considerations, retention time, and the expenses for storage media.


5.2 Backup Scheduling and Concurrency Considerations

A backup procedure needs to ensure that the data being backed up is consistent. For an offline backup, this is no problem because there are no concurrent activities on the database. An online backup makes use of the locking mechanisms of the database to ensure data consistency of database objects. The following activities conflict with the locks of an online backup:
- REORG
- LOAD
- TRUNCATE with DROP STORAGE
- IMPORT REPLACE, if DB2_TRUNCATE_REUSESTORAGE=IMPORT is not set
- any not logged operation

5.3 Types of Data to be Backed Up

The following sections describe which types of database-related data you have to consider when you develop your own database backup strategy:
- Transaction log files
- Configuration information
- Recovery history information
- Tablespace data

For each type of relevant data, you will find a short summary of actions to be taken, tools and methods of backup results verification, and information on backup frequency and retention time.

5.3.1 Transaction Log Files

Approach: Online transaction logs reside in the "Path to log files" as displayed in the database configuration (usually /db2/<DBSID>/log_dir/NODEnnnn/). They are archived automatically as soon as they are completely filled with log records. Log file archiving is performed by the DB2 log manager (process/thread name: db2logmgr) or, with the legacy log file management, by the DB2 logging user exit (db2uext2). After archiving, online logs remain in the log directory as they might be needed for rollback purposes.

Note: In case of a log archiving problem, do not manually remove log files from the log directory (/db2/<DBSID>/log_dir/NODEnnnn/) because this will damage your database. Instead, correct the problems of the log archiving process. As a preventive measure, you can specify a temporary log archiving directory using the database configuration parameter FAILARCHPATH. This directory is used if the log archiving destination is unavailable, and it is emptied automatically after the original archiving destination becomes available again.

You can configure the DB2 log manager for direct archiving of transaction logs. That is, a log file is copied directly to a storage management system such as TSM when it is full. You configure direct archiving by setting the database configuration parameter LOGARCHMETH1 or LOGARCHMETH2 to "TSM:" or "VENDOR:". The advantage of direct archiving is that the number of potentially failing components in the archiving process is minimized. Archived logs are also retrieved automatically and directly by the DB2 log manager if they are needed for recovery.

Note: Read SAP Note 1493587 if you want to use "infinite logging" by setting the database configuration parameter LOGSECOND to -1. With infinite logging, online logs that are already archived can be reused (that is, overwritten) even if they might still be required for rollback activities. This increases your log space to an unlimited size, but log files required for rollback might have to be retrieved from the storage management system, and therefore probably from tape first, which can slow down crash recovery.

You can configure the DB2 log manager for indirect archiving of transaction logs. That is, a log file is copied to a disk location as soon as the log file is full. You configure the database for indirect archiving
by setting the database configuration parameter LOGARCHMETH1 or LOGARCHMETH2 to "DISK:". Since the disk location (for example, /db2/<DBSID>/log_archive) fills with archived log files, it must be cleaned by the DB2 tape manager (process/thread name: db2tapemgr) or by a customer-specific solution that moves the log files to tape or to a storage management system. The DB2 tape manager is not enabled for interaction with storage subsystems. If you want to create your own solution to move archived logs from disk to a permanent storage location, consider that as of DB2 Version 8.2, log files are stored in a chain directory (for example, /db2/<DBSID>/log_archive/db2<dbsid>/<DBSID>/NODEnnnn/Cmmmmmmm). The number m of the chain is incremented with each database restore. Moving archived logs from a disk location is not triggered automatically. You should establish an automated solution that is based on a time schedule or, even better, on a free space check against the disk location (file system). The advantage of indirect archiving is that it provides a fast and reliable first archiving step, which prevents the archiving process from being interrupted during periods of heavy system usage. On the other hand, with indirect archiving, you cannot retrieve archived logs automatically for recovery purposes if they have already been moved from disk. Generally, we recommend direct archiving.

A special way of archiving log files is to include them in a DB2 online backup image:

db2 "backup db <dbsid> online include logs"

With this option of the backup command, the range of log files required to roll forward this image (after a restore) to the earliest consistent point in time is included in the backup image. Since these included log files provide only the absolute minimum of rollforward information for this backup image, they are usually kept as a means of "double security" (that is, in addition to the regular log archives). They also allow restoring and rolling forward the database (with only the backup available) in the course of a database copy. Since including logs can lead to problems in situations with heavy update activity during backup processing, you can also exclude logs from being backed up.

Note: If log inclusion into a backup fails, for example, because the respective logs have already been archived and cannot be retrieved in time, the backup aborts with an error message. In this case, the backup image is automatically deleted if it was written to disk.

Verification: The recovery history file contains a protocol entry for every log archiving operation performed. To check for archiving-related entries in the history file, use the following command:

db2 list history archive log [all | since <timestamp>] for db <dbsid>

Alternatively, use the respective information provided in the DBA Cockpit, that is, call transaction DBACOCKPIT and choose Backup and Recovery -> Overview Archived Log Files.

Especially in DPF environments, a query such as the following is helpful to gather log archiving information from all or from particular partitions of a database:

db2 "select firstlog, sqlcode, entry_status from sysibmadm.db_history where operationtype in ('F','1','2') and operation = 'X' and dbpartitionnum = <partition-number> and firstlog >= '<name-of-first-log-file-needed-for-recovery-of-this-partition>'"

You can initiate an on-demand archiving of the currently used online log file using the following command:

db2 archive log for db <dbsid>

You can also use this command to test the log archiving function.

Log inclusion into an online backup image can be checked using the db2ckbkp command as follows:

db2ckbkp -H <backup-image-or-tape-device>


The resulting backup image header information includes the line "Includes Logs -- 1" if logs are included.

The names of the included logs can be retrieved from the backup image as follows (note the eight dots in the grep argument):

db2ckbkp -a <backup-image> | grep "S........LOG" | cut -f2 -d'"' | sort | uniq

Frequency & Retention Time: Archiving log files is always an on-demand action. The components involved in this backup process are therefore either triggered automatically (for example, the DB2 log manager) or have to be automated by you (for example, the DB2 tape manager or customer-specific archiving solutions). All components that you use have to fulfill the performance requirements of periods with the highest system usage and should be monitored.

The retention time requirements for archived log files strongly depend on the database backup cycle. In general, log files have to be retained as long as they are required to recover the oldest database backup in a complete backup cycle. To determine the earliest log file that you have to retain, depending on the date when your earliest backup was taken, use the following list command:

db2 list history backup since <date-of-earliest-backup> for <dbsid>

The entry under "Earliest Log" specifies the beginning of the queue of log files to be retained. If the recovery history file is lost, you can also retrieve the number of the earliest needed log file from the backup image, as the following example shows:

db2ckbkp -a <backup-image> | awk ' /BACKUP.START.RECORD.MARKER/ { getline; print $2 } '

The image must be available on disk or tape. In the output of db2ckbkp, find the line containing the string "BACKUP.START.RECORD.MARKER". The following line contains the number of the earliest needed log file after an "extNum:" field descriptor.
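Extracting the "Earliest Log" value can be scripted, for example when you automate log pruning. The sample text below is an invented stand-in for the relevant lines of the list history output; the real layout may differ slightly, so treat this as a sketch.

```shell
# Invented stand-in for part of the output of:
#   db2 list history backup since <date-of-earliest-backup> for <dbsid>
history_sample='  Comment: DB2 BACKUP DEV ONLINE
  Earliest Log: S0001060.LOG
  Current Log:  S0001067.LOG'

# Pull the value after "Earliest Log:" (prints nothing if the line is absent)
earliest_log=$(printf '%s\n' "$history_sample" | sed -n 's/.*Earliest Log: *//p')
echo "$earliest_log"
```

For the sample above, the script prints S0001060.LOG, the first log file that must be retained.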

5.3.2 Configuration Information

Approach: DB2 database configuration information is part of every DB2 backup and is restored during a DB2 restore if necessary. If you perform a file-level backup (also referred to as "split image", "snapshot backup", or "flash copy backup"), make sure that the database directory containing the database configuration file is included in your backup copy. By default, the database directory is /db2/<DBSID>/db2<dbsid>/NODEnnnn/SQL00001.

However, the profile registry and the database manager configuration information are not included in a DB2 backup image. As already mentioned in the chapter "Backup Instance Configuration and Registry", you can save registry and database manager settings using the db2cfexp command. The resulting export file in ASCII format additionally contains database directory and node directory information.

Verification: Check the database backup containing the database configuration information using the db2ckbkp command or by performing a test RESTORE. An exclusive RESTORE of only the database configuration settings from the backup image is not possible. You can check the database manager and profile registry settings by opening the db2cfexp export file in an editor. The configuration export file contains the profile registry aggregate variable DB2_WORKLOAD=SAP, but not the individual parameters implied by DB2_WORKLOAD.


Frequency & Retention Time: Generally, you should back up and retain configuration settings in connection with the regular database backup. Additionally, if you make ad-hoc configuration changes (for example, in the context of an SAP EarlyWatch service), back up the configuration settings immediately. Configuration changes are not recorded by DB2 logging and are therefore not re-applied during a rollforward action. Therefore, if you have changed configuration settings between two regular backups, record these changes by any means, even by manually writing them down.

In DPF environments, there is one shared (central) instance configuration file. However, the database configuration of each database partition can contain individual settings and must be considered separately.

5.3.3 Recovery History Information

Approach: The content of the DB2 recovery history file is part of every DB2 backup. It is only restored during a DB2 restore if the existing history file is either damaged or empty. To restore the recovery history file separately, you can use the following command:

db2 "restore database <dbsid> history file"

If you perform a file-level backup or "flash copy backup" without using DB2 ACS, make sure that the database directory containing the recovery history file is included in your backup copy. By default, the database directory is /db2/<DBSID>/db2<dbsid>/NODEnnnn/SQL00001. In a multi-partition database (DPF scenario), there is one recovery history file for each database partition.

A lost history file causes problems with all types of automated recovery features, such as an INCREMENTAL AUTOMATIC restore or the automatic retrieval of archived logs during ROLLFORWARD DATABASE and RECOVER DATABASE processing. Unfortunately, the latest available history file, which is included in the most recent backup image, is not suitable for automated recovery up to a current point in time, because it contains neither the latest backup run nor the subsequent log archiving events. Therefore, it is useful to create an up-to-date backup of the recovery history file from time to time. Do not copy the history file using operating system commands such as cp or tar because DB2 processes can hold locks on this file. As an alternative, create an empty tablespace and perform a DB2 backup of this dummy tablespace as frequently as necessary. The tablespace-level backup image contains an up-to-date version of your recovery history file that you can restore separately.

Verification: Test the database backup that contains the recovery history information for restorability using the db2ckbkp command. You can perform complete backup image verification only if you use db2ckbkp with a lowercase command option.

Frequency & Retention Time: Backing up the recovery history information within your regular DB2 backup should be sufficient. In an emergency, you can also recover your database without the history file using basic DB2 features such as RESTORE DATABASE, ROLLFORWARD DATABASE, and db2ckbkp. To use enhanced DB2 recovery features such as the DB2 RECOVER command, perform frequent backups of the history file using the workaround described above. Since you always need the latest available version of your recovery history file, it does not make sense to retain copies of this information for a very long time.
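The dummy-tablespace workaround for backing up the history file can be sketched as follows. The tablespace name DUMMYHIST, the database name, and the backup path are invented examples, and the wrapper prints the db2 commands instead of executing them.

```shell
run() { echo "$@"; }   # dry run; replace the echo to execute on a live system

DBSID=dev              # illustrative database name, not from the paper

plan=$(
  # One-time setup: a small, empty tablespace used only for history backups
  run db2 "CREATE TABLESPACE DUMMYHIST MANAGED BY AUTOMATIC STORAGE"
  # Frequent step: a tiny tablespace-level backup that carries a current
  # copy of the recovery history file
  run db2 "BACKUP DB $DBSID TABLESPACE (DUMMYHIST) ONLINE TO /db2/DEV/histbkp"
)
echo "$plan"
```

Because the dummy tablespace is empty, the resulting backup image is small and the run can be scheduled frequently without noticeable load.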


The amount of data that you keep in your history file can influence the performance of recovery processing. You can regulate the retention using the database configuration parameter REC_HIS_RETENTN [in days; default = 60]. This parameter must be aligned with your database backup cycle. To keep access to the history file as fast as possible, keep the file as small as possible.
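For example, to align the history retention with a backup cycle of roughly two months, you could set the parameter as follows (the SID and the value are placeholders):

```shell
db2 update db cfg for PRD using REC_HIS_RETENTN 60
```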

5.3.4 Tablespace Data


5.3.4.1 Backup Using the DB2 Backup Command
The most common way of saving database content in an SAP system environment is to back up the tablespace data using the DB2 BACKUP command, which has the following advantages:
- High degree of internal parallelism for backup read and write processing.
- A backup history entry is created in the recovery history file and enables automated backup check mechanisms as well as automated recovery features (for example, the DB2 RECOVER command or the DB2 RESTORE INCREMENTAL AUTOMATIC command).
- Backups can be written directly to storage management systems. In this way, they are stored on reliable and cost-efficient media (for example, on tape).
- Backups can be compressed during backup execution to save space on storage media and/or to reduce the data volume that is transferred to the storage subsystem or to the backup device.
- You can check DB2 backup images for restorability without performing a test restore operation.
- Free space in the tablespaces (beyond the tablespace high-water mark) is not stored in the backup image.
- You can use a DB2 backup image to perform a redirected RESTORE and thereby change the storage structure of the tablespaces.

On small databases and on test or development systems, you can use DB2's automatic backup feature, which starts backups automatically in a predefined online or offline maintenance time window, triggered by recoverability and/or performance conditions. To set up Automatic Maintenance backups, call transaction DBACOCKPIT and choose Configuration Settings -> Automatic Backup.

You can easily observe the progress of the backup run using the following command: db2 list utilities show detail

As of DB2 V9.5, you can start a parallelized backup on all database partitions in a DPF environment using the following single command: db2 "backup db <dbsid> on all dbpartitionnums" This type of backup considers the preceding backup execution on the catalog partition and provides all partition backup images with a common backup timestamp.

Since there are logical dependencies between the data objects in several tablespaces, the preferable way of backing up SAP system databases is to take a full database backup. Full database backups in either online or offline mode keep restore time and complexity to a minimum. Nevertheless, a combination of full backups and incremental and/or delta backups is possible, as is a combination of full backups and tablespace-only backups. There is no need for offline backups in your regular backup cycle: with the log retention mode enabled, you can rely on online backups only. Only in some rare database administration cases, for example, after switching on the database log retention mode, is the creation of a full offline backup image required. However, an online backup consumes CPU and memory resources and needs prefetchers (I/O servers) to read the data from the containers. Therefore, it might considerably influence the performance of business transactions running at the same time as the backup.


Approach: Schedule your regular full online backup in a period with low database workload. To find out whether the data read performance is the bottleneck of your backup run, you can tentatively direct a backup to the dummy device /dev/null: db2 "backup database <dbsid> to /dev/null" If the backup performance is satisfactory in this test case, you know that the database is able to provide the data to be backed up fast enough. To schedule a regular full online backup, use scheduling tools such as the UNIX crontab or the SAP DBA Planning Calendar of the DBA Cockpit (transaction DBACOCKPIT, Jobs -> DBA Planning Calendar).

To reduce the amount of data to be backed up during one backup run, you can use the INCREMENTAL or the INCREMENTAL DELTA option of the DB2 backup command:
- With the INCREMENTAL option, only data that was changed since the last full backup run is included in the backup image.
- With the INCREMENTAL DELTA option, only data that was changed since the last successful DB2 backup run of any type or granularity is included.

Note: To run incremental or incremental delta backups, you must set the database parameter TRACKMOD (tracking of data page modifications) to ON.

The amount of time you can save by running incremental (delta) backups is not equivalent to the reduction of the data included in the backup. The backup runtime might be longer than expected because all data pages in the database have to be checked for potential changes. As a rule of thumb, incremental (delta) backups help to reduce backup runtime as long as the largest incremental backup image of one backup cycle or the largest incremental delta backup image does not exceed 20% of the size of a full database backup image.
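The 20% rule of thumb above amounts to a simple size comparison. The following sketch illustrates it; the image sizes are placeholder values, and in practice you would take them from the backup images on disk or from the recovery history file.

```shell
#!/bin/sh
# Compare the largest incremental image of a cycle with the full image.
# Both sizes (in MB) are placeholder values, not measured data.
FULL_MB=204800      # size of the last full backup image
INCR_MB=30720       # size of the largest incremental (delta) image

# Integer percentage of the full image size
PCT=$(( INCR_MB * 100 / FULL_MB ))
if [ "$PCT" -le 20 ]; then
  echo "incremental backups still pay off: $PCT% of the full image"
else
  echo "consider switching back to full backups only: $PCT%"
fi
```

If the percentage regularly exceeds the threshold, the page-checking overhead of incremental processing no longer justifies the smaller image.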

Backups of individual tablespaces or groups of selected tablespaces are technically possible. As of DB2 V9.1, the database provides the appropriate tools to conveniently handle a combined restore of full and partial backup images using the REBUILD WITH ALL TABLESPACES IN DATABASE option of the RESTORE command. Nevertheless, due to recovery time and simplicity considerations, combining full and tablespace backups is not a recommended way of backing up databases in an SAP system environment.

Verification: The recovery history file contains a protocol entry for every DB2 backup run. To check for backup-related entries in the history file, use the following command: db2 list history backup [all | since <timestamp>] for db <dbsid>

Alternatively, use the respective information from the DBA Cockpit: Call transaction DBACOCKPIT and choose Backup and Recovery -> Overview Database Backup. Note that the number of info sections displayed per backup run equals the number of backup devices or sessions used in parallel.

Especially in DPF environments, a query such as the following is useful for gathering backup history information from all or from particular partitions of a database: db2 "select dbpartitionnum, start_time, firstlog from sysibmadm.db_history where operation = 'B' and objecttype = 'D' and operationtype in ('F','N') and devicetype <> 'N' and sqlcode is null and entry_status = 'A' order by dbpartitionnum, start_time desc"

Test the database backup for restorability using the db2ckbkp command. Note that db2ckbkp performs a complete backup image verification only if you use it with a lowercase command option. To make sure that your database can be restored properly, you can also perform a test RESTORE into a different DB2 instance or on a different machine.


Note: Neither the DB2 backup command nor a restore or a db2ckbkp run provides information on potential page corruptions in your database. To check for database consistency, use the offline utility db2dart or run an online check using the INSPECT CHECK DATABASE command.

Frequency & Retention Time: Try to back up your complete database on a daily basis. You can make exceptions for small databases with low requirements regarding recovery time and for test system databases that do not necessarily need to be restored. In these cases, a weekly backup might be sufficient, for example.

Note: This consideration does not cover typical trial-and-error test scenarios where backups are retained to return to a system status before a certain test run. In this case, the intentional use of log file chains can also be helpful. We also do not consider backups that are retained to reconstruct partial data (for example, a single table) of a former state, or backups as a means of system archiving.

Practical experience shows that even after successfully backing up data and checking the backup image, backup data can be lost or damaged. Therefore, we strongly recommend that you keep several generations of database backups. The required retention time of log files directly depends on the earliest full database backup that is kept in your backup cycle. In an extreme case, you must be able to recover this oldest full backup using the archived log files.

Daily full database backups: With respect to the potential time effort for recovery, you usually keep the data backups of the past few days, for example, of one week. In addition, the weekly full backups, for example, from Sundays, should be retained for a longer period of time, approximately for a month, as an iron reserve. The following figure shows that in this case, all archived log files that are required to recover the earliest backup in the cycle have to be retained:


Figure: Example of the retention time of daily full database backups.
Combination of full and incremental (delta) backups: The minimum backup generations to keep are the latest two full database backups together with the subsequent incremental (delta) backups. Incremental backups can only be restored together with the preceding full backup (which could be damaged), and delta backups depend on the whole "queue" of preceding backups back to the most recent full backup. As with daily full backups, it is advisable to retain further full backups, for example, those of the previous month.



Figure: Example of the retention time of daily full / incremental database backups.
Combination of full database backups and tablespace backups: The previous considerations apply here, too, since the recoverability of the complete database depends on the last full backup. Only individually backed up tablespaces can be restored to a later point in time, whereas the rest of the database has to be restored from the last available full backup. Therefore, the focus is again on the generations of full database backups, which should be retained as well. Note that a potential recovery is more complex with a mixture of full database backups and tablespace backups.
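The generation-based retention described above can be sketched with a few lines of shell. The image names below are hypothetical (real DB2 image names carry a full timestamp suffix), and in practice you would also prune the matching entries from the recovery history file.

```shell
#!/bin/sh
# Keep the newest KEEP daily backup images and print the older ones,
# which are candidates for deletion. Image names are hypothetical.
KEEP=7
IMAGES="PRD.20110220 PRD.20110221 PRD.20110222 PRD.20110223 \
PRD.20110224 PRD.20110225 PRD.20110226 PRD.20110227 PRD.20110228"

# Sort newest first, then skip the first KEEP entries
EXPIRED=$(echo "$IMAGES" | tr ' ' '\n' | sort -r | tail -n +$((KEEP + 1)))
echo "$EXPIRED"
```

A weekly "iron reserve" image would simply be excluded from the EXPIRED list before deletion; the sketch only shows the basic newest-N selection.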

5.3.4.2 Backup at File Level


For backup performance reasons, you might decide to use a file level backup of the database using "flash copy" techniques. Basically, a file level backup is based on a fast file duplication mechanism that is provided by the underlying file system or storage subsystem (for example, a snapshot or flash copy solution) in combination with a temporary suspension of all page cleaning activities on the DB2 side. You can only recover file level backups that were created this way with the help of the respective transaction log files. This type of backup processing has the following advantages:
- Only a minimum outage of database activity has to be considered (compared with a potential performance degradation lasting for hours during an online backup run of a large database).
- The high-performance copying mechanisms of the underlying storage solutions are used.
- Mounting a file level backup image back ("over" the lost production database files) is much faster than a native restore of the database.

You can also use a file level database backup that has quickly been taken using the split-mirror approach to create a temporary copy of the database, which is then backed up using the standard DB2 backup command. This way, you combine the minimum database performance impact of a file level copy with the advantages of a native DB2 backup image.

As of DB2 V9.5, an integration of the file level backup processing into one DB2 command is available: db2 backup db <dbsid> use snapshot This backup implementation is based on the DB2 Advanced Copy Services (ACS). For storage solution restrictions in connection with ACS, see the IBM DB2 V9.5 Information Center at: http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.admin.ha.doc/doc/t0052799.html

Note: A split image of your database that is stored somewhere on disk is a comparatively expensive type of backup. You have to consider the question of cost especially with respect to the necessity to retain several backup generations. Therefore, you should use available features to compress the split image (or to transfer data to tape) if possible.

Approach: To store the split image of your database as a backup, you only need additional storage resources. On these storage resources, you have to retain several copy generations of the split image that contain the sapdata<n> directories (database container files) and the database instance directory (/db2/<DBSID>/db2<dbsid>/) with all subdirectories. You must not include online and offline log file paths in the split image. After copying or mounting back the image files over their originals in case of a database damage, you initialize the production database using the db2inidb command with the as mirror option and then roll forward the production database using the production log files.

To create a native DB2 backup image from a file level backup image, follow the first steps for the creation of a hot-standby database: You create a split image that contains the sapdata<n> directories (database container files) and the database instance directory (/db2/<DBSID>/db2<dbsid>/) with all their subdirectories. You mount this split image on a different machine on which you create a corresponding new DB2 instance using the db2icrt command. Afterwards, you initialize the split image database using the db2inidb command with the as standby option. Now, a native DB2 online backup can be taken from that "standby" database.
Note that this database copy is in rollforward pending mode, but it must not be rolled forward before the online backup is finished. For a detailed description of the procedure, see chapter 3.3 "Creating a DB2 Backup Image from a Physical Copy".

Verification: The creation of a file level backup image is generally not logged in the recovery history file. By default, you have to record these backup actions using your own solutions. Only as of DB2 V9.5, the command db2 backup db <dbsid> use snapshot creates a history file entry and uses its own repository for gathering information about backed up objects and their status. This repository can be listed, monitored, and updated using the db2acsutil command (manage DB2 snapshot backup objects). File level backups of the database must be evaluated by a test restore operation on a different machine. DB2 backup images that were created from a split image as described above can be checked using the db2ckbkp utility.

Frequency & Retention Time: Since a file level backup is a type of full database backup, similar recommendations regarding backup frequency and retention time apply as for DB2 full database backups (see above). You should keep a daily file level backup that is retained in several generations as well as some reserve backup images, for example, of a week sequence from the previous month.

Note: Pay special attention to the retention of the respective log files. You can only recover DB2 file level backups that are created "online" using the set write suspend command if the necessary logs are available. The earliest retained file level backup image determines which log files are needed in the most extreme case of recovery.
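The split-image-to-native-backup procedure outlined in the Approach section might be sketched as follows on the auxiliary host. This is a hedged sketch: the SID PRD and the instance user db2prd are assumptions, and the split image is assumed to be mounted already.

```shell
# 1. Create a matching DB2 instance for the mounted split image
#    (instance and fenced user names are assumptions)
db2icrt -u db2prd db2prd

# 2. Initialize the copy as a standby database
db2start
db2inidb PRD as standby

# 3. Take a native DB2 backup from the standby copy; do not roll it
#    forward before the backup has finished
db2 "backup database PRD online use tsm"
```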


6 Special Backup and Restore Scenarios


6.1 Adaptation of the Database Manager and the Database Configuration Parameters after a Restore to a (Smaller) Test System

A common case of regularly restoring backups is the refresh of test systems from production systems. Test systems often have lower hardware capabilities than their production counterparts. To complete the rollforward operation successfully, you might therefore have to adapt some database configuration parameters. After you have restored and rolled forward your target system successfully, it might be advisable to adapt some configuration parameters again. You also have to consider other configuration aspects, for example, when you want to restore your database on a test system that is running on a different host. This chapter provides a collection of configuration aspects that might not all be relevant for you. We recommend, however, that you check them against your scenario and identify those that apply to your environment.

6.1.1 Database Configuration Refresh


DB2 assigns a unique identifier called the database seed to a database when it is created or when a RESTORE is performed and the target database does not yet exist. If the target database already exists and has the same seed as the database in the backup image, the current database configuration is kept during the RESTORE. If the seeds differ, the configuration is overwritten by the database configuration stored in the backup image. In the common scenario of refreshing a test system from a production system, the test database already exists but has a different seed than the production database. Therefore, DB2 copies the database configuration from the backup image each time. If you have adapted your database configuration because the test machine has lower hardware capabilities than the production machine, your changes are overwritten each time you refresh your test system by performing a RESTORE of the production database.
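A common countermeasure is to save the adapted test-system configuration before the refresh and to reapply the relevant values afterwards. The following is only a sketch; the SID TST and the parameter values are examples.

```shell
# Before the refresh: keep a record of the adapted configuration
db2 get db cfg for TST > dbcfg_TST_before_refresh.txt

# After restore (and rollforward): reapply the adapted values,
# for example (values are placeholders):
db2 update db cfg for TST using SORTHEAP 5000 LOCKLIST 20000
```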

6.1.2 Buffer Pools


If you have enabled STMM and set your buffer pools to AUTOMATIC, STMM automatically reduces the buffer pool sizes according to your hardware capabilities. For systems with a fixed buffer pool size, you have to consider the following: While the database configuration is located outside the database in a database configuration file and can be updated without a connection, you can only change the buffer pools using SQL statements, so a CONNECT to the database is required. After a RESTORE of an online backup, however, the database is in rollforward pending mode, in which a database connect is not possible. If the buffer pools on the source system are so large that buffer pools of this size cannot be allocated on the target system due to hardware limitations, DB2 allocates only the small system buffer pools (16 pages), which might not be sufficient to successfully perform the rollforward recovery. To avoid this, you can set the DB2 registry variable DB2_OVERRIDE_BPF to a reduced number of pages according to the hardware capabilities if the original buffer pool size cannot be allocated.
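Setting the override might look as follows. The page count is an example value that must fit the target host's memory.

```shell
# Limit every buffer pool to 50,000 pages during the rollforward
# (the value is an example)
db2set DB2_OVERRIDE_BPF=50000
db2stop
db2start

# After the rollforward has completed, remove the override again
db2set DB2_OVERRIDE_BPF=
db2stop
db2start
```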

6.1.3 Configuration Setting for Memory


When restoring the database to a machine with lower hardware capabilities, memory settings on the source database can be too high for the target machine. Therefore, you might have to adapt the memory settings to complete the rollforward process successfully. Note that the setting of the INSTANCE_MEMORY parameter is not part of the database configuration and it is not restored. Therefore, you have to set the INSTANCE_MEMORY parameter to an appropriate value on your target system depending on your hardware capabilities.


If your source database is enabled for DB2's Self-Tuning Memory Manager (STMM), all memory consumers that are STMM-tunable (the parameters DATABASE_MEMORY, LOCKLIST, MAXLOCKS, PCKCACHESZ, SHEAPTHRES_SHR, SORTHEAP) and that are set to AUTOMATIC are automatically reduced to valid values depending on your hardware configuration. This should avoid problems when you perform a restore and rollforward operation on a test system with lower hardware capabilities.

Alternatively, you can disable STMM and set the memory consumers to fixed values based on the main memory of your source system. If your target machine has lower hardware capabilities, you should then adapt these parameters after the restore and before you start the rollforward recovery. A rule of thumb for an initial setting that allows the rollforward recovery to complete is to reduce these values by the ratio of the main memory of the source and target machines. You can adjust these values later as part of system tuning activities, which are not covered by this white paper. The same applies to the parameter UTIL_HEAP_SZ, which you cannot set to AUTOMATIC. If DATABASE_MEMORY is set to COMPUTED, DB2 calculates the amount of memory using the settings of the other memory consumers and allocates it at database activation time. With respect to the information above, the COMPUTED setting is therefore equivalent to a fixed value setting.

Note: To check the plausibility of your database configuration parameters (also after a reduction because of a RESTORE to a system with lower hardware capabilities), see the following SAP Notes that specify the standard parameter settings for each DB2 release:
- DB2 UDB Version 8: SAP Note No. 584952
- DB2 9.1: SAP Note No. 899322
- DB2 9.5: SAP Note No. 1086130
- DB2 9.7: SAP Note No. 1329179
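The rule of thumb above amounts to scaling each fixed value by the memory ratio. A minimal sketch follows; all numbers are placeholder examples, and SORTHEAP stands in for any fixed memory consumer.

```shell
#!/bin/sh
# Scale a fixed memory parameter by the ratio of target to source
# main memory. All values are placeholders for illustration.
SRC_MEM_MB=65536      # main memory of the production host
TGT_MEM_MB=16384      # main memory of the test host
SRC_SORTHEAP=20000    # fixed SORTHEAP value on the source system

TGT_SORTHEAP=$(( SRC_SORTHEAP * TGT_MEM_MB / SRC_MEM_MB ))
echo "db2 update db cfg for TST using SORTHEAP $TGT_SORTHEAP"
```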

6.1.4 Log Files


Changing the number or the size of the database log files has no effect until the restore and rollforward recovery are completed. Your target system must therefore have enough free file system space to allocate all required log files. After the rollforward recovery is completed, you can reduce the number or size of the log files on your test system. The following table lists the most important database configuration parameters concerning log files together with a short explanation:

LOGARCHMETH1
Specifies the media type of the primary destination for archived log files. Possible values are:
- DISK:<path>
- TSM:<TSM management class>
- VENDOR:<vendor library>
- USEREXIT
- LOGRETAIN

LOGARCHMETH2
Specifies the media type of the secondary destination for archived log files. If this parameter is specified, log files are archived both to this destination and to the destination that is specified by the database configuration parameter LOGARCHMETH1. Note: Only the destinations DISK, TSM, and VENDOR are allowed for this parameter.

LOGARCHOPTS1
Specifies the options for the primary destination specified in LOGARCHMETH1 for archived log files (if required). You can use this parameter, for example, to specify additional TSM parameters, for example, fromnode <node> fromowner <owner>.

LOGARCHOPTS2
Specifies the options for the secondary destination specified in LOGARCHMETH2 for archived log files (if required).

FAILARCHPATH
Intermediate location for log files that cannot be archived to the primary or (if set) the secondary archiving destination because of a media problem affecting these destinations. Note: The specified path must reference an existing disk location.

MIRRORLOGPATH
If MIRRORLOGPATH is configured, DB2 creates active log files in both the log path and the mirror log path. All log data is written to both paths.

If you have set some of these parameters, you should check them for validity on the target system. They might contain settings that are not valid anymore on the target system. Example: MIRRORLOGPATH = /db2/PRD/log_mirror This path is not valid on the target machine as it does not exist. It can be adapted to: MIRRORLOGPATH = /db2/TST/log_mirror

In the context of log file archiving, you have to consider the following aspects:
1. During rollforward recovery, the required log files are retrieved from the backend configured in LOGARCHMETH1 or from the OVERFLOWLOGPATH. Therefore, you have to configure LOGARCHMETH1 or OVERFLOWLOGPATH correctly to complete the rollforward recovery.
2. After completing the rollforward recovery, new log files are stored in the backend that is configured in LOGARCHMETH1. If you want to store the new log files in a different location, change the setting of LOGARCHMETH1 accordingly.

There are further dependencies that can influence the proper functioning of the DB2 storage manager connection, such as the following:
- The test system host must be able to access the storage manager backend.
- The storage manager client has to be installed on the test system host.

If the storage manager backend performance is the bottleneck of the rollforward performance, consider restoring your log files manually to a temporary directory and then adjusting LOGARCHMETH1 to point to that directory. In this way, the required log files are retrieved from there during rollforward recovery. If you have set FAILARCHPATH or MIRRORLOGPATH in your source system, it might be necessary to adjust them on the target system because the paths are usually not valid there.
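If the required logs have been restored manually, the rollforward can also read them from the temporary directory via the OVERFLOW LOG PATH clause, without changing LOGARCHMETH1. The path and SID below are placeholders.

```shell
# Logs were restored manually to /db2/TST/overflow_logs beforehand
db2 "rollforward database TST to end of logs and complete overflow log path (/db2/TST/overflow_logs)"
```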


6.2 Redirected RESTORE

The DB2 BACKUP utility saves the database configuration including the tablespace layout. The DB2 RESTORE utility uses this information to create the database with exactly the same configuration and layout as the source database. However, in some cases you might want to change the database name or the tablespace layout, for example, when creating a test system from the production database. To do so, you can use a redirected RESTORE. A redirected restore operation consists of the following steps:
1) Issue the RESTORE DATABASE command with the REDIRECT option. DB2 reads the configuration of the source database from the backup image.
2) Use the SET TABLESPACE CONTAINERS command to define the tablespace containers for the restored database. DB2 creates and formats the empty database using the containers you defined.
3) Issue the RESTORE command again, this time specifying the CONTINUE option. DB2 then writes the data into the newly created tablespace containers.

Once the RESTORE has successfully finished, the database is in rollforward pending mode. You can now use the ROLLFORWARD command to apply the log files from the source database and bring the new database online. Creating a script for a redirected RESTORE manually can be very time-consuming, especially in SAP environments with 30 or more tablespaces and sometimes hundreds of tablespace containers. To create such a script, you can use the SAP tool brdb6brt or the GENERATE SCRIPT clause of the DB2 RESTORE command.
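The three steps can be sketched as follows. This is a hedged example: the timestamp, tablespace ID, container path, and SIDs are placeholders.

```shell
# Step 1: start the restore in redirect mode
db2 "restore database PRD taken at 20110301120000 into TST redirect"

# Step 2: redefine the containers for each tablespace that needs
#         a new layout (tablespace ID and path are examples)
db2 "set tablespace containers for 2 using (file '/db2/TST/sapdata1/TST#BTABD.000' 30720)"

# Step 3: continue the restore into the new containers
db2 "restore database PRD continue"
```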

6.2.1 Creating a Script for Redirected RESTORE Using SAP Tool brdb6brt
You can use the brdb6brt tool to create a database backup and to generate a CLP script, which you can then use to perform a redirected restore operation of this backup image.

Approach: You execute brdb6brt to take a backup of the database and to retrieve the container layout information. The created script corresponds to the container layout of the database at the time of the backup. You can modify the script according to the requirements of the database to be restored, that is, you can change the number, size, or location of the containers for a RESTORE of the same database. Finally, you perform the redirected restore operation by executing the generated script.

6.2.2 Common Scenarios for the Use of brdb6brt


brdb6brt -BM BACKUP [-ol] [-nn ALL]

This option performs a backup of the database. The switch [-ol] specifies an online backup. If you specify the switch [-nn ALL] in DPF environments (backup on all partitions) and your database version is DB2 V9.5 or higher, a single system view backup is performed. For older database versions, the catalog partition is backed up first, and afterwards all other partitions are backed up in parallel.

brdb6brt -BM RETRIEVE [-replace <OLD_DBSID>=<NEW_DBSID>]

Called with this option, brdb6brt generates a redirected RESTORE script. The tool queries the history file for the last backup and the system catalog for a tablespace list, and generates a script containing the redirected RESTORE command.


For tablespaces that are enabled for automatic storage in an automatic storage database, no set container path statements are generated. In this case, the important section of the redirected RESTORE script is the on <path> clause that specifies the automatic storage paths. The following is an example of such a generated script (some comment lines of the generated script are deleted):

UPDATE COMMAND OPTIONS USING S ON Z ON X72_NODE0000.out V ON;
SET CLIENT ATTACH_NODE 0;
SET CLIENT CONNECT_NODE 0;
ECHO @./X72_NODE0000.scr@;
-- ************************************************************************
-- ** Part 1 : General redirected restore procedure
-- ************************************************************************
RESTORE DATABASE X72
-- ** Instance owner user id ( db2<dbsid> )
-- S##############################
-- USER <user> USING <password>
-- E##############################
-- INCREMENTAL AUTOMATIC
-- ** Path or device where the backup image is stored
-- S##############################
USE TSM OPEN 1 SESSIONS
-- ** Timestamp (when was the backup image taken?)
-- ** use the given format: YYYYMMDDhhmmss
-- S##############################
TAKEN AT 20090910160004
-- E##############################
-- ** Specify the automatic storage path list,
-- ** one or more drives/paths separated by commas
-- S##############################
ON /db2/X72/sapdata4
,/db2/X72/sapdata3
,/db2/X72/sapdata2
,/db2/X72/sapdata1
-- ,<new storage path>
-- E##############################
-- ** If you want to restore into a new database,
-- ** you can uncomment the following lines and specify
-- ** the drive where the new database files should be written.
-- ** Replace the drive with a fully qualified drive name.
-- ** This command will be ignored if you restore into
-- ** a database that already exists.
-- S##############################
-- DBPATH ON /db2/X72
-- E##############################
-- ** New database name
-- S##############################
INTO X72
-- E##############################
-- ** If you want the log files to be written to a new
-- ** directory, uncomment the following line and specify
-- ** the path name where the new primary log files should be written.
-- ** Replace the log path with a fully qualified path name.
-- S##############################
-- NEWLOGPATH /mnt/vol_nfs/db2/X72/log_dir/NODE0000/
-- E##############################
-- ** Specify the number of buffers to be used for the restore procedure
-- S##############################
WITH 2 BUFFERS
-- E##############################


-- ** Specify the size of the buffers used for the restore.
-- S##############################
BUFFER 1024
-- E##############################
REDIRECT
-- ** Specify the degree of parallelism used for restore
-- S##############################
-- PARALLELISM 1
-- E##############################
-- ** If the database should not be set to 'rollforward pending' state
-- ** after the restore action, the following line has to be uncommented.
-- S##############################
-- WITHOUT ROLLING FORWARD
-- E##############################
<...>
-- ************************************************************************
-- ** Part 3 : Complete the restore (and rollforward the database).
-- ************************************************************************
RESTORE DATABASE X72 CONTINUE;
-- ************************************************************************
-- ** If you want to rollforward the database, you have to
-- ** uncomment the 'ROLLFORWARD DATABASE ...' line(s) below.
-- ** For more information about the rollforward process, see
-- ** the documentation for the BRDB6BRT tool or the
-- ** 'ROLLFORWARD DATABASE' command in the Command Reference of DB2
-- ************************************************************************
-- S##############################
-- ROLLFORWARD DATABASE X72 TO END OF LOGS;
-- E##############################
ECHO ***********************************************************;
ECHO ** THE RESTORE PROCEDURE HAS NOW FINISHED SUCCESSFULLY **;
ECHO ***********************************************************;

The following is an example of the redefinition section for container paths in the generated script (DMS tablespace and SMS tablespace):

SET TABLESPACE CONTAINERS FOR 2
-- ************************************************************************
-- ** 'IGNORE ROLLFORWARD' would specify that ALTER TABLESPACE operations
-- ** in the log are to be ignored when performing a roll forward.
-- S##############################
-- IGNORE ROLLFORWARD CONTAINER OPERATIONS
-- E##############################
USING (
-- ** Container information for DMS tablespace [2] x72#BTABD
-- ************************************************************************
-- ** Tablespace Content Type = All permanent data. Regular table space.
-- ** current total pages         : 30720
-- ** currently used pages        : 30450
-- ** current high water mark     : 30450
-- ** current page size (bytes)   : 16384
-- ** current extent size (pages) : 2
-- ************************************************************************
-- ** Container information
-- ** The type of the containers can be changed; valid
-- ** types are FILE and DEVICE.
-- ** If you want to add a container, separate the new
-- ** container line by a comma.
-- **


-- ** type | name | size
-- S##############################
FILE /db2/X72/sapdata1/X72#BTABD.000 30720
--,FILE <new file> <size>
-- E##############################
);
SET TABLESPACE CONTAINERS FOR 5
-- ************************************************************************
-- ** 'IGNORE ROLLFORWARD' would specify that ALTER TABLESPACE operations
-- ** in the log are to be ignored when performing a roll forward.
-- S##############################
-- IGNORE ROLLFORWARD CONTAINER OPERATIONS
-- E##############################
USING (
-- ** Container information for SMS tablespace [5] PSAPTEMP16
-- ************************************************************************
-- ** Tablespace Content Type = System Temporary data
-- ** current total pages         : 1
-- ** currently used pages        : 1
-- ** current high water mark     : 0
-- ** current page size (bytes)   : 16384
-- ** current extent size (pages) : 2
-- ************************************************************************
-- ** Container information
-- ** Do not change the type of the container(s).
-- ** If you want to add a container, separate the new
-- ** container line by a comma.
-- **
-- ** type | name
-- S##############################
PATH /db2/X72/saptemp1/NODE0000/temp16/PSAPTEMP16.container000
--,PATH <new directory>
-- E##############################
);

Using the replace option
The option -replace <OLD_DBSID>=<NEW_DBSID> is very helpful because you can use it to automatically replace the system ID in the appropriate locations in the script. Note that not all occurrences of <OLD_DBSID> are replaced.

6.2.3 Example
In the following, you find an example of how to change the name of the target database from PRD to QAS and the container location from /db2/PRD to /db2/QAS:

brdb6brt -s PRD -bm RETRIEVE -replace PRD=QAS,db2prd=db2qas

Note that a script for a redirected RESTORE must match the backup image in terms of structure. This means that if structural changes are made in the database (for example, a new tablespace or container is created), a current redirected RESTORE script cannot be used with an older backup (that is, with a backup taken before the structural change) without adaptations. The following command is an example of how to perform a backup first and then create the script for a redirected restore operation:

brdb6brt -bm BOTH
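Because the -replace option does not substitute every occurrence of the old SID, it is worth scanning the generated script before using it. The following is a minimal sketch; the script file name and its two sample lines are hypothetical, and the here-document only simulates a leftover occurrence so the example is self-contained:

```shell
# Simulate two lines of a generated script in which one PRD occurrence
# was left behind by the -replace option (file name is hypothetical).
cat > RETRIEVE_QAS.scr <<'EOF'
-- NEWLOGPATH /db2/QAS/log_dir/NODE0000/
FILE /db2/PRD/sapdata1/QAS#BTABD.000 30720
EOF

# List the lines that still mention the old SID and need manual editing.
grep -n 'PRD' RETRIEVE_QAS.scr
```

In a real migration, you would run the grep against the script that brdb6brt actually generated and edit the reported lines by hand.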


6.2.4 Creating a Script for Redirected RESTORE from a Backup Image


As of DB2 V9.1, you can also create a redirected RESTORE script from an existing backup image (as opposed to using the brdb6brt tool, which retrieves the information from the source database). Creating a redirected RESTORE script from an existing backup image has the advantage that you can use this method even if the source database has become unusable. You only require an existing offline backup image to perform the redirected RESTORE. However, with this method you cannot automatically replace the database and instance names. Instead, you have to do this manually.

Example:

db2 restore db XEG from /db2/XEG/backup taken at 20091003 into XEG redirect generate script XEG.clp

For more information, see "Performing a redirected restore using an automatically generated script" in the DB2 Data Recovery and High Availability Guide and Reference.
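The manual replacement can be supported by a simple text substitution. The following sketch uses the file name XEG.clp and the new SID YEG from the examples in this section; the here-document only simulates two lines of the generated script so the example is self-contained, and the sed invocation is just one possible way to do the replacement:

```shell
# Simulate two lines of the generated redirected RESTORE script
# (in practice, XEG.clp is created by 'db2 restore ... generate script').
cat > XEG.clp <<'EOF'
RESTORE DATABASE XEG
-- NEWLOGPATH '/db2/XEG/log_dir/NODE0000/'
EOF

# Rewrite only the path occurrences of the old SID. The source database
# name in 'RESTORE DATABASE XEG' must keep the old SID and is left as-is.
sed -e 's|/db2/XEG/|/db2/YEG/|g' XEG.clp > XEG_to_YEG.clp
cat XEG_to_YEG.clp
```

Always review the result manually; depending on your layout, further occurrences (for example, the instance name) may also need to be adapted.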

6.2.5 Adapting the Generated Redirected RESTORE Script


In the following, you find examples of a potential script for a redirected RESTORE. The first output shows the information required for step 1 of the redirected restore operation:

-- ************************************************************************
-- ** automatically created redirect restore script
-- ************************************************************************
UPDATE COMMAND OPTIONS USING S ON Z ON XEG_NODE0000.out V ON;
SET CLIENT ATTACH_DBPARTITIONNUM 0;
SET CLIENT CONNECT_DBPARTITIONNUM 0;
-- ************************************************************************
-- ** automatically created redirect restore script
-- ************************************************************************
RESTORE DATABASE XEG
-- USER <username>
-- USING '<password>'
LOAD '/usr/lib/libnsrdb2.so' OPEN 2 SESSIONS
-- OPTIONS '<options-string>'
TAKEN AT 20090619121444
-- ON '/db2/XEG/sapdata1'
-- DBPATH ON '<target-directory>'
INTO YEG
-- NEWLOGPATH '/db2/XEG/log_dir/NODE0000/'
-- WITH <num-buff> BUFFERS
-- BUFFER <buffer-size>
-- REPLACE HISTORY FILE
-- REPLACE EXISTING
REDIRECT
-- PARALLELISM <n>
-- WITHOUT ROLLING FORWARD
-- WITHOUT PROMPTING

In this file, you can make further changes; for example, you usually have to change the NEWLOGPATH parameter to fit the new database name.

The next output is an excerpt of the SET TABLESPACE CONTAINERS information that is required for step 2 of the redirected RESTORE. The script contains the information for all DMS and SMS tablespaces based on the layout of the source database. We only show the information for one tablespace (XEG#STABD); all other tablespaces have similar entries.


<...>
-- ************************************************************************
-- ** Tablespace name                = XEG#STABD
-- ** Tablespace ID                  = 6
-- ** Tablespace Type                = Database managed space
-- ** Tablespace Content Type        = All permanent data. Large table space.
-- ** Tablespace Page size (bytes)   = 16384
-- ** Tablespace Extent size (pages) = 2
-- ** Using automatic storage        = No
-- ** Auto-resize enabled            = Yes
-- ** Total number of pages          = 311040
-- ** Number of usable pages         = 311038
-- ** High water mark (pages)        = 310586
-- ************************************************************************
SET TABLESPACE CONTAINERS FOR 6
-- IGNORE ROLLFORWARD CONTAINER OPERATIONS
USING (
FILE '/db2/XEG/sapdata1/NODE0000/XEG#STABD.container000' 311040
);
<...>

The example above shows the current layout for tablespace XEG#STABD. Usually, you change the DBSID in the container path; in our example, /db2/XEG/sapdata1 has to be changed to /db2/YEG/sapdata1. In addition to changing the container path, you can also modify the size and/or number of containers. For example, you could create two containers for XEG#STABD instead of one large container as in our example. When changing the size or number of containers, you always have to keep the tablespace high water mark in mind, because a RESTORE cannot lower the high water mark. To simplify the process, the high water mark information is written to the output for every tablespace.

The following output shows the command that is required in the third and final step of the redirected restore operation: the RESTORE CONTINUE command. Note that you have to provide the database name of the source system.

-- ************************************************************************
-- ** start redirected restore
-- ************************************************************************
RESTORE DATABASE XEG CONTINUE;
-- ************************************************************************
-- ** end of file
-- ************************************************************************
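The high-water-mark constraint mentioned above can be verified with a quick calculation before the restore is started. The following shell sketch uses the page counts from the XEG#STABD example; the two-container split is hypothetical:

```shell
# High water mark of XEG#STABD as reported in the generated script (pages).
HWM=310586
# Hypothetical new layout: two containers instead of one large one.
NEW_CONTAINER_PAGES="155520 155520"

# Sum the pages of the planned containers.
total=0
for pages in $NEW_CONTAINER_PAGES; do
  total=$((total + pages))
done

# A RESTORE cannot lower the high water mark, so the new total
# must be at least as large as the high water mark.
if [ "$total" -ge "$HWM" ]; then
  echo "layout OK: $total pages >= high water mark $HWM"
else
  echo "layout too small: $total pages < high water mark $HWM"
fi
```

Repeating this check for every tablespace whose container layout you change avoids a failing RESTORE CONTINUE later on.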


6.2.6 Performing the Restore


Once you have made all the necessary changes, you can transfer the script to the DB2 instance to which you want to restore the database. On the target system, you can then execute the script using the DB2 command line processor (CLP):

db2 -tvf <restore_script>


Copyrights, Trademarks & Disclaimer


© Copyright IBM Corporation 2011. All Rights Reserved. All trademarks or registered trademarks mentioned herein are the property of their respective holders.
The information in this presentation may concern new products that IBM may or may not announce. Any discussion of OEM products is based upon information which has been publicly available and is subject to change. The specification of some of the features described in this presentation may change before the General Availability date of these products.

REFERENCES IN THIS PUBLICATION TO IBM PRODUCTS, PROGRAMS, OR SERVICES DO NOT IMPLY THAT IBM INTENDS TO MAKE THESE AVAILABLE IN ALL COUNTRIES IN WHICH IBM OPERATES. IBM MAY HAVE PATENTS OR PENDING PATENT APPLICATIONS COVERING SUBJECT MATTER IN THIS DOCUMENT. THE FURNISHING OF THIS DOCUMENT DOES NOT IMPLY GIVING LICENSE TO THESE PATENTS. The following terms are registered trademarks of International Business Machines Corporation in the United States and/or other countries: AIX, AIXwindows, DB2, e(logo), IBM, IBM(logo), InfoSphere Warehouse, InfoSphere Balanced Warehouse, Netfinity, Tivoli(logo), and WebSphere. The following terms are trademarks of International Business Machines Corporation in the United States and/or other countries: AIX/L, AIX/L(logo), DB2 Universal Database, Intelligent Miner, POWER5 Architecture, POWER6 Architecture, pSeries, Tivoli Enterprise, TME 10, xSeries. A list of trademarks owned by IBM may be found at http://www.ibm.com/legal/copytrade.shtml. NetView, Tivoli and TME are registered trademarks and TME Enterprise is a trademark of Tivoli Systems, Inc. in the United States and/or other countries. Microsoft, Windows, Windows NT and the Windows logo are registered trademarks of Microsoft Corporation in the United States and/or other countries. SAP, mySAP, SAP NetWeaver, SAP NetWeaver BI, SAP NetWeaver BW, SAP BW, SAP R/3, SAP SCM, SAP SEM and other SAP products and services mentioned herein are trademarks or registered trademarks of SAP AG in Germany and in several other countries. More about SAP trademarks at http://www.sap.com/company/legal/copyright/trademark.asp. UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group. LINUX is a registered trademark of Linus Torvalds. Intel and Pentium are registered trademarks and MMX, Itanium, Pentium II Xeon and Pentium III Xeon are trademarks of Intel Corporation in the United States and/or other countries.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States and/or other countries. Other company, product and service names may be trademarks or service marks of others. Information is provided "AS IS" without warranty of any kind. Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.
