
BACKUP AND RECOVERY OPTIMIZATION


Stephan Haisley, Center Of Expertise, Oracle Corporation

INTRODUCTION
The objective of this paper is to explain the factors that can affect the performance of backup, restoration and
recovery activities. It also aims to give you something to think about within the realms of backup and recovery
operations. Most database administrators (DBAs) spend little time on backup and recovery performance because, with
luck, they will never need to restore a failed database. Backup performance is usually only examined when the time
window allocated to backups is too small, but with the new backup duration feature in Oracle Database 10g it is
possible to tell RMAN how much time to spend on the backup; when this time has expired RMAN will stop, and the next
time it runs it can restart from where it left off. Restoration and recovery performance is often only examined when
it is too late, i.e. the database has been recovering for 6 hours, it still hasn't finished, and it needs to open
NOW! This paper identifies areas you can adjust and test in order to improve backup, restoration and recovery
performance.
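
As a minimal sketch of that duration feature (standard Oracle Database 10g RMAN syntax; the two-hour window is an
arbitrary illustration), the PARTIAL option prevents RMAN raising an error if the backup does not complete within the
window:

RMAN> BACKUP DURATION 2:00 PARTIAL MINIMIZE TIME DATABASE;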

This paper assumes that the reader uses Recovery Manager (RMAN) to manage the backups of their Oracle databases, so
the performance of user-managed (non-RMAN) backups will not be discussed. All tests carried out in this paper use
Oracle 10.1.0.3 on SuSE Linux Enterprise Server 7 on a dual Intel processor machine. This was not a high-end or large
server, so the tests are kept simple to illustrate the points.

Throughout this paper the amount of CPU time used for an activity is shown during testing. This value is retrieved
from v$sesstat for the statistic 'CPU used by this session' (statistic# 12), where the session is the one created
when a channel is allocated by RMAN. This shows the amount of CPU time used for reading, checking/compressing and
writing to the backup set.
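
A query along the following lines can be used to watch that statistic for the channel sessions; note the value is
reported in centiseconds, and the filter on the program column is an assumption about how RMAN sessions identify
themselves on your platform:

SQL> SELECT s.sid, st.value/100 AS cpu_secs
     FROM v$session s, v$sesstat st, v$statname sn
     WHERE st.sid = s.sid
       AND st.statistic# = sn.statistic#
       AND sn.name = 'CPU used by this session'
       AND s.program LIKE 'rman%';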

NOTE: The test results shown in this paper are not official Oracle performance figures and are simply
used to demonstrate performance characteristics of backup, restoration and recovery operations.
Mileage will vary greatly depending on your own hardware, operating system, network and database
configurations.

RMAN OVERVIEW
RMAN was introduced with Oracle 8.0 as a tool that allows the DBA to easily manage backup and recovery for their
Oracle databases. There have been improvements with each new release of Oracle, and because RMAN is built into the
RDBMS kernel it can take advantage of various database features such as corruption validation checks.

For a thorough description of the RMAN architecture and features, please refer to the Oracle Database Backup and
Recovery Basics guide. This paper assumes a basic understanding of the main RMAN components and terminology, and will
only discuss relevant features when appropriate.

HOW RMAN TAKES A BACKUP


RMAN allows the DBA to take a backup of datafiles, control files, SPFILE and archivelog files. The backups created are
stored in one of two formats:

Page 1 of 18
Backup and Recovery Optimization

1. Image Copies – This is an exact copy of the file being backed up, byte for byte. The image copies that RMAN
creates can be used as replacements for the original files without using RMAN to restore them. They can only be
stored on disk. Because they are created by copying the original file without modification, there is very little
optimization that can be done to speed up their creation. For this reason image copies will not be discussed further
in this paper.
2. Backup Sets – This is a logical grouping of datafiles, archivelogs, controlfiles or SPFILE (datafiles and
archivelogs cannot be combined in the same backup set). Backup sets can be created on disk or on tape using the Media
Management Layer (MML), a software layer that provides the interface from RMAN to a tape library. This is the type of
backup discussed throughout this paper.

When RMAN creates a backup set it creates one or more backup pieces. These pieces contain the actual data from the
datafiles, archivelogs, controlfile or SPFILE. When the backup pieces are created, the data being backed up is
multiplexed together, so a single backup piece will contain data from one or more backup objects (datafiles,
archivelogs, controlfile, SPFILE). Because of this, only RMAN can read the data contained in the backup pieces.

During backup set creation of datafiles, RMAN scans all the way through each datafile and does not back up the data
blocks that have never been used (have never contained data). This reduces the size of the backup pieces dramatically
for datafiles with large amounts of free space. This feature is called unused block compression. Data blocks that
once contained data but are now empty because the rows were deleted will still be backed up.

INCREMENTAL BACKUPS
An incremental backup is a backup method that allows you to back up only those data blocks in the datafiles that have
changed since the last incremental backup. Oracle provides two levels (0 and 1; there were five levels before Oracle
Database 10g) of incremental backup, as well as two different types:

1. Differential – All datablocks modified since the last level 0 or level 1 are backed up. This is the default mode of operation.
2. Cumulative – All datablocks modified since the last level 0 are backed up.
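
For reference, the following commands show how each type is requested (standard RMAN syntax; differential is used
when neither keyword is given):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;             # base level
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;             # differential (default)
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  # cumulative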

Figure 1: Differential Incremental Backups (level 0 backups on Sunday and Saturday; level 1 differential backups
Monday through Friday)


Figure 2: Cumulative Incremental Backups (level 0 backups on Sunday and Saturday; cumulative level 1 backups Monday
through Friday)

Due to the reduced volume of data backed up, differential backups will most often be smaller and also run faster than
full or level 0 backups. Cumulative backups normally take longer than differential backups because they back up more
data, but during a restoration they will run faster because fewer incremental backups need to be restored.
A simple test shows the difference in backup set size and time taken using differential and cumulative backups. After
a level 0 backup was taken, a series of updates was run from a script that would update a similar number of data
blocks each time. An incremental backup was then taken. The update script was run a second time, followed by another
incremental backup. This was repeated a third time to create three incremental backups. The whole test was run twice,
once using differential and once using cumulative backups. The results are shown in the table below:

Backup #  Incremental Type  Incremental Level  Size of BS (8192-byte blocks)  Time (secs)  CPU (secs)
0         Base level        0                  778112                         626          227.20
1         Differential      1                  42375                          312          82.93
2         Differential      1                  42370                          312          82.65
3         Differential      1                  42369                          312          82.45
0         Base level        0                  778112                         628          226.09
1         Cumulative        1                  42371                          314          80.61
2         Cumulative        1                  49605                          315          83.70
3         Cumulative        1                  60176                          321          85.33
Table 1: Speed difference between Differential and Cumulative incremental backups

As you might expect, the time taken and the size of the cumulative backups grew, because each one backs up all data
blocks changed since the base level 0 backup. When looking at the timing of your own backups using the
ELAPSED_SECONDS column in
v$backup_set, it is important to note the amount of time spent taking the level 0 in comparison to the level 1
incrementals. If the time taken for a level 1 approaches the time for a level 0, it makes practical sense to switch
to all level 0 backups, because of the speed increase when recovering the database.
The same backups shown in Table 1 were used to demonstrate the difference in time it takes to restore the three differential
backups and the single cumulative backup. Table 2 below shows the results:

Incremental Type  Number of BS restored  Time (secs)  CPU (secs)
Base level 0      1                      626.67       210.85
Differential      3                      98.67        23.00
Base level 0      1                      629.33       209.21
Cumulative        1                      43.00        11.05
Table 2: Performance differences during restoration

The timing of the restorations (which are carried out by RMAN when a RECOVER command is issued) for the incremental
backups clearly shows that cumulative incremental backups can save a lot of time during recovery. It is up to you to
check the timing of your backups and recoveries to see whether using cumulative backups would help reduce recovery
times. The time taken to recover/restore using RMAN backups can be seen in the ELAPSED_SECONDS column of the
v$session_longops view. Note that the contents of this view are cleared when the instance is shut down, and rows may
be replaced by new sessions over time.
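
For example, a query such as this (a sketch; the opname filter assumes the usual 'RMAN: ...' operation labels) shows
the elapsed time of RMAN operations:

SQL> SELECT sid, opname, elapsed_seconds, time_remaining
     FROM v$session_longops
     WHERE opname LIKE 'RMAN%'
     ORDER BY elapsed_seconds DESC;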

FACTORS AFFECTING BACKUP AND RESTORATION PERFORMANCE


There are a number of factors that can influence backup and restoration performance:
1. Channel Configuration
2. Size of memory buffers used in creating the backup sets
3. Speed of backup devices (tape or disk)
4. Amount of data being backed up
5. Amount of block checking features enabled
6. Use of compression

The process of applying recovery to the database using archivelogs and incremental backups will be discussed in a later section.


1. CHANNEL CONFIGURATION
A channel is the communication pathway from RMAN to the physical backup device, whether disk or tape. It is
recommended to match the number of channels to the number of physical devices available for the backup; this applies
particularly to tape devices. When multiple channels are allocated during the backup, each channel creates its own
backup sets. Oracle tries to distribute the datafiles amongst the channels with the aim of creating evenly sized
backup sets. If you only have two tape devices but allocate three channels, then depending on the MML, either one
channel will always be waiting for a device to be freed by another channel before it can begin creating its backup
set, or two backup sets will be interlaced onto the tape. Interlacing backups in this way may give quicker backup
times but will slow down restore times dramatically. It is also important to consider that the available tape devices
may be needed by another database that has to be restored while a backup is running. Make sure there is enough backup
device bandwidth for emergency recoveries to occur at any time.

By using the automatic channel parallelism option RMAN will:


• Split the files to be backed up amongst the channels
• Try to even out, amongst the channels, the disks being read that contain those files
• Attempt to make each backup set the same size

Automatic channel parallelism is documented in chapter 2 of the Backup and Recovery Advanced Users Guide 10g.
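
For example, with two physical tape drives the configuration might look like this (standard CONFIGURE syntax):

RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 2;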
Adjusting the FILESPERSET parameter for channels may make some difference to the speed of backups, but it can make a
major difference to restoration times when not all of the datafiles are being restored. This is demonstrated with a
test that takes several level 0 backups, each with a different number of files per set. A datafile is then deleted,
and timing information is gathered when each of the backup sets is used to restore it. The results are shown below in
Table 3:

#Files in BS  BS Size (blocks)  Restored file size (blocks)  Time (secs)  CPU (secs)
8             702320            97727                        132          39.42
4             658221            97727                        110          36.92
2             132773            97727                        82           29.92
1             97730             97727                        74           25.62
Table 3: Effects of FILESPERSET on restore speed

It is obvious from the results that keeping FILESPERSET at a smaller value reduces the time needed to restore a
single file. If the restoration involved the entire database, having more backup sets may increase the time it takes
to restore, due to more requests being sent to the MML. This may not be a significant performance decrease, but it
should be monitored before permanently reducing the FILESPERSET of your backups.
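
The value can be set per backup command; the following sketch mirrors the two-files-per-set case from the test above:

RMAN> BACKUP DATABASE FILESPERSET 2;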

2. SIZE OF MEMORY BUFFERS


When RMAN takes a backup of datafiles it must read each block into an input/read buffer. The block is checked to make
sure it needs to be backed up, and then various block validation checks are made in order to detect corruption. The
block is then copied into the output/write buffers. For archivelogs, each logfile block also needs to be read and
validated before it is written to the backup. The write buffers store the multiplexed datablocks or archivelog
blocks, which are then written to the backup media (disk or tape). This is shown in Figure 3 below:


Figure 3: Use of memory buffers during a backup (blocks are read from the datafiles into input buffers, 4 per
datafile, then copied to output buffers, 4 per channel, which are written to the backup device)

BACKUP READING BUFFERS


Oracle has documented the size and number of read buffers (they will always be disk buffers, because Oracle datafiles
reside on disk) used during a backup in the 10g Backup and Recovery Advanced User's Guide:

Number of files per allocated channel  Buffer size
Files ≤ 4                              Each buffer = 1MB; total buffer size for the channel is up to 16MB
4 < Files ≤ 8                          Each buffer = 512KB; total buffer size for the channel is up to 16MB.
                                       The number of buffers per file depends on the number of files.
Files > 8                              Each buffer = 128KB, with 4 buffers per file, so each file has a 512KB buffer
Table 4: Read buffer allocation algorithm

The parameter that adjusts the number and size of read buffers allocated during the backup is the channel parameter
MAXOPENFILES. This specifies the maximum number of datafiles that RMAN will concurrently open for input into the
backup. The default value is min(8, number of files in the backup set).
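
It can be set as part of the channel configuration, for example (standard syntax; the value 4 is an arbitrary
illustration):

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 4;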


To show how the algorithm translates into actual buffer allocation, the table below shows the buffers being split
amongst datafiles and backup sets when backing up a database with different values of MAXOPENFILES:

MAXOPENFILES  Block size (bytes)  Buffer size (KB)  #Buffers per file  #Files open at one time  Total buffer size (MB)
1             8192                1024              16                 1                        16
2             8192                1024              8                  2                        16
3             8192                1024              5                  3                        15
4             8192                512               8                  4                        16
5             8192                512               6                  5                        15
6             8192                512               5                  6                        15
7             8192                512               4                  7                        14
8             8192                512               4                  8                        16
9             8192                128               4                  9                        4.5
10            8192                128               4                  10                       5
Table 5: Disk Read Buffer Allocations for Datafile Backups

The number and size of the buffers allocated for each file (datafile and archivelog) can be seen using the following
query:
SELECT set_count, device_type, type, filename, buffer_size, buffer_count, open_time, close_time
  FROM v$backup_async_io   -- or v$backup_sync_io
 ORDER BY set_count, type, open_time, close_time;

The algorithm that Oracle uses to allocate the read buffers seems to be adequate, but it is worth monitoring
v$backup_async_io and v$backup_sync_io to see how much memory is being allocated to your backup read buffers, to make
sure you don't run into memory starvation issues. When running the tests shown in Table 5, there was no change in the
amount of time taken to create the backup.

BACKUP WRITING BUFFERS


The backup writing buffers are sized differently for tape and disk devices:

Device Type  Number of Buffers per Channel  Buffer Size  Total Buffers Allocated
DISK         4                              1MB          4MB
SBT          4                              256KB        1MB
Table 6: Default write buffer allocation

The SBT (tape device) write buffers are smaller than the disk channel buffers because tape devices write more slowly,
so a bigger memory area is not required. However, it is possible to modify the size of the write buffers allocated to
channels, which can possibly show some performance improvement. Many modern tape devices are buffered with some sort
of cache, so increasing the RMAN buffers should increase performance.


To demonstrate this I ran three backups of a database with different write buffer sizes. The first backup was taken
using the standard buffer settings (1MB buffer). The next two backups were taken using a smaller and a larger write
buffer size, which was altered using the following commands:

Smaller buffer (32KB * 4 = 128KB buffer):

configure channel device type sbt parms='BLKSIZE=32768';

Larger buffer (512KB * 4 = 2MB buffer):

configure channel device type sbt parms='BLKSIZE=524288';

The following table shows the results of the backups (viewed in V$BACKUP_SYNC_IO or V$BACKUP_ASYNC_IO):

Total Buffer Size (bytes)  I/O Count  I/O Time (100ths sec)
131,072                    60564      6174
1,048,576 (default)        7571       5959
2,097,152                  3786       5053
Table 7: I/O Rates with varying write buffer sizes

Because I was using the test SBT interface, which actually creates a backup on disk through the tape layer API, the
backup time didn't really differ. With real tape devices, increasing the buffer would have reduced the number of I/O
calls to the MML significantly, and consequently the amount of I/O time would also have been reduced. It is
recommended to monitor the I/O rates being achieved and then adjust the buffer size to a more suitable value using
the BLKSIZE channel parameter. Of course, it is not possible to increase the I/O rate beyond what the physical device
is capable of. This is explained in more detail in a later section.

SO WHERE IS THE MEMORY ALLOCATED FROM?


The memory for the read and write buffers is allocated from the PGA unless I/O slaves are being used. Each channel
allocated creates a separate connection to the database and has its own PGA. If I/O slaves are used, the memory is
allocated from the shared pool, because the slaves need to share data between themselves. If a large pool has been
allocated, using the LARGE_POOL_SIZE parameter, it will be used instead of the shared pool; using the large pool is
recommended to reduce contention in the shared pool.
I/O slaves should only be used if the operating system does not support asynchronous I/O (most do, in fact, support
it). Oracle will automatically use asynchronous I/O for backups if it is available, falling back to synchronous I/O
otherwise; which one is in use can be seen in V$BACKUP_SYNC_IO and V$BACKUP_ASYNC_IO.

The initialization parameters that control asynchronous I/O are:

• tape_asynch_io
• disk_asynch_io

If these parameters are set to TRUE, yet the backup appears in V$BACKUP_SYNC_IO, then the operating system may not be
enabled for asynchronous I/O. This is the case on my test Linux server using the SBT channel type. If asynchronous
I/O is not available for tape devices due to OS limitations, it is recommended to use I/O slaves, which emulate
asynchronous I/O. When using tape devices, the goal is to keep the tape streaming.
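
A sketch of enabling tape I/O slaves and providing a large pool follows (this assumes an SPFILE is in use; the 64MB
size is an arbitrary illustration, and BACKUP_TAPE_IO_SLAVES is a deferred system parameter):

SQL> ALTER SYSTEM SET backup_tape_io_slaves = TRUE DEFERRED;
SQL> ALTER SYSTEM SET large_pool_size = 64M SCOPE=SPFILE;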


What is Tape Streaming?


If enough data is fed to the tape drive fast enough, the drive will write continuously, always having more to write
at the next position on the tape once the current write is through. The drive motors can keep spinning continuously
at full write speed, so the tape just 'streams' past the heads as it is written.
If there is not enough data to keep the tape streaming, the drive will normally stop when no data is received, rewind
back to the position of the last block written, and then wait for new data to arrive. Constantly moving the tape
forwards and backwards reduces the life of the tape, and can also slow down backup times.

With synchronous I/O the channel process waits for each write to tape to complete before continuing to fill the read
buffers, and while the read buffers are being filled the tape device is waiting for the next set of write buffer
data. This prevents continuous tape streaming and is not an optimal way of taking backups. If asynchronous I/O is not
available, it is possible to simulate it using I/O slaves, via the initialization parameter BACKUP_TAPE_IO_SLAVES.
This way, the channel process hands the writes to the slave processes and does not wait for them to complete before
refilling the buffers.
I ran a full database backup using SBT with BACKUP_TAPE_IO_SLAVES set first to FALSE and then to TRUE. Here are the
results when selecting from V$BACKUP_SYNC_IO and V$BACKUP_ASYNC_IO:

1. BACKUP_TAPE_IO_SLAVES = FALSE

SQL> SELECT set_count, set_stamp, device_type, type, elapsed_time, bytes
     FROM v$backup_async_io
     WHERE type='OUTPUT';

no rows selected

SQL> SELECT set_count, set_stamp, device_type, type, elapsed_time, bytes
     FROM v$backup_sync_io
     WHERE type='OUTPUT';

SET_COUNT  SET_STAMP  DEVICE_TYPE  TYPE    ELAPSED_TIME       BYTES
---------  ---------  -----------  ------  ------------  ----------
      386  549510246  SBT_TAPE     OUTPUT         61500  6386450432

2. BACKUP_TAPE_IO_SLAVES = TRUE

SQL> SELECT set_count, set_stamp, device_type, type, elapsed_time, bytes
     FROM v$backup_async_io
     WHERE type='OUTPUT';

SET_COUNT  SET_STAMP  DEVICE_TYPE  TYPE    ELAPSED_TIME       BYTES
---------  ---------  -----------  ------  ------------  ----------
      388  549511025  SBT_TAPE     OUTPUT         61000  6386450432

SQL> SELECT set_count, set_stamp, device_type, type, elapsed_time, bytes
     FROM v$backup_sync_io
     WHERE type='OUTPUT';

no rows selected

You should be using asynchronous I/O or I/O slaves to get the best backup I/O performance.


3. SPEED OF BACKUP DEVICES


The maximum speed at which a backup can run is dictated by:

Max MB/sec = min(disk read MB/sec, tape write MB/sec)

It is not possible to make a backup go faster than this, period.


The first part of a backup, as already explained, involves reading each datafile/archivelog file and placing the
blocks into read buffers. The maximum speed at which these buffers can be filled depends on the physical
location/caching and maximum read speed of the disks. The throughput RMAN is achieving from the disks can be
monitored using the effective_bytes_per_second column of the V$BACKUP_SYNC/ASYNC_IO views, where type='INPUT'. If
this is lower than the read rate advertised for your disks, you need to investigate why, by looking at OS data such
as sar and at the monitoring data particular to your disk/logical volume manager implementation.
The speed of the devices used for writing the backup can also be monitored in V$BACKUP_ASYNC/SYNC_IO, using the
effective_bytes_per_second column where type='OUTPUT'. If you are seeing a slower I/O rate than expected on the tape
devices, look at the MML and drive options to tune the physical tape block size (normally the larger the better) and
the tape compression (this always slows down backups), and make sure you are streaming data to the drive. Increasing
the number of files multiplexed into each channel may improve tape streaming, but it will reduce the performance of
restoring a subset of the datafiles contained in one backup set. Most new tape devices are equipped with some form of
memory buffer, so the importance of supplying adequate data for streaming is reduced; this is something you should
ask the tape device supplier about.
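
For example, a query like the following (a sketch) reports the throughput achieved on both the read and write sides
of recent backups:

SQL> SELECT set_count, device_type, type, effective_bytes_per_second
     FROM v$backup_async_io
     WHERE type IN ('INPUT', 'OUTPUT')
     ORDER BY set_count;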
RMAN has a feature that makes it possible to slow down the speed of a backup. The reason for doing this is to make
sure the I/O system does not get overloaded while the backup is running. By default, RMAN uses as much I/O bandwidth
as it can when reading the files being backed up, but this can be limited using the RATE parameter for the channels.
For example, the following channel configuration command limits the read rate from disk to 1MB/second, which, if the
drive offers 5MB/sec read speeds, leaves plenty of I/O bandwidth for other applications to function normally:

RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt RATE=1M;

4. AMOUNT OF DATA BEING BACKED UP (INCREMENTAL BACKUPS)


Probably one of the best ways to reduce the time taken by backups is simply to reduce the amount of data being backed
up. This is not as stupid as it sounds, and with some monitoring of the backups created, they can be adjusted to
reduce the backup time significantly.

Here are a few guidelines:

a. If there is any static data in the database that never changes, or changes very infrequently, such as price lookup
or definition data, place it in its own tablespace. This tablespace can be made read-only and then backed up. When it
is time to change the data, put the tablespace back into read-write mode, make the changes, put it back into
read-only mode and take another backup of it.

NOTE: When backing up tablespaces very infrequently, you must make sure the backup is not purged from the tape
library based on the date it was created. If this happens, you will no longer have the data needed for recovery. If
there is a limit placed on the age of backups, make sure a fresh backup of the read-only data is taken before the
expiration. RMAN can do this automatically when the Backup Optimization feature is used.
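
A sketch of the whole approach (the tablespace name lookup_data is hypothetical):

SQL>  ALTER TABLESPACE lookup_data READ ONLY;
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;   # skip files already backed up within retention limits
RMAN> BACKUP TABLESPACE lookup_data;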

b. Using differential incremental level 1 backups, it is possible to see the exact number of blocks that have changed
in each datafile since the last level 1 backup. The query below identifies datafiles that are not changing much and
that would be candidates for being backed up less frequently.

SELECT set_count, set_stamp, file#, datafile_blocks, blocks "BACKUP_BLOCKS"
  FROM v$backup_datafile
 ORDER BY set_count, file#;

c. Large datafiles that contain very little data still take time to back up, because the whole datafile must be read
and each datablock evaluated to decide whether it should be included in the backup.

To demonstrate this, the following results compare backups of a 500MB file with different amounts of free space:

%Free space  Number of Blocks Scanned  Number of Blocks Backed Up  Time Taken (secs)
100%         64000                     8                           22
50%          64000                     31678                       31
< 0.79%      64000                     63496                       49
Table 8: Backup time and datafile freespace

On the small test system used, backing up to disk, the worst case took only 49 seconds, but you can see that even
when the file was empty, backing up just 8 blocks took 45% of the worst-case time. Imagine this on a busier system,
with larger datafiles and many more of them.
If you have large datafiles (2GB or greater) that contain large amounts of free space, it is worthwhile resizing them
(if there are no object extents at the end of the file) and then setting them to autoextend, to prevent time being
wasted scanning empty space during the backup.

d. Consider using Block Change Tracking, introduced in Oracle Database 10g. This enables backups using the Fast
Incremental feature. Instead of RMAN scanning an entire datafile during a backup, with block change tracking enabled
a logfile (the block change tracking file) keeps track of changed datablocks, using bitmap patterns to represent
ranges of datablocks. Because Oracle doesn't keep track of each individual changed block, just a block range
represented by one bit, the size of the tracking file is kept very small in comparison to the size of the database.
The performance overhead on database updates is also small.
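
Enabling it is a single statement (the tracking file location shown is hypothetical), and v$block_change_tracking
confirms the status:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
     USING FILE '/u01/oradata/prod/bct.chg';
SQL> SELECT status, filename FROM v$block_change_tracking;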

To see the difference in DML performance, I used a test similar to the TPC-C benchmark with 5 warehouses and 10
concurrent users, with no think or keying time. Each session carries out 5000 transactions. The test was run using a
tool called HAMMERORA, available at http://hammerora.sourceforge.net/tpc-c.htm. Before each test the instance was
restarted, and system statistics were gathered before and after the test completed. There was no other activity on
the database. Session running times were gathered using logon and logoff triggers.
With block change tracking disabled, the average time for a session to complete was 28mins 52secs. With block change
tracking enabled, the average time increased to 29mins 49secs. This equates to a performance degradation of ~3%,
which is not very high considering the improvement in incremental backup speed demonstrated in the next test.
The improvement in backup speed using fast incrementals with block change tracking enabled was dramatic. To test
this, the database was backed up using a level 0 incremental, which scans all the datafiles from start to end even if
block change
tracking is enabled. DML activity was generated in the database and then a level 1 differential incremental was
taken. The exercise was carried out first with block change tracking disabled and then with it enabled. The table
below shows the results:

Fast Incrementals?  #Blocks in Database  #Blocks Read  #Blocks in Backup  Time Taken (secs)
No                  404160               404160        36567              156
Yes                 404160               72832         37215              35
Table 9: Speed increase using Fast Incremental backups

The increase in speed, due to the reduced number of blocks being read when using fast incremental backups, is clear.
The number of blocks read should be monitored, because if it rises to a value close to the number of blocks contained
in the datafile, the fast incremental will run slower than a normal incremental level 0, due to reading the bitmap
information as well as the full datafile.
To check whether a backup used the fast incremental feature, look at the USED_CHANGE_TRACKING column in
v$backup_datafile.
The speed of restoration is not affected by fast incremental backups: the resulting backup set is the same as one
created by a normal incremental backup, just created more quickly.
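
A query along these lines shows, per datafile backup, whether change tracking was used and how many blocks were
actually read:

SQL> SELECT file#, used_change_tracking, blocks_read, datafile_blocks
     FROM v$backup_datafile
     ORDER BY completion_time, file#;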

5. AMOUNT OF BLOCK CHECKING FEATURES ENABLED


RMAN offers the ability to run several different block sanity checks to detect corruption at backup and restore time.
A. HEAD AND TAIL
When RMAN reads the datafiles being backed up, it always checks that the tail portion (the last 4 bytes of each
block) matches the structures in the block header, thereby detecting a fractured/split block. If a mismatch is found,
the block is re-read up to five times. If the same mismatch occurs on the last read, the block is reported as corrupt
in the RMAN session and an error is written to the alert.log. This type of block checking cannot be turned off.

B. CHECKSUMS
By default RMAN calculates a checksum for every block read and written as part of the backup, irrespective of the
DB_BLOCK_CHECKSUM init.ora parameter setting. If the block already contains a checksum value, RMAN reads it and
validates it against the newly calculated value. If there is a checksum mismatch, the problem is reported. The
checksums calculated by RMAN are stored with the block in the backup set. On restore the checksum is recalculated and
compared, and depending on the value of DB_BLOCK_CHECKSUM the checksum may be cleared before the block is written to
disk. Checksum creation can be turned off by specifying the NOCHECKSUM clause as part of the backup command; however,
this is not recommended. Checksums are always created for the SYSTEM datafiles and the controlfile.

C. LOGICAL BLOCK CHECKS


RMAN also offers a feature to check the logical structures contained within each data block being backed up: for
example, that free space counts are correct, the row index directory is valid, index block pointers are correct, and
no rows overlap each other. These are the same logical block checks as those performed when the db_block_checking
initialization parameter is enabled. To turn on logical checking during the backup, use the CHECK LOGICAL keywords
within the backup command. By default, logical checking is not used.
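
For example (standard syntax; the VALIDATE form checks the database without creating a backup):

RMAN> BACKUP CHECK LOGICAL DATABASE;
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;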


To demonstrate the effect each check has on backup performance, I ran a full database backup with each option
enabled. The results are shown below:

Checksums?  Check Logical?  Blocks read  Time (secs)  CPU (secs)
No          No              825920       623.67       227.08
Yes         No              825920       622.33       229.93
Yes         Yes             825920       624.67       245.81
Table 10: Effects of checksum and logical block checks on backups

Table 10 shows that there really isn't any difference between calculating checksums and not doing so. Each datablock
already contains a checksum, because the init.ora parameter DB_BLOCK_CHECKSUM defaults to TRUE, so when RMAN reads
each block for the backup it computes a new checksum and compares it against the one already contained in the
datablock. It is not recommended to turn off checksums, given the value this RMAN feature adds in detecting block
corruption during backups.
Adding the CHECK LOGICAL parameter to the RMAN backup shows a slight decrease in performance, in both time and the
amount of CPU used. The backups used in the test were small and went to disk, but the results give a guide to the
additional time and effort RMAN must expend to check the logical structures of each block included in the backup set.
This should be tested and benchmarked in your own environment to see what the performance effects are.
The effects of checksums and logical block checks are similar during RMAN restores:

Checksums?  Check Logical?  #Blocks in DB  Time (secs)  CPU (secs)
No          No              825920         559          210.53
Yes         No              825920         559          211.14
Yes         Yes             825920         610          218.53
Table 11: Effects of checksum and logical block checks on restores

Due to my small test system, the effects of turning on checksums and logical block checks are not that great, but
when backing up and restoring a multi-terabyte database the overheads will increase. Disabling the block checking
features, however, comes at a much larger price, because RMAN can no longer recognize and alert you to block
corruption. Having to restore and recover parts of the database due to corruption will take far longer than the
overhead involved in backing up and restoring with the checksum and check logical options.

6. USE OF COMPRESSION
Before Oracle Database 10g, the only compression RMAN performed was unused block compression: unused blocks were not
backed up. Oracle Database 10g introduced a binary compression feature that compresses the backup sets. Due to the
increased amount of work involved in the compression, backup and restore times will increase. CPU usage will also
increase.
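
Compression is requested per backup, or configured as the default for a device type (standard 10g syntax):

RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;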


To demonstrate this, a full database backup was taken, and the time, amount of CPU and final backup set size were
recorded:

#Blocks Read  Compression?  Time (secs)  CPU (secs)  Backup Set Size (MB)
825920        No            622          227         6079
825920        Yes           1194         1100        517
Table 12: Effect of RMAN compression on backup speed

RMAN compression provided an impressive compression rate of approximately 92%. For users with limited storage space
for backups this is great, but it comes with added CPU and time costs, due to the amount of work it takes to apply a
compression algorithm.
If the MML also offers tape compression, testing should be carried out to see whether it offers a better compression
ratio and backup time than RMAN compression. You should never use RMAN compression and MML compression together, due
to the decreased performance during backup and restoration. It would be wise to test the time and CPU used with both
RMAN and MML compression to see which offers the best value.
If you have no concerns about the amount of space a backup set occupies, then don't use compression.
Restoring a compressed backup set has similar performance effects on the CPU usage and overall time:

Compression?  #Blocks Read in Backup Set  Backup Set Size (MB)  Time (secs)  CPU (secs)
No            778109                      6079                  559          209.37
Yes           778109                      517                   1133         1093.37
Table 13: Effect of RMAN compression on restoration speed

As with the backup compression tests, the time it takes to restore the backup roughly doubles, with far more time
spent on the CPU.
RMAN compression can be advantageous when the tape devices are mounted across the network. In that case, RMAN must
transfer the backup pieces from the database server to the tape server, and compression will significantly reduce the
amount of data traveling across the network, possibly reducing the backup and restore times as well as causing less
inconvenience to other network users.


FACTORS AFFECTING MEDIA RECOVERY PERFORMANCE


There are a number of factors that will affect the performance of recovery, including:
- The number of archivelogs and/or incrementals being applied
- The number of datafiles needing recovery
- Whether the archivelogs are available on disk outside of a backup set
- Whether parallel recovery is used
- General database performance

This section assumes that, at this stage of the recovery, the datafiles have already been restored by RMAN from a
base level backup (level 0 or a full backup).

1. THE NUMBER OF ARCHIVELOGS AND/OR INCREMENTALS BEING APPLIED


After the base level backup has been restored by RMAN, the recovery phase begins. RMAN first looks for any suitable
incremental backups that can be used to roll the database forward. If no suitable incremental backups are found, the
required archivelogs are looked for (assuming you are running in ARCHIVELOG mode). It has long been documented in
several places that recovering the database using incremental backups is faster than applying archived redo log
files. This can easily be tested. Take a base level 0 backup of the database in ARCHIVELOG mode. For a period of time
run DML against the database so that it creates between 10 and 20 archivelogs, and then take an incremental backup.
Delete the database and restore the base level 0 backup. Then use RMAN to recover the database, which will restore
the incremental backup, and time it. Restore the base level 0 again, but instead of using RMAN to recover the
database, use SQL*Plus with a 'RECOVER AUTOMATIC DATABASE' command, and compare the time it takes with the
incremental restoration. I ran this test and found the incremental restore took 46 seconds, compared with 789 seconds
for recovery by applying archivelogs.
On the test system the backups were created on disk, so restoring incremental backups was predictably much faster. If
the incremental backups were coming from a tape device or tape silo that had to locate the tape, mount it and then
find the start of the backup, it could have taken much longer. The point is that the speed at which archivelogs can
be applied and backups restored from the backup media will vary significantly from system to system, so it is worth
investing some time in testing the differences. Only then can you implement the backup strategy that will meet the
time expectations when a failure occurs.
As documented in an earlier test, the type of incremental backup (cumulative or differential) will also help
determine whether applying archivelogs is slower than restoring an incremental backup.

2. THE NUMBER OF DATAFILES NEEDING RECOVERY


When media recovery is carried out, each datablock affected by a redo record must be read into the buffer cache
before the redo can be applied to it. If you recover more datafiles than necessary, you cause recovery to do more
work than necessary. Only restore and recover the minimum number of datafiles needed to fix the failure: there is no
point restoring all 100 datafiles if only 10 have failures, unless of course you are running in NOARCHIVELOG mode,
where media recovery is not an option.
If the failure is due to block corruption within a datafile, consider using RMAN's Block Media Recovery (BMR)
feature, introduced in Oracle9i. Instead of restoring and recovering a whole datafile, individual datablocks are
restored from a backup, and the archivelogs and/or incremental backups are used to recover them. To show the amount
of time this can save, a test that corrupted different numbers of datablocks in a 102400-block (8KB block size)
datafile was carried out, comparing the total recovery time (including the restoration) of block media recovery and
datafile recovery. The block corruptions were spread evenly throughout the datafile. Before the blocks were
corrupted, a level 0 backup of the datafile was taken and the TPC-C tests were run against the database (changing
34113 datablocks in the datafile); all the archivelogs remained on disk, so they did not need to be restored during
recovery.


Number of Corrupt Blocks  Datafile Recovery Time (secs)  Block Media Recovery Time (secs)
10                        941                            145
99                        925                            155
991                       937                            219
5000                      922                            616
10000                     938                            1156
Table 14: Speed of block media recovery compared to datafile recovery

As can be seen in Table 14, block media recovery showed a significant time saving for recovering individual blocks
over restoring and recovering the whole datafile. There will be some point at which block media recovery becomes more
expensive than recovering the whole datafile; in this test that was somewhere around 10000 blocks, which equates to
approximately 10% of the datafile. This value will be different on every system, and also depends on how the corrupt
blocks are spread out, but it demonstrates that using BMR on too many blocks within a datafile can in fact be slower.
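
As a sketch (the datafile and block numbers are hypothetical), individual blocks, or every block recorded in
v$database_block_corruption, can be repaired as follows:

RMAN> BLOCKRECOVER DATAFILE 7 BLOCK 233, 235, 239;
RMAN> BLOCKRECOVER CORRUPTION LIST;   # repair all blocks in v$database_block_corruption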

3. IF THE ARCHIVELOGS ARE AVAILABLE ON DISK OUTSIDE OF A BACKUP SET


If the archivelogs needed for recovery are already on disk, recovery will complete more quickly than if the
archivelogs must first be restored from a backup set. If backup compression (RMAN or MML) is being used, recovery
will take even longer. In a simple test that either applied 20 archivelogs directly or restored and then applied
them, it took over 2 minutes extra to restore them, and this was with a disk backup. The difference will be much
greater if the backups are on tape and you need to apply a hundred or more archivelogs.
It is a good idea to keep a number of archivelogs available on disk to aid recovery speed should it be needed. The
number of logfiles to keep depends on the frequency of backups and the disk space available to store them.

4. IF USING PARALLEL RECOVERY


By default Oracle uses a single process, the one issuing the recovery command, to carry out media recovery, unless
the PARALLEL_AUTOMATIC_TUNING init.ora parameter is in use. This single process reads the archivelogs and determines
which changes are required for the specific datafiles being recovered. Each datablock is read into the buffer cache
and the redo is applied to it. Each time an archivelog is completely applied, a media recovery checkpoint occurs,
which signals DBWR to write all the dirty blocks in the buffer cache to the datafiles. The file headers and
controlfile also get updated with checkpoint information. This is a lot for a single process to do, especially when
recovering a large number of datafiles and applying a large amount of redo.
To help decrease the time it takes to carry out media recovery, Oracle provides the Parallel Recovery feature. On
multi-CPU machines, you can specify a degree of parallelism for the RECOVER command. The process that issues the
recovery command reads the archivelogs as before, but instead of reading the blocks into the cache and applying the
redo to them directly, it passes that work to parallel execution slave processes. To make sure the redo is applied to
the datafiles in SCN order, the parallel slaves work on different ranges of datablocks, so they will not interfere
with each other, and the redo will still get applied in SCN order for each datablock.
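
A sketch of invoking parallel recovery from SQL*Plus (the degree of 4 is an arbitrary illustration; this assumes the
PARALLEL clause of the RECOVER command):

SQL> RECOVER DATABASE PARALLEL 4;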
When recovering a small number of datafiles, parallel recovery may actually take longer, due to the extra computation
involved in partitioning the work plus the extra IPC communication between the slave and coordinator processes.
Monitor v$session_wait and v$system_event for the top wait events. If you are seeing 'PX Deq' events, of which there
are several different types, then reducing the degree of parallelism may reduce the recovery time.
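
For example (a sketch):

SQL> SELECT event, total_waits, time_waited
     FROM v$system_event
     WHERE event LIKE 'PX Deq%'
     ORDER BY time_waited DESC;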


You will also notice an increase in CPU usage when using parallel recovery, because the coordinator process is
calculating the work partitioning for each redo change in the archivelog and there are more processes carrying out
the recovery. As long as the CPU is not being saturated (look at something equivalent to 'sar -u'), parallel recovery
is not causing CPU issues.
The final thing to note about parallel recovery is the way it uses memory. If the init.ora parameter
PARALLEL_AUTOMATIC_TUNING is set to FALSE (its default value), the buffers used for messaging between the parallel
processes are 2148 bytes each and are allocated from the shared pool. If PARALLEL_AUTOMATIC_TUNING is set to TRUE,
the default buffer is 4096 bytes and is allocated from the large pool. By adjusting the size of the buffers with
PARALLEL_EXECUTION_MESSAGE_SIZE, the time taken by parallel recovery may be decreased, but the memory use in the
shared or large pool should also be monitored to prevent resource starvation. For more information on media recovery,
take a look at the following white paper:
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gRecoveryBestPractices.pdf

5. GENERAL DATABASE PERFORMANCE


If the database performs slowly in day-to-day activities, you shouldn't expect a fast recovery time. Recovery uses
the database to carry out its tasks, so if there are performance issues before recovery, recovery is likely to have
similar issues.

Common areas that, when optimized, can help recovery times include:
• I/O – Recovery is very read and write intensive: all the archivelog contents must be read, the affected datablocks
read from the datafiles, and the dirty blocks written back to disk once the redo has been applied. By monitoring
v$filestat along with OS-specific tools, make sure the read and write times are acceptable for the hardware being
used. Slow I/O can significantly increase the time it takes to carry out media recovery.
• DBWR performance – DBWR's main responsibility is to ensure there are enough clean buffers in the buffer cache to be
used when data is being read from the datafiles. When a block has been updated, it is DBWR's job to write the buffer
to disk and return it for reuse within the buffer cache. If DBWR cannot keep up with cleaning the buffers, you will
notice waits on the 'free buffer waits' event. If I/O is not a problem, then using multiple DBWR processes (or DBWR
slaves if your OS does not support asynchronous I/O, or asynchronous I/O is disabled) should reduce the waits.
• CPU Utilization – Because each datablock that requires recovery is read into the buffer cache before the redo
change is applied to it, there are a number of latches that must be acquired first, including the cache buffers
chains and cache buffers lru chain latches. Acquiring latches and making the changes to the blocks all takes CPU
cycles, so make sure there is enough CPU bandwidth for the database during media recovery. It is not uncommon to see
a server that normally handles a slow trickle of transactions with lots of CPU to spare turn into a CPU hog during
media recovery, due to having to apply a much more concentrated stream of datablock changes. If you are using
parallel recovery, CPU usage will be even higher, as already discussed.


CONCLUSION
This paper has demonstrated a number of areas to consider to optimize the speed of backups, restorations and recoveries.
The speed of backup and restore activities can be controlled by:
• Using incremental backups, and cumulative incremental backups if restoration speed is more important than backup speed
• Allocating one channel to each available tape device, or to each controller if tape devices share a single
controller
• Increasing the size of the memory buffers used when creating backups to tape
• Reducing the amount of data that needs to be backed up – scanning empty or static datafiles can waste significant time and
resources
• Using the Block Change Tracking feature to substantially increase performance of incremental backups
• The type of block checking features enabled for backup and restoration
• The type of compression being used by RMAN or the Media Management Layer

The speed of media recovery can be controlled by:


• The number of archivelogs that need applying – the greater the number, the longer recovery will take
• The number of incremental backups that need to be restored and applied
• The type of incremental backup being used (differential or cumulative)
• The number of datafiles needing recovery – the more datafiles restored and recovered, the more datablocks must be
read into the buffer cache and written back to the datafiles
• Using block media recovery, instead of restoring and recovering whole datafiles, when the problem is corruption
• Using parallel recovery, which partitions the work of applying redo between parallel execution slave processes to
decrease the time it takes to apply the redo
• General database performance tuning – if the database performs slowly, so will recoveries, so the database should
be optimized before beginning recovery optimization

The tests carried out in this paper demonstrate the key optimization points, and it should be understood that each
system will offer different performance gains from the suggested adjustments. The speed of backup, restoration and
recovery is heavily dependent on the speed of the hardware being used by the system, plus any network latency
introduced when the backups are stored on remote tape servers.
Backup, restoration and recovery should be thoroughly tested, not just to ensure they will protect you against
failures but also to make sure the backups complete in the allotted time window and the restore and recovery can
complete within an agreed Service Level Agreement. Don't leave tuning a backup, restore or recovery until it is
performing slowly in production and you are in the middle of a time constraint.
