Recovery facilities:
A DBMS should provide the following facilities to assist with recovery:
1. A backup mechanism that makes periodic backup copies of the database.
2. Logging facilities that keep track of the current state of transactions and
database changes.
3. A checkpoint facility that enables updates to the database that are in progress to
be made permanent.
4. A recovery manager that allows the system to restore the database to a
consistent state following a failure.
1. Backup mechanism:
The DBMS should provide a mechanism to make backup copies of the
database and the log file at regular intervals, without requiring the system to
first be stopped. The backup copy of the database can be used to recover the
database in the event that the database has been damaged or destroyed. A backup
can be a complete copy of the entire database or an incremental copy. An
incremental backup consists only of modifications made since the last complete
or incremental backup. The backups are usually stored on offline storage like
magnetic tapes.
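The difference between a complete and an incremental backup can be sketched in Python. The file-level, timestamp-based change test below is an illustrative assumption (a real DBMS tracks changed pages or uses its log, not filesystem timestamps), but it shows the idea of copying only what has changed since the last backup:

```python
import shutil
from pathlib import Path

def incremental_backup(db_dir: str, backup_dir: str, last_backup_time: float) -> list[str]:
    """Copy only the database files modified since the last backup.

    Using file modification times as the change test is an illustrative
    assumption made for this sketch.
    """
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in Path(db_dir).iterdir():
        if f.is_file() and f.stat().st_mtime > last_backup_time:
            shutil.copy2(f, dest / f.name)   # copy2 preserves metadata for the archive copy
            copied.append(f.name)
    return copied
```

A complete backup would simply copy every file regardless of its modification time.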
2. Logging:
To keep track of the database transactions, the DBMS maintains special
files called log files that contain information about all updates to the database.
The log file contains information like transaction identifier, type of the log record
(transaction start, insert, update, delete, commit, abort, etc.), identifier of the data
item affected by the database action (insert, delete, update operations), before-
image of the data item, after-image of the data item, log management
information, checkpoint records, and so on. The log file is stored online so that
recovery can be fast. Archive copies of the log file are also maintained offline,
with only the most recent copy kept online. The online copy provides recovery
from minor failures; in the case of a major failure, the offline log archives are
used and an incremental recovery is performed.
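As a rough illustration, a log record carrying the fields described above, and an append-only log file, might look like the following. The field names and the JSON-lines format are assumptions made for readability; real DBMS log formats are binary and vendor-specific:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class LogRecord:
    tid: int                            # transaction identifier
    rec_type: str                       # 'start', 'insert', 'update', 'delete', 'commit', 'abort'
    item: Optional[str] = None          # identifier of the affected data item
    before_image: Optional[str] = None  # value before the change (used for undo)
    after_image: Optional[str] = None   # value after the change (used for redo)

def append_to_log(log_path: str, record: LogRecord) -> None:
    # Append one record per line so the log can be replayed sequentially.
    with open(log_path, "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
```

Keeping the before-image and after-image in each record is what lets the recovery manager later undo or redo the corresponding database action.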
3. Checkpointing:
The log file information is used to recover the database from a failure. One
problem with this approach is that we may not know how far back in the log to
search, and we may end up redoing transactions that have already been safely
written to the database. To limit the amount of searching and subsequent processing that we
need to carry out on the log file, we use checkpointing. A checkpoint is a point of
synchronization between the database and the transaction log file. All buffers are
force written to secondary storage at the checkpoint. Checkpoints are also called
syncpoints or savepoints.
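A minimal sketch of what happens at a checkpoint, using a plain dict as a stand-in for the buffer pool and a caller-supplied flush function as a stand-in for the force-write to secondary storage (both are hypothetical simplifications of a real buffer manager):

```python
from typing import Callable

def take_checkpoint(log: list, dirty_pages: dict,
                    flush: Callable[[str, str], None], active_tids: set) -> None:
    """Force-write all modified buffers, then record a checkpoint.

    `dirty_pages` and `flush` are hypothetical stand-ins for a buffer
    manager and a force-write to secondary storage.
    """
    for page_id, data in list(dirty_pages.items()):
        flush(page_id, data)        # force each modified buffer to disk
    dirty_pages.clear()             # buffers and database are now in sync
    # The checkpoint record notes which transactions are active, so a later
    # recovery knows where redo and undo must begin.
    log.append({"type": "checkpoint", "active": sorted(active_tids)})
```

After this point, any transaction that committed before the checkpoint is guaranteed to be reflected in the database and never needs to be redone.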
If transactions are executed serially, when a failure occurs we check the
log file to find the most recent transaction that started before the last
checkpoint. Any earlier transactions would have committed previously and
would have been written to the database at the checkpoint. Therefore, we need
only redo the transaction that was active at the checkpoint and any subsequent
transactions for which both start and commit records appear in the log. If a
transaction was active at the time of the failure, it must be undone. If
transactions are performed concurrently, we redo all transactions that have
committed since the checkpoint and undo all transactions that were active at
the time of the failure.
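The redo/undo rule for concurrent transactions can be sketched as a single classification pass over the log. The dict-based record shapes used here are illustrative assumptions, not a real log format:

```python
def recover(log: list) -> tuple[set, set]:
    """Classify transactions after a crash.

    Returns (redo, undo): transactions to redo (committed since the last
    checkpoint) and transactions to undo (active at the time of failure).
    Record shapes like {"type": "commit", "tid": 1} are assumed for this sketch.
    """
    # Find the last checkpoint; recovery only needs the log from there on.
    cp = max((i for i, r in enumerate(log) if r["type"] == "checkpoint"), default=-1)
    # Transactions listed in the checkpoint record were active at the checkpoint.
    active = set(log[cp]["active"]) if cp >= 0 else set()
    redo, undo = set(), active
    for r in log[cp + 1:]:
        if r["type"] == "start":
            undo.add(r["tid"])        # active until we see its commit or abort
        elif r["type"] == "commit":
            undo.discard(r["tid"])
            redo.add(r["tid"])        # committed after the checkpoint: redo
        elif r["type"] == "abort":
            undo.discard(r["tid"])    # already rolled back; nothing to redo
    return redo, undo
```

Any transaction still in the `undo` set when the scan ends was active at the time of the failure, which is exactly the set the text says must be undone.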