
Building a failover database using Oracle Database 11g Standard Edition and File Watchers

http://manchev.org/2012/01/building-a-failover-database-using-oracle-database-11g-standardedition-and-file-watchers/

We are all aware of Data Guard and the various disaster protection mechanisms that come with Oracle Database Enterprise Edition. In this article I will show you how to build a remote failover database when all you have are Standard Edition databases at both ends.

If we want to keep an exact replica of a production database, we have to take care of three things. First, we have to ship the changes (archive logs) to the failover database. Second, we have to keep track of what was shipped, so we know what needs to be recovered if something goes wrong. Third, we have to apply the changes at the failover database.

Before 11gR2 came out, detecting and shipping log files between hosts had to be done outside of the database. In GNU/Linux environments most people use rsync or a similar program to do the job. On the other hand, I always prefer to do such tasks within the database, and relying on the OS is not always an option (what if one of the databases runs on Windows?). In this tutorial I will show you how to pick up archivelogs by using File Watchers (introduced in 11gR2) and transfer them via FTP to a remote host.

Demonstration scenario

I will be using two hosts, both running Oracle Linux 5.5. The one with the production database is called el5-prd and the one that will host the failover database is called el5-backup. The production host has Database 11gR2 installed. There is a default database configured and it includes the sample schemas. The el5-backup host has only the Oracle software installed. Both installations reside in /u01/app/oracle/product/11.2.0/db_orcl and the software owner is user oracle.

We should perform the following steps to build the failover configuration:

1. Set the production database in archivelog mode.
2. Perform a full database backup and create copies of the control and parameter files.
3. Prepare the failover server for restore: set up database directories; copy the backup, control, parameter and password files; create a listener file.
4. Restore the backup on el5-backup.
5. Install ftpd on el5-backup.
6. Setup ACLs for FTP transfer and install FTP packages on el5-prd.
7. Setup the archivelog directory and test FTP transfer from the production host.
8. Setup a file watcher on el5-prd.
9. Test that archivelogs are shipped.
10. Setup a mechanism to apply and delete the shipped logs on el5-backup.

The list is quite long, so let's begin.

Set the production database in archivelog mode

First we have to create an OS directory where the database will write the archivelogs. Log in to the production host as the oracle software owner and create an empty directory. I will create a directory named archivelog in my FRA.
[oracle@el5-prd ~]$ mkdir /u01/app/oracle/fast_recovery_area/ORCL/archivelog
[oracle@el5-prd ~]$

Next, put the database in archivelog mode.


[oracle@el5-prd]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Dec 14 07:24:02 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/' scope=spfile;

System altered.

SQL> alter system set log_archive_format='%t_%s_%r.arc' scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  422670336 bytes
Fixed Size                  1345380 bytes
Variable Size             264243356 bytes
Database Buffers          150994944 bytes
Redo Buffers                6086656 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> alter database force logging;

Database altered.

SQL> exit
Disconnected from Oracle Database 11g Release 11.2.0.3.0 - Production
[oracle@el5-prd]$
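Before moving on, it is worth confirming the result from SQL*Plus. A quick sanity check (a sketch; V$DATABASE is a standard dynamic performance view):

```sql
SQL> select log_mode, force_logging from v$database;
```

The two columns should now report ARCHIVELOG and YES respectively.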

Perform full database backup and create copies of control and parameter files

We perform a full backup of the production database by using RMAN.
[oracle@el5-prd ~]$ rman target=/

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Dec 10 14:24:16 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1297199097)

RMAN> backup database plus archivelog;

Starting backup at 10-DEC-11
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=1 STAMP=769530270
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_annnn_TAG20111210T142431_7g6mw03l_.bkp tag=TAG20111210T142431 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-DEC-11

Starting backup at 10-DEC-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_nnndf_TAG20111210T142433_7g6mw1lw_.bkp tag=TAG20111210T142433 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:26
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_ncsnf_TAG20111210T142433_7g6mytqr_.bkp tag=TAG20111210T142433 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-DEC-11

Starting backup at 10-DEC-11
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=7 RECID=2 STAMP=769530364
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_annnn_TAG20111210T142604_7g6myw91_.bkp tag=TAG20111210T142604 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-DEC-11

RMAN> exit

Recovery Manager complete.
[oracle@el5-prd ~]$

Next we create copies of the control and parameter file, placing them in the oracle user home directory.
[oracle@el5-prd]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 10 14:28:45 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> alter database create standby controlfile as '/home/oracle/orcl-backup.ctl';

Database altered.

SQL> create pfile='/home/oracle/initORCL-backup.ora' from spfile;

File created.

SQL>

Prepare the failover server for restore

After you have completed a software-only installation of Database 11gR2, you have to create the following directories, which are needed to successfully restore the database backup:
[oracle@el5-backup ~]$ mkdir -p /u01/app/oracle/oradata/ORCL [oracle@el5-backup ~]$ mkdir -p /u01/app/oracle/fast_recovery_area/ORCL [oracle@el5-backup ~]$ mkdir -p /u01/app/oracle/admin/ORCL/adump

Next we take the control file copy from the production server.
[oracle@el5-backup ~]$ scp oracle@el5-prd:/home/oracle/orcl-backup.ctl /u01/app/oracle/oradata/ORCL/control01.ctl
oracle@el5-prd's password:
orcl-backup.ctl                               100% 9520KB   9.3MB/s   00:01
[oracle@el5-backup ~]$

Another copy of the control file goes to the FRA.


[oracle@el5-backup ~]$ cp /u01/app/oracle/oradata/ORCL/control01.ctl /u01/app/oracle/fast_recovery_area/ORCL/control02.ctl
[oracle@el5-backup ~]$

We also need the parameter and the password file.


[oracle@el5-backup ~]$ scp oracle@el5-prd:/home/oracle/initORCL-backup.ora /home/oracle/
oracle@el5-prd's password:
initORCL-backup.ora                           100%  945     0.9KB/s   00:00
[oracle@el5-backup ~]$ scp -r oracle@el5-prd:/u01/app/oracle/product/11.2.0/db_orcl/dbs/orapwORCL /u01/app/oracle/product/11.2.0/db_orcl/dbs/orapwORCL
oracle@el5-prd's password:
orapwORCL                                     100% 1536     1.5KB/s   00:00
[oracle@el5-backup ~]$

Let's copy the archivelogs and the backup as well.


[oracle@el5-backup ~]$ scp -r oracle@el5-prd:/u01/app/oracle/fast_recovery_area/ORCL/archivelog /u01/app/oracle/fast_recovery_area/ORCL/
oracle@el5-prd's password:
o1_mf_1_7_7g7hwtfw_.arc                       100%   23KB  22.5KB/s   00:00
o1_mf_1_6_7g7hs8tx_.arc                       100% 4085KB   4.0MB/s   00:00
[oracle@el5-backup ~]$
[oracle@el5-backup ~]$ scp -r oracle@el5-prd:/u01/app/oracle/fast_recovery_area/ORCL/backupset /u01/app/oracle/fast_recovery_area/ORCL/
oracle@el5-prd's password:
o1_mf_annnn_TAG20111210T222058_7g7hsbnq_.bkp  100% 4086KB   4.0MB/s   00:00
o1_mf_annnn_TAG20111210T222250_7g7hwv01_.bkp  100%   24KB  24.0KB/s   00:00
o1_mf_ncsnf_TAG20111210T222059_7g7hws7f_.bkp  100% 9600KB   9.4MB/s   00:00
o1_mf_nnndf_TAG20111210T222059_7g7hsdkp_.bkp  100% 1172MB  13.6MB/s   01:26
[oracle@el5-backup ~]$

The final set of files to copy is the redo logs.

[oracle@el5-backup ~]$ scp oracle@el5-prd:/u01/app/oracle/oradata/ORCL/redo* /u01/app/oracle/oradata/ORCL
oracle@el5-prd's password:
redo01.log                                    100%   50MB  16.7MB/s   00:03
redo02.log                                    100%   50MB  25.0MB/s   00:02
redo03.log                                    100%   50MB  10.0MB/s   00:05
[oracle@el5-backup ~]$

The last thing we have to do is to create a listener.ora file.


[oracle@el5-backup ~]$ cat >> /u01/app/oracle/product/11.2.0/db_orcl/network/admin/listener.ora << EOF
> LISTENER =
>   (DESCRIPTION_LIST =
>     (DESCRIPTION =
>       (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
>       (ADDRESS = (PROTOCOL = TCP)(HOST = el5-backup)(PORT = 1521))
>     )
>   )
>
> ADR_BASE_LISTENER = /u01/app/oracle
> EOF
[oracle@el5-backup ~]$

As you can see, the listener for our failover database will use the default port 1521. Time to restore from the backup.

Restore the backup on el5-backup

Before running the restore we have to start up and bring the failover database to the mount state. The first step is to start the listener on el5-backup.
[oracle@el5-backup ~]$ lsnrctl start

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 10-DEC-2011 22:45:13

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/11.2.0/db_orcl/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.3.0 - Production
System parameter file is /u01/app/oracle/product/11.2.0/db_orcl/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/el5-backup/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=el5-backup)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                10-DEC-2011 22:45:15
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/db_orcl/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/el5-backup/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=el5-backup)(PORT=1521)))
The listener supports no services
The command completed successfully
[oracle@el5-backup ~]$

Next we have to set the SID and create an SPFILE from the parameter file that we have in our home directory.
[oracle@el5-backup ~]$ export ORACLE_SID=ORCL
[oracle@el5-backup ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 10 22:45:52 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile='/home/oracle/initORCL-backup.ora';

File created.

SQL>

We can now restore the database by using RMAN.


[oracle@el5-backup ~]$ rman target=/

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Dec 10 22:47:11 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup mount;

Oracle instance started
database mounted

Total System Global Area     422670336 bytes

Fixed Size                     1345380 bytes
Variable Size                268437660 bytes
Database Buffers             146800640 bytes
Redo Buffers                   6086656 bytes

RMAN> restore database;

Starting restore at 10-DEC-11
Starting implicit crosscheck backup at 10-DEC-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
Crosschecked 4 objects
Finished implicit crosscheck backup at 10-DEC-11

Starting implicit crosscheck copy at 10-DEC-11
using channel ORA_DISK_1
Finished implicit crosscheck copy at 10-DEC-11

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fast_recovery_area/ORCL/archivelog/2011_12_10/o1_mf_1_7_7g7hwtfw_.arc
File Name: /u01/app/oracle/fast_recovery_area/ORCL/archivelog/2011_12_10/o1_mf_1_6_7g7hs8tx_.arc

using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/ORCL/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/ORCL/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/ORCL/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/ORCL/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/ORCL/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_nnndf_TAG20111210T222059_7g7hsdkp_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_nnndf_TAG20111210T222059_7g7hsdkp_.bkp tag=TAG20111210T222059
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:36
Finished restore at 10-DEC-11

RMAN> exit

Recovery Manager complete.
[oracle@el5-backup ~]$

We successfully created an identical copy of the production database.

Install ftpd on el5-backup

Installing the FTP daemon on Oracle Linux is pretty straightforward.
[root@el5-backup ~]# yum install vsftpd
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package vsftpd.i386 0:2.0.5-16.el5_4.1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch        Version                    Repository         Size
================================================================================
Installing:
 vsftpd         i386        2.0.5-16.el5_4.1           el5_u5_base       140 k

Transaction Summary
================================================================================
Install       1 Package(s)
Upgrade       0 Package(s)

Total download size: 140 k
Is this ok [y/N]: Y
Downloading Packages:
vsftpd-2.0.5-16.el5_4.1.i386.rpm                         | 140 kB     00:01
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : vsftpd                                                  1/1

Installed:
  vsftpd.i386 0:2.0.5-16.el5_4.1

Complete!
[root@el5-backup ~]#

You should not forget to reconfigure the firewall on the failover server to allow FTP communication. First, add ip_conntrack_ftp to the IPTABLES_MODULES line in /etc/sysconfig/iptables-config. The line should look like this:
IPTABLES_MODULES="ip_conntrack_netbios_ns ip_conntrack_ftp"

Next, edit /etc/sysconfig/iptables and add a rule for the FTP traffic (be sure to put the line before the REJECT rule). The line you have to add in iptables looks like this:
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 21 -j ACCEPT

Bounce iptables and set the FTP service to autostart with the server.
[root@el5-backup ~]# service iptables restart
Flushing firewall rules:                                   [  OK  ]
Setting chains to policy ACCEPT: filter                    [  OK  ]
Unloading iptables modules:                                [  OK  ]
Applying iptables firewall rules:                          [  OK  ]
Loading additional iptables modules: ip_conntrack_netbios_n[  OK  ]
[root@el5-backup ~]# chkconfig vsftpd on
[root@el5-backup ~]# service vsftpd start
Starting vsftpd for vsftpd:                                [  OK  ]
[root@el5-backup ~]#

You might want to test the access from el5-prd to the failover server.
[oracle@el5-prd ~]$ ftp el5-backup
Connected to el5-backup.
220 (vsFTPd 2.0.5)
530 Please login with USER and PASS.
530 Please login with USER and PASS.
KERBEROS_V4 rejected as an authentication type
Name (el5-backup:oracle): oracle
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> bye
221 Goodbye.
[oracle@el5-prd ~]$

Setup ACLs for FTP transfer and install FTP packages on el5-prd

Our next task is to prepare the production server for communicating with el5-backup over FTP. We start by creating a dedicated database user that will be used for shipping and tracking the archivelog files. I will name it logship.
[oracle@el5-prd ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Dec 14 07:24:02 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> create user logship identified by logship;

User created.

SQL> grant connect, resource to logship;

Grant succeeded.

SQL>

Next we should configure an Access Control List (ACL) that will allow FTP connections to el5-backup for user logship. We have to use the CREATE_ACL, ADD_PRIVILEGE and ASSIGN_ACL procedures from the DBMS_NETWORK_ACL_ADMIN package. We will call the procedures with the following parameters:
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
  acl         => 'ftp.xml',
  description => 'Allow FTP connections',
  principal   => 'LOGSHIP',
  is_grant    => TRUE,
  privilege   => 'connect',
  start_date  => SYSTIMESTAMP,
  end_date    => NULL);

DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
  acl         => 'ftp.xml',
  principal   => 'LOGSHIP',
  is_grant    => FALSE,
  privilege   => 'connect',
  position    => NULL,
  start_date  => NULL,
  end_date    => NULL);

DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
  acl         => 'ftp.xml',
  host        => 'el5-backup',
  lower_port  => NULL,
  upper_port  => NULL);

Here is an output of their execution:


SQL> exec dbms_network_acl_admin.create_acl (acl => 'ftp.xml', description => 'Allow FTP connections', principal => 'LOGSHIP', is_grant => TRUE, privilege => 'connect', start_date => SYSTIMESTAMP, end_date => NULL);

PL/SQL procedure successfully completed.

SQL> exec dbms_network_acl_admin.add_privilege (acl => 'ftp.xml', principal => 'LOGSHIP', is_grant => FALSE, privilege => 'connect', position => NULL, start_date => NULL, end_date => NULL);

PL/SQL procedure successfully completed.

SQL> exec dbms_network_acl_admin.assign_acl (acl => 'ftp.xml', host => 'el5-backup', lower_port => NULL, upper_port => NULL);

PL/SQL procedure successfully completed.

SQL>
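If you want to double-check the ACL configuration, the network ACL data dictionary views can be queried. A quick sketch (both views exist in 11gR2; the exact output depends on your setup):

```sql
SQL> select host, lower_port, upper_port, acl from dba_network_acls;
SQL> select acl, principal, privilege, is_grant from dba_network_acl_privileges;
```

The first query should show ftp.xml assigned to host el5-backup; the second shows which principals hold the connect privilege.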

For connecting to el5-backup from the production database we will be using the FTP API developed by Tim Hall. You need to download the FTP package and the package body creation scripts and run them as user logship.
SQL> conn logship/logship;
Connected.
SQL> @ftp.pks;

Package created.

No errors.
SQL> @ftp.pkb;

Package body created.

No errors.
SQL>

Setup archivelog directory and test FTP transfer

We move on by creating a directory object within the production database that points to the location of the archivelog files.

[oracle@el5-prd ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sun Dec 11 08:44:57 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> create directory arc_dir as '/u01/app/oracle/fast_recovery_area/ORCL/archivelog';

Directory created.

SQL> grant read on directory arc_dir to logship;

Grant succeeded.

SQL>

It is a good idea to test the FTP communication from within the database. You can create a dummy test file in the archivelog directory:
[oracle@el5-prd ~]$ cat >> /u01/app/oracle/fast_recovery_area/ORCL/archivelog/testfile.txt << EOF
> FTP test file
> EOF
[oracle@el5-prd ~]$

You can then connect as user logship and run the following PL/SQL block:
declare
  l_conn utl_tcp.connection;
begin
  l_conn := ftp.login('el5-backup','21','oracle','welcome1');
  ftp.put(p_conn      => l_conn,
          p_from_dir  => 'ARC_DIR',
          p_from_file => 'testfile.txt',
          p_to_file   => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/testfile.txt');
  ftp.logout(l_conn);
end;
/

If everything goes fine, testfile.txt will appear on el5-backup.


[oracle@el5-backup ~]$ cd /u01/app/oracle/fast_recovery_area/ORCL/archivelog/
[oracle@el5-backup archivelog]$ cat testfile.txt
FTP test file
[oracle@el5-backup archivelog]$

Setup a file watcher on el5-prd

We will be using a database File Watcher for detecting new archivelog files and triggering the FTP transfer. First we log in as user logship and create a table for storing the detected archivelog files and the date of the attempted transfer. This table is needed only for our own convenience; it can be used to check for missing log files if anything goes wrong.
SQL> conn logship/logship;
Connected.
SQL> create sequence transfered_logs_seq start with 1 increment by 1 cache 20 nocycle;

Sequence created.

SQL> create table transfered_logs (id number, transfer_date date, file_name varchar2(4000), error char(1));

Table created.

SQL>

Next we set the file detection interval to 1 minute. Of course, you can tune this to match your archivelog generation interval more closely.
SQL> conn / as sysdba
Connected.
SQL> exec dbms_scheduler.set_attribute('file_watcher_schedule', 'repeat_interval', 'freq=minutely; interval=1');

PL/SQL procedure successfully completed.

SQL>

In order to have access to the archivelog directory, the file watcher needs an OS user account. We will create a credential that the watcher can use and provide it with the oracle user's username and password. In my demo install the oracle password is welcome1.
SQL> exec dbms_scheduler.create_credential(credential_name => 'local_credential', username => 'oracle', password => 'welcome1');

PL/SQL procedure successfully completed.

SQL>

The final preparation is to create a PL/SQL procedure that the file watcher will call upon detecting a new archivelog file. The procedure I am using looks like this:
create or replace procedure trasnfer_arc_log(p_sched_result SYS.SCHEDULER_FILEWATCHER_RESULT) as
  v_transfer_id number;
  v_file_name   varchar2(4000);
  v_ftp_conn    utl_tcp.connection;
begin
  v_transfer_id := transfered_logs_seq.nextval;
  v_file_name   := p_sched_result.actual_file_name;
  v_ftp_conn    := ftp.login('el5-backup','21','oracle','welcome1');
  ftp.put(p_conn      => v_ftp_conn,
          p_from_dir  => 'ARC_DIR',
          p_from_file => v_file_name,
          p_to_file   => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/'||v_file_name);
  ftp.logout(v_ftp_conn);
  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, null);
  commit;
exception
  when others then
    insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, 'Y');
    commit;
end;
/

This procedure will try to FTP the file that the watcher passes to it. If the operation is successful, the procedure inserts a record into the TRANSFERED_LOGS table with the file name and the date and time of its transfer. If an error occurs, the procedure sets the ERROR column for the record to 'Y'. Let's create this procedure in the logship schema.
SQL> conn logship/logship
Connected.
SQL> create or replace procedure trasnfer_arc_log(p_sched_result SYS.SCHEDULER_FILEWATCHER_RESULT) as
  2  v_transfer_id number;
  3  v_file_name varchar2(4000);
  4  v_ftp_conn utl_tcp.connection;
  5  begin
  6  v_transfer_id := transfered_logs_seq.nextval;
  7  v_file_name := p_sched_result.actual_file_name;
  8  v_ftp_conn := ftp.login('el5-backup','21','oracle','welcome1');
  9  ftp.put(p_conn => v_ftp_conn, p_from_dir => 'ARC_DIR', p_from_file => v_file_name, p_to_file => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/'||v_file_name);
 10  ftp.logout(v_ftp_conn);
 11  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, null);
 12  commit;
 13  exception when others then
 14  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, 'Y');
 15  commit;
 16  end;
 17  /

Procedure created.

SQL> show errors
No errors.
SQL>

Time to create the file watcher. This is done by calling the CREATE_FILE_WATCHER procedure from the DBMS_SCHEDULER package. I call the procedure with the following parameters.
BEGIN
  DBMS_SCHEDULER.create_file_watcher(
    file_watcher_name => 'arc_watcher',
    directory_path    => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog',
    file_name         => '*.arc',
    credential_name   => 'local_credential',
    destination       => NULL,
    enabled           => FALSE);
END;
/

Here is the execution:


SQL> conn / as sysdba
Connected.
SQL> exec dbms_scheduler.create_file_watcher(file_watcher_name => 'arc_watcher', directory_path => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog', file_name => '*.arc', credential_name => 'local_credential', destination => NULL, enabled => FALSE);

PL/SQL procedure successfully completed.

SQL>

Next we create a program that will bind the file watcher and the TRANSFER_ARC_LOG PL/SQL procedure.
SQL> exec dbms_scheduler.create_program(program_name => 'arc_watcher_prog', program_type => 'stored_procedure', program_action => 'logship.trasnfer_arc_log', number_of_arguments => 1, enabled => FALSE);

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.define_metadata_argument(program_name => 'arc_watcher_prog', metadata_attribute => 'event_message', argument_position => 1);

PL/SQL procedure successfully completed.

SQL>

The final touch is creating a job for the ARC_WATCHER_PROG.


SQL> exec dbms_scheduler.create_job(job_name => 'arc_watcher_job', program_name => 'arc_watcher_prog', event_condition => NULL, queue_spec => 'arc_watcher', auto_drop => FALSE, enabled => FALSE);

PL/SQL procedure successfully completed.

SQL>

An important step is to set a value for the PARALLEL_INSTANCES attribute of our job. We will set it to TRUE to let the scheduler run multiple instances of the job. If you omit this step, the system will process archivelogs one at a time, and while it is busy with one file it will simply ignore any new archivelogs that appear. You definitely do not want this to happen.
SQL> exec dbms_scheduler.set_attribute('arc_watcher_job','parallel_instances',TRUE);

PL/SQL procedure successfully completed.

SQL>

Now that everything is finally in place, we can enable the watcher, its program and the job. This is done by executing the DBMS_SCHEDULER.ENABLE procedure.
SQL> exec dbms_scheduler.enable('arc_watcher');

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.enable('arc_watcher_prog');

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.enable('arc_watcher_job');

PL/SQL procedure successfully completed.

SQL>
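You can also verify from the data dictionary that all three objects are enabled. A sketch (view and column names as documented for 11gR2):

```sql
SQL> select file_watcher_name, enabled from dba_scheduler_file_watchers;
SQL> select job_name, enabled, state from dba_scheduler_jobs where job_name = 'ARC_WATCHER_JOB';
```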

Test that archivelogs are shipped

To test whether archivelog transfers are happening, we will take a look at the archivelog directory on the failover server.
[oracle@el5-backup ~]$ ls -la /u01/app/oracle/fast_recovery_area/ORCL/archivelog/ total 5664 drwxr-xr-x 2 oracle oinstall 4096 Dec 27 07:56 . drwxr-xr-x 4 oracle oinstall 4096 Dec 27 07:22 .. -rw-r----- 1 oracle oinstall 1043968 Dec 27 07:52 1_10_769951554.arc -rw-r----- 1 oracle oinstall 1701888 Dec 27 07:52 1_7_769951554.arc -rw-r----- 1 oracle oinstall 544256 Dec 27 07:52 1_8_769951554.arc -rw-r----- 1 oracle oinstall 2481152 Dec 27 07:52 1_9_769951554.arc [oracle@el5-backup ~]$

We then execute ALTER SYSTEM SWITCH LOGFILE on el5-prd.


SQL> alter system switch logfile;

System altered.

SQL>

We connect with the logship user and check the contents of TRANSFERED_LOGS.
SQL> conn logship/logship;
Connected.
SQL> select count(*) from transfered_logs;

  COUNT(*)
----------
         0

SQL>

OK, the archivelog directory is checked at a 60-second interval, so you might have to wait a bit. After one minute at most, the new file should be detected and transferred.
SQL> select count(*) from transfered_logs;

  COUNT(*)
----------
         1

SQL>
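Because the procedure records failures as well, the same table doubles as a simple monitoring point. For example, to list transfers that ended with an error:

```sql
SQL> select id, transfer_date, file_name from transfered_logs where error = 'Y';
```

An empty result means all detected archivelogs were shipped successfully.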

The new log was detected and a transfer attempt was made. Check the archivelog directory on el5-backup again to see if the file is there.
[oracle@el5-backup ~]$ ls -la /u01/app/oracle/fast_recovery_area/ORCL/archivelog/
total 6920
drwxr-xr-x 2 oracle oinstall    4096 Dec 27 08:00 .
drwxr-xr-x 4 oracle oinstall    4096 Dec 27 07:22 ..
-rw-r----- 1 oracle oinstall 1043968 Dec 27 07:52 1_10_769951554.arc
-rw-r--r-- 1 oracle oinstall 1282048 Dec 27 08:00 1_11_769951554.arc
-rw-r----- 1 oracle oinstall 1701888 Dec 27 07:52 1_7_769951554.arc
-rw-r----- 1 oracle oinstall  544256 Dec 27 07:52 1_8_769951554.arc
-rw-r----- 1 oracle oinstall 2481152 Dec 27 07:52 1_9_769951554.arc
[oracle@el5-backup ~]$

The logfile appears as expected. This concludes the detect-and-transfer part of our configuration.

A mechanism to apply and delete the shipped logs

Having the log files transferred to a failover server is not enough. If you really want an identical copy that is ready to take over the primary role, you have to apply the database changes described in the logs. The easiest way is to simply start RMAN and apply the log files manually (do not forget to register your newly transferred archivelogs with RMAN; you might want to use something like catalog archivelog start with path_to_your_archivelogs).
[oracle@el5-backup ~]$ rman target=/

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Dec 27 11:24:16 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1297199097)

RMAN> recover database noredo;

Starting recover at 17-DEC-11
using channel ORA_DISK_1
Starting recover at 17-DEC-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

starting media recovery

archived log for thread 1 with sequence 8 is already on disk as file /u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_8_769951554.arc
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_8_769951554.arc thread=1 sequence=8
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_9_769951554.arc thread=1 sequence=9
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_10_769951554.arc thread=1 sequence=10
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_11_769951554.arc thread=1 sequence=11
unable to find archived log
archived log thread=1 sequence=12
Finished recover at 17-DEC-11

You can then open the failover database and use it in place of the production database by executing:
alter database open resetlogs

The thing is, you probably want to automate the process. This automation cannot happen in the failover database itself, as it is not really operational (it is not in an open state). You will probably go with some kind of OS-level automation, but this will be platform dependent. For GNU/Linux environments, you can create a simple shell script that looks like this:
rman target / nocatalog << EOF
run {
  catalog archivelog start with '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/' noprompt;
  recover database noredo;
  delete noprompt force archivelog until time 'SYSDATE-7';
}
exit
EOF
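One way to put this in place is to save the RMAN commands as an executable script and schedule it from cron. Below is a minimal sketch; the script name apply_logs.sh, its location in the oracle user's home directory and the 30-minute interval are arbitrary choices, and ORACLE_HOME/ORACLE_SID must be adjusted to your installation:

```shell
# Save the RMAN apply script in the oracle user's home directory
# (apply_logs.sh is a hypothetical name).
cat > "$HOME/apply_logs.sh" << 'SCRIPT_EOF'
#!/bin/sh
# Environment for the failover instance; adjust to your installation.
ORACLE_SID=ORCL; export ORACLE_SID
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_orcl; export ORACLE_HOME

$ORACLE_HOME/bin/rman target / nocatalog << EOF
run {
  catalog archivelog start with '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/' noprompt;
  recover database noredo;
  delete noprompt force archivelog until time 'SYSDATE-7';
}
exit
EOF
SCRIPT_EOF
chmod +x "$HOME/apply_logs.sh"

# A crontab entry for the oracle user to run it every 30 minutes
# (left commented out; review before applying):
# (crontab -l 2>/dev/null; echo '*/30 * * * * $HOME/apply_logs.sh >> $HOME/apply_logs.log 2>&1') | crontab -
```

The quoted heredoc delimiter keeps $ORACLE_HOME literal in the written script, so it is resolved when cron runs it, not when the file is created.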

This script will call RMAN, apply the received archivelogs and delete all log files that are older than 7 days (I keep the others just in case). You can then set up a cron job to run the script at an appropriate interval, and you should not have to worry about managing the archivelogs manually.

Final remarks

In this tutorial I showed you how to build a platform-independent archivelog shipping mechanism that does all the work from within the database. This approach has its limitations and is not in any way a substitute for Data Guard and the other recovery features of Enterprise Edition. It is just a simple workaround for when you are forced to use Database SE and are looking for a simple way to be better protected from failures. There are several areas for improvement in this mechanism, especially when it comes to security. Keep in mind that FTP is not really secure, so if you are dealing with sensitive data you might want to consider using SFTP or something else that provides encryption. Another issue is keeping plaintext passwords in the TRANSFER_ARC_LOG procedure (you might want to wrap this one) and in the database dictionary.
