
PURPOSE
-------
The procedures below describe how to apply a patch set to your current release
of Oracle in a configuration that includes one or more physical standby
databases. This information applies only to release 9.0.1 and higher
installations of Oracle on UNIX. For logical standby, please refer to
Note 210989.1 'Data Guard 9i Applying Patch Sets / Interim Patches with
Logical Standby Database in Place'.

SCOPE & APPLICATION
-------------------
These procedures are to be used in conjunction with the ones contained in the
Patch Set readme. This document complements and supplements the existing Patch
Set readme; it does not replace the current documentation. These instructions
must also be used for interim patch installs and de-installs IF the interim
patch contains any SQL scripts that must be run after the patch install.

If you are upgrading a system that has Real Application Clusters installed,
most of the actions described should be performed on only one node of the
system. Perform the actions on only one node unless instructed otherwise in a
particular step.

Procedure to Apply a Patch Set with Physical Standby Database in Place
======================================================================

NOTE: If you are using the Data Guard Broker, you should either disable the
Data Guard Broker configuration

  DGMGRL> disable configuration;

or stop the Data Guard Broker (set db_broker_start=false) during the upgrade.
You can enable the Data Guard Broker configuration

  DGMGRL> enable configuration;

or restart the Data Guard Broker (set db_broker_start=true) again once the
upgrade has completed successfully.

1. Log in to the oracle account on both the primary and standby hosts and
   make sure the environment is set to the correct ORACLE_HOME and ORACLE_SID.

2. On both the primary and standby hosts, uncompress and untar the downloaded
   patch set / interim patch file into a new directory.
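Step 2 above can be sketched as follows. This is a minimal, self-contained sketch: the patch file name `p9999999_9204.tar.gz` is a hypothetical stand-in (real patch set archive names vary per patch number and platform), and the download is simulated in a temporary directory so the commands can be run anywhere.

```shell
# Work in a throwaway staging area; on a real host you would cd to your
# actual download directory instead.
STAGE=$(mktemp -d)
cd "$STAGE"

# Simulate a downloaded, compressed patch set tarball (hypothetical name,
# stand-in for the file pulled from MetaLink / My Oracle Support).
mkdir patchset_contents
echo "dummy" > patchset_contents/README.html
tar cf p9999999_9204.tar patchset_contents
gzip p9999999_9204.tar            # produces p9999999_9204.tar.gz

# Step 2 proper: uncompress and untar into a NEW directory.
mkdir patch_stage
gunzip -c p9999999_9204.tar.gz | (cd patch_stage && tar xf -)

# The extracted patch set contents are now under patch_stage/.
ls patch_stage/patchset_contents
```

Extracting into a fresh directory (rather than over an existing staging area) keeps patch sets from different releases from mixing.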

3. Shut down the existing Oracle Server instance on the primary host with
   normal or immediate priority. Stop all listeners, agents and other
   processes running against the ORACLE_HOME. If using Real Application
   Clusters, perform this step on all nodes.

     SQL> shutdown immediate
     % agentctl stop
     % lsnrctl stop

4. Cancel managed recovery on the standby database.

     SQL> recover managed standby database cancel;
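Before cancelling managed recovery (and again after it is restarted in step 10), it can be helpful to check the state of the standby's recovery and transport processes. A sketch using the standard dynamic performance views (exact column sets vary slightly by release):

```sql
-- On the standby: list the managed standby processes (MRP = managed
-- recovery process, RFS = remote file server) and the log sequence
-- each one is working on.
select process, status, sequence#
from   v$managed_standby;

-- On the primary: confirm each remote archive destination is still valid.
select dest_id, status, error
from   v$archive_dest
where  status <> 'INACTIVE';
```

An MRP entry in `v$managed_standby` confirms managed recovery is active; its absence after step 4 confirms the cancel took effect.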

5. Shut down the standby instance on the standby host. Stop all listeners,
   agents and other processes running against the ORACLE_HOME. If using Real
   Application Clusters, perform this step on all nodes.

     SQL> shutdown immediate
     % agentctl stop
     % lsnrctl stop

6. Run the Installer and install the patch set on both the primary and
   standby hosts.

     % ./runInstaller

   If this is an interim patch, run opatch per the patch README. If using
   Real Application Clusters, be sure the install has propagated to the
   other nodes if using private ORACLE_HOMEs. Please see the Patch Set
   readme for specific instructions.

7. Once the patch set / patch has been installed on all hosts / nodes, start
   the listener on the standby host.

     % lsnrctl start

8. Start the standby database in nomount mode.

     % sqlplus "/ as sysdba"
     SQL> startup nomount

9. Mount the standby database.

     SQL> alter database mount standby database;

10. Place the standby database in managed recovery mode.

     SQL> recover managed standby database nodelay disconnect;

11. Start the primary instance on the primary host.

     % sqlplus "/ as sysdba"
     SQL> startup migrate

    If this is an interim patch and no post-installation scripts are
    required, you can perform a normal startup here and skip to step 18
    (switch logfiles after you have started the primary database and verify
    as described in step 18).

12. Ensure that remote archiving to the standby database is functioning
    correctly by switching logfiles on the primary and verifying that
    v$archive_dest.status is valid. If you are not performing remote
    archiving, make note of the current archive log sequence.

     SQL> alter system archive log current;
     SQL> select dest_id, status from v$archive_dest;

13. On the primary instance run the following script:

     SQL> @?/rdbms/admin/catpatch.sql

    For an interim patch, run any scripts as outlined in the README.

14. Once the catpatch.sql script / patch SQL scripts complete, make note of
    the current log sequence and issue the following command:

     SQL> alter system archive log current;

15. Verify the standby database has been recovered to the log sequence noted
    in step 14.

     SQL> select max(sequence#) from v$log_history;

16. On the primary instance run the following command:

     SQL> alter system disable restricted session;

17. Complete the remainder of the "Post Install Actions" from the Patch Set
    readme on the primary host. Please note that it is not necessary to shut
    down the standby in conjunction with the primary during the "Post
    Install Actions".

18. Once all "Post Install Actions" have been completed, verify the standby
    database has been recovered to the last archive log produced by the
    primary.

    On the primary:

     SQL> select max(sequence#) from v$archived_log;

    On the standby:

     SQL> select max(sequence#) from v$log_history;

RELATED DOCUMENTS
-----------------

Patch Set Notes, Oracle9i Patch Set
Oracle9i Data Guard Concepts and Administration
Oracle9i Data Guard Broker
Oracle9i SQL Reference

Complete checklist for manual upgrades to 10gR1 (10.1.0.x) [ID 263809.1]
Modified 10-SEP-2010   Type BULLETIN   Status PUBLISHED

PURPOSE
-------
This document is intended as a guideline and checklist when manually
upgrading Oracle 8, Oracle 8i or Oracle 9i to Oracle 10gR1 (10.1.0.x).

SCOPE & APPLICATION
-------------------
Database administrators

UPGRADE CHECKLIST
-----------------

UPGRADING THE DATABASE
----------------------

1. -----------------------------------------------------------------------------
Upgrade path for Oracle 7:
If your old release is 7.3.4 or lower, you must first upgrade to Oracle 8.0.6
or Oracle 8.1.7. After the upgrade to 8.0.6 or 8.1.7 is done, you can
directly upgrade your database to 10gR1.

Upgrade path for Oracle 8.0:
If your old release is 8.0.5 or lower (e.g. 8.0.4), then direct upgrade is
NOT supported. You must first upgrade this version to 8.0.6. Once you are on
8.0.6, you can directly upgrade your database to 10gR1.

Upgrade path for Oracle 8i:
If your old release is 8.1.5 or 8.1.6, then direct upgrade to Oracle 10g is
NOT supported. You must first upgrade this version to 8.1.7. Once you are on
8.1.7, you can directly upgrade your database to Oracle 10gR1.

Upgrade path for Oracle 9i:
Direct upgrade is supported.

If your old release is 8.0.5 or lower, see Note 133920.1 for manually
upgrading the database to 8.0.6 or higher. You can also use that note to
first upgrade your database from Oracle 8.1.5 or Oracle 8.1.6 to
Oracle 8.1.7.

What version is running? What options are installed?

  Select * from v$version;
  Select * from v$option;

2. -----------------------------------------------------------------------------
When upgrading to Oracle Database 10gR1, optimizer statistics are collected
for dictionary tables that lack statistics. This statistics collection can be
time consuming for databases with a large number of dictionary tables, but it
only occurs for tables that lack statistics or are significantly changed
during the upgrade.

For databases that are upgraded from Oracle9i, it is possible to decrease the
downtime during the upgrade by collecting statistics for the dictionary prior
to the upgrade. The following two scripts collect statistics for dictionary
objects in Oracle9i. This process should be tested on a test database, just
like any other aspect of the upgrade. Also, some schemas referenced in these
scripts may not exist if some database components have not been installed.

The first script collects stats for the system component schemas. The stats
collection may report errors if a particular component schema does not exist
in the database. This can happen if a component is not installed or if it is
invalid.
This script must be run connected AS SYSDBA using SQL*Plus:

  sqlplus /nolog
  SQL> connect sys/passwd_for_sys as sysdba
  SQL> spool gdict

  grant analyze any to sys;

  exec dbms_stats.gather_schema_stats('WMSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('MDSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('CTXSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('XDB',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('WKSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('LBACSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('OLAPSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('DMSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('ODM',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('ORDSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('ORDPLUGINS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('SI_INFORMTN_SCHEMA',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('OUTLN',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('DBSNMP',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('SYSTEM',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
  exec dbms_stats.gather_schema_stats('SYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

  spool off

The second script creates the stats table 'dictstattab' and exports the stats
for the RDBMS component schemas into it. The export may report errors if a
particular component schema does not exist in the database. This can happen
if a component is not installed or if it is invalid. Having the stats in a
table is useful in case you want to import the stats back later. For example,
the following statements import the stats for the SYS schema after deleting
the existing stats:

  exec dbms_stats.delete_schema_stats('SYS');
  exec dbms_stats.import_schema_stats('SYS','dictstattab');

This script must also be run connected AS SYSDBA using SQL*Plus:

  sqlplus /nolog
  SQL> connect sys/passwd_for_sys as sysdba

  SQL> spool sdict

  grant analyze any to sys;

  exec dbms_stats.create_stat_table('SYS','dictstattab');
  exec dbms_stats.export_schema_stats('WMSYS','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('MDSYS','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('CTXSYS','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('XDB','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('WKSYS','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('LBACSYS','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('OLAPSYS','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('DMSYS','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('ODM','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('ORDSYS','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('ORDPLUGINS','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('SI_INFORMTN_SCHEMA','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('OUTLN','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('DBSNMP','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('SYSTEM','dictstattab',statown => 'SYS');
  exec dbms_stats.export_schema_stats('SYS','dictstattab',statown => 'SYS');

  spool off

3. -----------------------------------------------------------------------------
Log in to the system as the owner of the ORACLE_HOME directory of the new
Oracle Database 10g release and copy the file
ORACLE_HOME/rdbms/admin/utlu101i.sql of the new Oracle Database 10gR1 release
to a directory outside of the ORACLE_HOME, such as the temporary directory on
your system. Make a note of the new location of this file.

4. -----------------------------------------------------------------------------
Change to the directory outside of the ORACLE_HOME directory that you copied
files to in step 3. Start SQL*Plus and connect to the database instance as a
user with SYSDBA privileges. Then run and spool the utlu101i.sql file. Please
note that the database must be running in normal mode in the old release. The
script must be run in the context of the old release, not the 10gR1 release.

  sqlplus /nolog
  SQL> connect sys/passwd_for_sys as sysdba
  SQL> SPOOL info.log
  SQL> @utlu101i.sql
  SQL> SPOOL OFF

Then check the spool file and examine the output of the upgrade information
tool. The sections that follow describe the output of the Upgrade Information
Tool (utlu101i.sql).

Database
This section displays global database information about the current database,
such as the database name, release number, and compatibility level. A warning
is displayed if the COMPATIBLE initialization parameter needs to be adjusted
before the database is upgraded.

Log files
This section displays a list of redo log files in the current database whose
size is less than 4 MB. For each log file, the file name, group number, and
recommended size are displayed. New files of at least 4 MB (preferably 10 MB)
need to be created in the current database. Any redo log files less than 4 MB
must be dropped before the database is upgraded.

Tablespaces
This section displays a list of tablespaces in the current database. For each
tablespace, the tablespace name, owner, and minimum required size are
displayed. In addition, a message is displayed if the tablespace is adequate
for the upgrade. If the tablespace does not have enough free space, then
space must be added to the tablespace in the current database. Tablespace
adjustments need to be made before the database is upgraded.

Options
This section displays a list of options in the current database that must be
available in the new Oracle Database 10g release before the database is
upgraded.

Update Parameters
This section displays a list of initialization parameters in the parameter
file of the current database that must be adjusted before the database is
upgraded. The adjustments need to be made to the parameter file after it is
copied to the new Oracle Database 10gR1 release.

Deprecated Parameters
This section displays a list of initialization parameters in the parameter
file of the current database that are deprecated in the new Oracle Database
10gR1 release.
Obsolete Parameters
This section displays a list of initialization parameters in the parameter
file of the current database that are obsolete in the new Oracle Database
10gR1 release. Obsolete initialization parameters need to be removed from the
parameter file before the database is upgraded.

Components
This section displays a list of database components in the new Oracle
Database 10gR1 release that will be upgraded or installed when the current
database is upgraded.

SYSAUX Tablespace
This section displays the minimum required size for the SYSAUX tablespace,
which is required in Oracle Database 10gR1. The SYSAUX tablespace must be
created after the new Oracle Database 10g release is started and BEFORE the
upgrade scripts are invoked.

Note: Because of Bug 3405535, check the component status and version in the
9.2 database prior to upgrading, e.g.:

  select comp_name, status, version from dba_registry;

5. -----------------------------------------------------------------------------
The upgrade will leave all objects (packages, views, ...) invalid, except for
tables. All other objects must be recompiled manually. List all objects that
are not VALID before the upgrade; this is the list of fatal objects.

  Select substr(owner,1,12) owner,
         substr(object_name,1,30) object,
         substr(object_type,1,30) type, status
  from   dba_objects where status <> 'VALID';

To recompile all invalid objects before upgrading, run the script utlrp.sql
in the $ORACLE_HOME/rdbms/admin directory. This script recompiles all invalid
PL/SQL in the database, including views.

  $ cd $ORACLE_HOME/rdbms/admin
  $ sqlplus sys/<password for SYS> as sysdba
  SQL> @utlrp.sql

Run the script and then rerun the query to get the remaining invalid objects:

  spool invalid_pre.lst
  Select substr(owner,1,12) owner,
         substr(object_name,1,30) object,
         substr(object_type,1,30) type, status
  from   dba_objects where status <> 'VALID';
  spool off

This last query returns, in the file 'invalid_pre.lst', a list of all objects
that cannot be recompiled before the upgrade. No dictionary objects should be
invalid.

6. -----------------------------------------------------------------------------
Verify the kernel parameters according to the installation guide of the new
version. Example for Solaris:

  $ cat /etc/system

7. ------------------------------------------------------------------------------

Ensure ORACLE_SID is set to the instance you want to upgrade.

  echo $ORACLE_SID
  echo $ORACLE_HOME

8. -----------------------------------------------------------------------------
As of Oracle 9i the National Character Set (NLS_NCHAR_CHARACTERSET) is
limited to UTF8 and AL16UTF16. See Note 276914.1 'The National Character Set
in Oracle 9i and 10g'. Any other NLS_NCHAR_CHARACTERSET is no longer
supported.

When upgrading, the value of NLS_NCHAR_CHARACTERSET is based on the value
currently used in the old version. If the NLS_NCHAR_CHARACTERSET is UTF8 then
it will stay UTF8. In all other cases the NLS_NCHAR_CHARACTERSET is changed
to AL16UTF16 and -if used- N-type data (= data in columns using NCHAR,
NVARCHAR2 or NCLOB) may need to be converted. The change itself is done in
step 31 by running the upgrade script.

If you are NOT using N-type columns *for user data* then simply go to step 9;
no further action is required. (So if

  select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS
  where DATA_TYPE in ('NCHAR','NVARCHAR2','NCLOB')
  and OWNER not in ('SYS','SYSTEM');

returns no rows, go to step 9.)

If you have N-type columns *for user data* then check:

  SQL> select * from nls_database_parameters
       where parameter = 'NLS_NCHAR_CHARACTERSET';

If you are using N-type columns AND your National Character Set is UTF8 or is
in the following list:

  JA16SJISFIXED, JA16EUCFIXED, JA16DBCSFIXED, ZHT32TRISFIXED,
  KO16KSC5601FIXED, KO16DBCSFIXED, US16TSTFIXED, ZHS16CGB231280FIXED,
  ZHS16GBKFIXED, ZHS16DBCSFIXED, ZHT16DBCSFIXED, ZHT16BIG5FIXED,
  ZHT32EUCFIXED

then also simply go to step 9. The conversion of the user data itself will
then be done in step 38.

If you are using N-type columns AND your National Character Set is NOT UTF8
and NOT in the list above (your current NLS_NCHAR_CHARACTERSET is for example
US7ASCII, WE8ISO8859P1, CL8MSWIN1251, ...) then you have to either:

* change the tables to use CHAR, VARCHAR2 or CLOB instead of the N-type, or

* export the table(s) containing N-type columns and truncate those tables
  before migrating, then import them afterwards. The recommended NLS_LANG
  during export is simply the NLS_CHARACTERSET, not the
  NLS_NCHAR_CHARACTERSET.

9. -----------------------------------------------------------------------------
Check for corruption in the dictionary. Use the following commands in
SQL*Plus, connected as SYS:

  Set verify off
  Set space 0
  Set heading off
  Set feedback off
  Set pages 1000
  Spool analyze.sql
  Select 'Analyze '||object_type||' '||object_name||' validate structure;'
  from   dba_objects
  where  owner = 'SYS'
  and    object_type in ('INDEX','TABLE','CLUSTER');
  spool off

This creates a script called analyze.sql. Run the script; it should not
return any errors.

10. ----------------------------------------------------------------------------
Ensure that all snapshot refreshes have completed successfully and that
replication is stopped.

  $ sqlplus SYS/<password for SYS>
  SQL> Select distinct(trunc(last_refresh))
       from dba_snapshot_refresh_times;

11. ----------------------------------------------------------------------------
Stop the listener for the database:

  $ lsnrctl
  LSNRCTL> stop <listenername>

12. ----------------------------------------------------------------------------
Ensure no files need media recovery:

  $ sqlplus SYS/<password for SYS>
  SQL> Select * from v$recover_file;

This should return no rows.

13. ----------------------------------------------------------------------------
Ensure no files are in backup mode:

  Select * from v$backup where status != 'NOT ACTIVE';

This should return no rows.

14. ----------------------------------------------------------------------------
Resolve any outstanding unresolved distributed transactions:

  Select * from dba_2pc_pending;

If this returns rows you should do the following:

  Select local_tran_id from dba_2pc_pending;
  Execute dbms_transaction.purge_lost_db_entry('<LOCAL_TRAN_ID>');
  Commit;

15. ----------------------------------------------------------------------------
Disable all batch and cron jobs.

16. ----------------------------------------------------------------------------
Ensure the users SYS and SYSTEM have SYSTEM as their default tablespace:

  Select username, default_tablespace from dba_users
  where  username in ('SYS','SYSTEM');

To modify, use:

  Alter user sys default tablespace SYSTEM;
  Alter user system default tablespace SYSTEM;

17. ----------------------------------------------------------------------------
Optionally, ensure that AUD$ is in the SYSTEM tablespace when auditing is
enabled:

  Select tablespace_name from dba_tables where table_name = 'AUD$';

18. ----------------------------------------------------------------------------
Note down where all control files are located:

  Select * from v$controlfile;

19. ----------------------------------------------------------------------------
Note down all SYSDBA users:

  Select * from v$pwfile_users;

If a password file is used, copy it to the new location. On UNIX the default
is $ORACLE_HOME/dbs/orapw<SID>; on Windows NT it is
%ORACLE_HOME%\database\orapw<SID>.

20. ----------------------------------------------------------------------------
Shut down the database:

  $ sqlplus SYS/<password for SYS>
  SQL> Shutdown immediate

21. ----------------------------------------------------------------------------
Change the init.ora file:

- Make a backup of the init.ora file.
- Comment out obsolete parameters (list in appendix B).
- Change all deprecated parameters (list in appendix A).
- Make sure the COMPATIBLE initialization parameter is properly set for the
  new Oracle Database 10gR1 release. The Upgrade Information Tool displays a
  warning in the Database section if COMPATIBLE is not properly set.
- If you have set the parameter NLS_LENGTH_SEMANTICS to CHAR, change the
  value to BYTE during the upgrade.
- Verify that the parameter DB_DOMAIN is set properly.
- Make sure the PGA_AGGREGATE_TARGET initialization parameter is set to at
  least 24 MB.
- Make sure the JAVA_POOL_SIZE initialization parameter is set to at least
  48 MB.
- Ensure there is a value for DB_BLOCK_SIZE.
- Comment out the JOB_QUEUE_PROCESSES parameter, put in a new entry and set
  it explicitly to zero during the upgrade.
- Comment out the AQ_TM_PROCESSES parameter, put in a new entry and set it
  explicitly to zero during the upgrade.
- On Windows operating systems, change the BACKGROUND_DUMP_DEST and
  USER_DUMP_DEST initialization parameters that point to RDBMS80 or any
  other environment variable to point to the following directories instead:
  BACKGROUND_DUMP_DEST to ORACLE_BASE\oradata\DB_NAME and USER_DUMP_DEST to
  ORACLE_BASE\oradata\DB_NAME\archive.
- Make sure all path names in the parameter file are fully specified. You
  should not have relative path names in the parameter file.
- If you are using a cluster database, set the parameter
  CLUSTER_DATABASE=FALSE during the upgrade.
- If you are upgrading a cluster database, then modify the initdb_name.ora
  file in the same way that you modified the parameter file.
- Check the MAX_ENABLED_ROLES parameter value. Increase it if the number of
  roles in the database is close to its value. When upgrading to higher
  versions, new roles might be added; if the database already contains a
  high number of roles, upgrade scripts can fail with error ORA-01925 as
  indicated in Note 261632.1.
- Ensure that SHARED_POOL_SIZE and LARGE_POOL_SIZE are at least 150 MB.
- Ensure that PLSQL_V2_COMPATIBILITY is unset for the upgrade, otherwise
  this could cause problems during a later stage. See Note 170282.1.

22. ----------------------------------------------------------------------------

Check for adequate free space on the archive log destination file systems.

23. ----------------------------------------------------------------------------
Ensure the NLS_LANG variable is set correctly:

  $ echo $NLS_LANG

24. ----------------------------------------------------------------------------
If needed, copy the listener.ora and tnsnames.ora to the new location (when
no TNS_ADMIN environment variable is used):

  cp $ORACLE_HOME/network/admin/*.ora <NEW_ORACLE_HOME>/network/admin

25. ----------------------------------------------------------------------------
If your operating system is Windows NT, delete your services with the ORADIM
utility of your old Oracle version. Stop the OracleServiceSID Oracle service
of the database you are upgrading, where SID is the instance name. For
example, if your SID is ORCL, then enter the following at a command prompt:

  C:\> NET STOP OracleServiceORCL

For Oracle 8.0 this is:

  C:\> ORADIM80 -DELETE -SID <SID>

For Oracle 8i or higher this is:

  C:\> ORADIM -DELETE -SID <SID>

Then create the new Oracle Database 10g service at a command prompt using the
ORADIM command of the new Oracle Database release:

  C:\> ORADIM -NEW -SID SID -INTPWD PASSWORD -MAXUSERS USERS
       -STARTMODE AUTO -PFILE ORACLE_HOME\DATABASE\INITSID.ORA

26. ----------------------------------------------------------------------------
Copy configuration files from the ORACLE_HOME of the database being upgraded
to the new Oracle Database 10gR1 ORACLE_HOME:

If your parameter file resides within the old environment's ORACLE_HOME, then
copy it to the new ORACLE_HOME. By default, Oracle looks for the parameter
file in ORACLE_HOME/dbs on UNIX platforms and in ORACLE_HOME\database on
Windows operating systems. The parameter file can reside anywhere you wish,
but it should not reside in the old environment's ORACLE_HOME after you
upgrade to Oracle Database 10gR1.

If your parameter file is a text-based initialization parameter file with
either an IFILE (include file) or a SPFILE (server parameter file) entry, and
the file specified in the IFILE or SPFILE entry resides within the old
environment's ORACLE_HOME, then copy the file specified by the IFILE or
SPFILE entry to the new ORACLE_HOME. The file specified in the IFILE or
SPFILE entry contains additional initialization parameters.

If you have a password file that resides within the old environment's
ORACLE_HOME, then move or copy the password file to the new Oracle Database
10g ORACLE_HOME. The name and location of the password file are operating
system-specific. On UNIX platforms, the default password file is
ORACLE_HOME/dbs/orapwsid. On Windows operating systems, the default password
file is ORACLE_HOME\database\pwdsid.ora. In both cases, sid is your Oracle
instance ID.

If you are upgrading a cluster database and your initdb_name.ora file
resides within the old environment's ORACLE_HOME, then move or copy the
initdb_name.ora file to the new ORACLE_HOME.

Note: If you are upgrading a cluster database, then perform this step on all
nodes on which this cluster database has instances configured.

27. ----------------------------------------------------------------------------
Update the oratab entry to set the new ORACLE_HOME and disable automatic
startup:

  <SID>:<new ORACLE_HOME>:N

28. ----------------------------------------------------------------------------
Update the environment variables like ORACLE_HOME and PATH:

  $ . oraenv

29. ----------------------------------------------------------------------------
Make sure the following environment variables point to the new release
directories:

- ORACLE_HOME
- PATH
- ORA_NLS10
- ORACLE_BASE
- LD_LIBRARY_PATH
- ORACLE_PATH

  $ env | grep ORACLE_HOME
  $ env | grep PATH
  $ env | grep ORA_NLS10
  $ env | grep ORACLE_BASE
  $ env | grep LD_LIBRARY_PATH
  $ env | grep ORACLE_PATH

For HP-UX systems, verify that the SHLIB_PATH variable points to the new
release directories.

HP-UX:

  $ env | grep SHLIB_PATH

30. ----------------------------------------------------------------------------
PERFORM A FULL COLD BACKUP! You can either do this by manually copying the
files or by signing on to RMAN:

  $ rman target / nocatalog

and issuing the following RMAN commands:

  RUN {
      ALLOCATE CHANNEL chan_name TYPE DISK;
      BACKUP DATABASE FORMAT 'some_backup_directory%U' TAG before_upgrade;
      BACKUP CURRENT CONTROLFILE TO 'save_controlfile_location';
  }

31. ----------------------------------------------------------------------------
Start the database in upgrade mode:

  $ cd $ORACLE_HOME/rdbms/admin
  $ sqlplus /nolog
  SQL> connect sys/passwd_for_sys as sysdba
  SQL> startup upgrade

32. ----------------------------------------------------------------------------
Create a SYSAUX tablespace. In Oracle Database 10gR1, the SYSAUX tablespace
is used to consolidate data from a number of tablespaces that were separate
in previous releases. The SYSAUX tablespace must be created with the
following mandatory attributes:

  ONLINE
  PERMANENT
  READ WRITE
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO

The Upgrade Information Tool (utlu101i.sql in step 4) provides an estimate of
the minimum required size for the SYSAUX tablespace in the SYSAUX Tablespace
section. The following SQL statement would create a 500 MB SYSAUX tablespace
for the database:

  SQL> CREATE TABLESPACE sysaux DATAFILE 'sysaux01.dbf'
       SIZE 500M REUSE
       EXTENT MANAGEMENT LOCAL
       SEGMENT SPACE MANAGEMENT AUTO
       ONLINE;

33. ----------------------------------------------------------------------------
Spool the output so you can look at possible errors after the upgrade:

  SQL> Spool upgrade.log

Run the appropriate script for your version:

  From    To     Script to Run
  =====   ====   =============
  7.x.x   10.1   Not supported
  8.0.5   10.1   Not supported
  8.0.6   10.1   u0800060.sql
  8.1.5   10.1   Not supported
  8.1.6   10.1   Not supported
  8.1.7   10.1   u0801070.sql
  9.0.1   10.1   u0900010.sql
  9.2     10.1   u0902000.sql

You only need to run one script, even if your upgrade spans more than one
release. For example, if your old release was 8.1.7, then you only need to
run u0801070.sql. Each of these scripts is a direct upgrade path from the
version you are on to 10gR1. You do not need to run catalog.sql and
catproc.sql, as these two scripts are called from within the upgrade script.

The upgrade script creates and alters certain data dictionary tables. It also
upgrades or installs the following database components in the new release
10gR1 database: Oracle Database Catalog Views, Oracle Database Packages and
Types, JServer JAVA Virtual Machine, Oracle Database Java Packages, Oracle
XDK, Oracle Real Application Clusters, Oracle Workspace Manager, Oracle
interMedia, Oracle XML Database, OLAP Analytic Workspace, Oracle OLAP API,
OLAP Catalog, Oracle Text, Spatial, Oracle Data Mining, Oracle Label
Security, Messaging Gateway.

Turn off the spooling of script results to the log file:

  SQL> SPOOL OFF

Then check the spool file (the suggested name above was upgrade.log) and
verify that the packages and procedures compiled successfully. Correct any
problems you find in this file and rerun the appropriate upgrade script if
necessary. You can rerun any of the scripts described in this chapter as many
times as necessary.

34. ----------------------------------------------------------------------------

Run utlu101s.sql, specifying the TEXT option:

SQL> @utlu101s.sql TEXT

This is the Post-upgrade Status Tool, which displays the status of the database components in the upgraded database. The Upgrade Status Tool displays output similar to the following:

Oracle Database 10.1 Upgrade Status Tool    MM-DD-YYYY HH:MM:SS
--> Oracle Database Catalog Views           Normal successful completion
--> Oracle Database Packages and Types      Normal successful completion
--> JServer JAVA Virtual Machine            Normal successful completion
--> Oracle XDK                              Normal successful completion
--> Oracle Database Java Packages           Normal successful completion
--> Oracle Real Application Clusters        Normal successful completion
--> Oracle interMedia                       Normal successful completion
--> Oracle Text                             Normal successful completion

35. ----------------------------------------------------------------------------
Restart the database:

SQL> SHUTDOWN IMMEDIATE    (DO NOT USE SHUTDOWN ABORT!)
SQL> STARTUP RESTRICT

Executing this clean shutdown flushes all caches, clears buffers and performs other database housekeeping tasks, which is needed if you want to upgrade specific components.

36. ----------------------------------------------------------------------------
Run the script to recompile invalid PL/SQL modules:

SQL> @utlrp

If there are still objects which are not valid after running the script, run the following:

spool invalid_post.lst
SELECT substr(owner,1,12) owner,
       substr(object_name,1,30) object,
       substr(object_type,1,30) type,
       status
FROM   dba_objects
WHERE  status <> 'VALID';
spool off

Now compare the invalid objects in the file 'invalid_post.lst' with the invalid objects in the file 'invalid_pre.lst' you created in step 5. There should be no invalid dictionary objects.

37. ----------------------------------------------------------------------------
Shut down and restart the database:

$ sqlplus /nolog
SQL> connect sys/passwd_for_sys as sysdba

SQL> SHUTDOWN
SQL> STARTUP RESTRICT

38. ----------------------------------------------------------------------------
A) IF you are NOT using N-type columns for *user* data, that is, the following query (point 8 of this note) did not return rows:

   SELECT DISTINCT owner, table_name
   FROM   dba_tab_columns
   WHERE  data_type IN ('NCHAR','NVARCHAR2','NCLOB')
   AND    owner NOT IN ('SYS','SYSTEM');

then simply:

   $ sqlplus /nolog
   SQL> connect sys/passwd_for_sys as sysdba
   SQL> shutdown immediate

and go to step 39.

B) IF your version 8 NLS_NCHAR_CHARACTERSET was UTF8 (you can look up your previous NLS_NCHAR_CHARACTERSET using this select:

   SELECT * FROM nls_database_parameters WHERE parameter = 'NLS_SAVED_NCHAR_CS';

) then simply:

   $ sqlplus /nolog
   SQL> connect sys/passwd_for_sys as sysdba
   SQL> shutdown immediate

and go to step 39.

C) IF you are using N-type columns for *user* data *AND* your previous NLS_NCHAR_CHARACTERSET was in the following list:

   JA16SJISFIXED, JA16EUCFIXED, JA16DBCSFIXED, ZHT32TRISFIXED,
   KO16KSC5601FIXED, KO16DBCSFIXED, US16TSTFIXED, ZHS16CGB231280FIXED,
   ZHS16GBKFIXED, ZHS16DBCSFIXED, ZHT16DBCSFIXED, ZHT16BIG5FIXED,
   ZHT32EUCFIXED

then the N-type column *data* needs to be converted to AL16UTF16. To upgrade user tables with N-type columns to AL16UTF16, run the script utlnchar.sql:

   $ sqlplus /nolog
   SQL> connect sys/passwd_for_sys as sysdba
   SQL> @utlnchar.sql
   SQL> shutdown immediate

and go to step 39.

D) IF you are using N-type columns for *user* data *AND* your previous NLS_NCHAR_CHARACTERSET was *NOT* in the list shown in C),

then import the data exported in point 8 of this note. The recommended NLS_LANG during import is simply the NLS_CHARACTERSET, not the NLS_NCHAR_CHARACTERSET. After the import:

   $ sqlplus /nolog
   SQL> connect sys/passwd_for_sys as sysdba
   SQL> shutdown immediate

and go to step 39.

39. ----------------------------------------------------------------------------
Now edit the init.ora:
 - Put back the old value for the JOB_QUEUE_PROCESSES parameter.
 - Put back the old value for the AQ_TM_PROCESSES parameter.
 - If you changed the value for NLS_LENGTH_SEMANTICS prior to the upgrade, put the value back to CHAR.
 - If you changed the CLUSTER_DATABASE parameter prior to the upgrade, set it back to TRUE.

40. ----------------------------------------------------------------------------
Start up the database:

SQL> STARTUP

Optionally, create a server parameter file from the initialization parameter file:

SQL> CREATE SPFILE FROM PFILE;

This will create an spfile as a copy of the init.ora file located in the $ORACLE_HOME/dbs directory.

41. ----------------------------------------------------------------------------
Modify the listener.ora file: for the upgraded instance(s), modify the ORACLE_HOME parameter to point to the new ORACLE_HOME.

42. ----------------------------------------------------------------------------
Start the listener:

$ lsnrctl
LSNRCTL> start <listenername>

43. ----------------------------------------------------------------------------
Re-enable cron and batch jobs.
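The init.ora parameter restores described in step 39 can be scripted. This is only an illustrative sketch: the init.ora path and the pre-upgrade values (10, 1, TRUE) below are hypothetical placeholders, not values from this note - substitute your own saved values:

```shell
# Hypothetical path/values for illustration only - adjust for your site.
INITORA=/tmp/initTEST.ora

# Create a sample init.ora as it might look after the upgrade prep steps.
cat > "$INITORA" <<'EOF'
job_queue_processes=0
aq_tm_processes=0
cluster_database=FALSE
EOF

# Restore the (hypothetical) pre-upgrade values; sed keeps a .bak copy.
sed -i.bak \
    -e 's/^job_queue_processes=.*/job_queue_processes=10/' \
    -e 's/^aq_tm_processes=.*/aq_tm_processes=1/' \
    -e 's/^cluster_database=.*/cluster_database=TRUE/' \
    "$INITORA"

cat "$INITORA"
```

Keeping the `.bak` copy means the pre-edit file survives in case a value was restored incorrectly.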

44. ----------------------------------------------------------------------------
Change the oratab entry to use automatic startup:

SID:ORACLE_HOME:Y

--------------------------------------------------------------------------
Appendix A: Deprecated parameters in release 10gR1:
---------------------------------------------------
BUFFER_POOL_KEEP           (replaced by DB_KEEP_CACHE_SIZE)
BUFFER_POOL_RECYCLE        (replaced by DB_RECYCLE_CACHE_SIZE)
GLOBAL_CONTEXT_POOL_SIZE
LOCK_NAME_SPACE
LOG_ARCHIVE_START
MAX_ENABLED_ROLES
PARALLEL_AUTOMATIC_TUNING
PLSQL_COMPILER_FLAGS       (replaced by PLSQL_CODE_TYPE and PLSQL_DEBUG)
DRS_START                  (replaced by DG_BROKER_START)
FAST_START_IO_TARGET       (replaced by FAST_START_MTTR_TARGET)
MTS_CIRCUITS               (replaced by CIRCUITS)
MTS_DISPATCHERS            (replaced by DISPATCHERS)
MTS_MAX_DISPATCHERS        (replaced by MAX_DISPATCHERS)
MTS_MAX_SERVERS            (replaced by MAX_SHARED_SERVERS)
MTS_SERVERS                (replaced by SHARED_SERVERS)
MTS_SESSIONS               (replaced by SHARED_SERVER_SESSIONS)
PARALLEL_SERVER            (replaced by CLUSTER_DATABASE)
PARALLEL_SERVER_INSTANCES  (replaced by CLUSTER_DATABASE_INSTANCES)

Appendix B: Obsolete parameters in 10gR1:
-----------------------------------------
DBLINK_ENCRYPT_LOGIN
HASH_JOIN_ENABLED
LOG_PARALLELISM
MAX_ROLLBACK_SEGMENTS
MTS_CIRCUITS
MTS_DISPATCHERS
MTS_LISTENER_ADDRESS
MTS_MAX_DISPATCHERS
MTS_MAX_SERVERS
MTS_MULTIPLE_LISTENERS
MTS_SERVERS
MTS_SERVICE
MTS_SESSIONS
OPTIMIZER_MAX_PERMUTATIONS
ORACLE_TRACE_COLLECTION_NAME
ORACLE_TRACE_COLLECTION_PATH
ORACLE_TRACE_COLLECTION_SIZE
ORACLE_TRACE_ENABLE
ORACLE_TRACE_FACILITY_NAME
ORACLE_TRACE_FACILITY_PATH

PARTITION_VIEW_ENABLED
PLSQL_NATIVE_C_COMPILER
PLSQL_NATIVE_LINKER
PLSQL_NATIVE_MAKE_FILE_NAME
PLSQL_NATIVE_MAKE_UTILITY
ROW_LOCKING
SERIALIZABLE
TRANSACTION_AUDITING
UNDO_SUPPRESS_ERRORS
DISTRIBUTED_TRANSACTIONS
MAX_TRANSACTION_BRANCHES
PARALLEL_BROADCAST_ENABLED
STANDBY_PRESERVES_NAMES
ALWAYS_ANTI_JOIN
ALWAYS_SEMI_JOIN
DB_BLOCK_LRU_LATCHES
DB_BLOCK_MAX_DIRTY_TARGET
DB_FILE_DIRECT_IO_COUNT
GC_DEFER_TIME
GC_RELEASABLE_LOCKS
GC_ROLLBACK_LOCKS
HASH_MULTIBLOCK_IO_COUNT
INSTANCE_NODESET
JOB_QUEUE_INTERVAL
OPS_INTERCONNECTS
OPTIMIZER_PERCENT_PARALLEL
SORT_MULTIBLOCK_READ_COUNT
TEXT_ENABLE
DB_BLOCK_BUFFERS

RELATED NOTES:
--------------

Keywords:
---------
Upgrade migration migrate move convert 10g 10.1 9i 9.2 9.0.1 8.1.7 8.0.6 8.1.5 8.1.6

Handling Oracle Block Corruptions in Oracle7/8/8i/9i/10g/11g [ID 28814.1]
Modified 29-APR-2011    Type BULLETIN    Status PUBLISHED

Handling Block Corruptions in Oracle7 / 8 / 8i / 9i / 10g / 11g


Contents
Introduction
Overview of Steps to handle a Corruption
  Corruption due to NOLOGGING or UNRECOVERABLE
(1) Determine the Extent of the Corruption Problem
(2) Replace or Move Away from Suspect Hardware
(3) Which Objects are Affected ?
    Options for various Segment Types:
      CACHE
      CLUSTER
      INDEX PARTITION
      INDEX
      LOBINDEX
      LOBSEGMENT
      ROLLBACK
      TABLE PARTITION
      TABLE
      TEMPORARY
      IOT
      TYPE2 UNDO
      Other Segment Types
      No Segment
(4) Choosing a Recovery Option
    (4A) Complete Recovery
         Block Level Recovery, Datafile Recovery, Database Recovery, After Complete Recovery
    (4B) Recreating Indexes
    (4C) Salvaging Data from Tables
         Methods of extracting data from a corrupt table AROUND a corrupt block
         Methods of extracting data from a table with a corrupt LOBSEGMENT block
         Extracting data from the corrupt block itself
    (4D) Leaving the Corruption in Place
         Warnings when Leaving a Corruption in Place
    (4E) Last Options
Document History

All SQL statements here are for use in SQL*Plus (in 8.1 or higher) or Server Manager (Oracle7 / 8.0) when connected as a SYSDBA user. (Eg: "connect / as sysdba" or "connect internal")

Introduction
This article discusses how to handle one or more block corruptions on an Oracle datafile and describes the main actions to take to deal with them. Please read the complete article before taking any action. This note does not cover memory corruption issues (typically ORA-600 [17xxx] type errors).

Note: If the problem is an ORA-1578 on STARTUP then please contact your local support center for advice, referencing Note:106638.1 - this note is not visible to customers but the relevant steps from it can be supplied by an experienced support analyst.

You may be referred to this article from many places for many forms of error - it is important that you have the following information for each corrupt block:

- The absolute FILE NUMBER of the file containing the corrupt block. Referred to as "&AFN" in this article.

- The file name of the file containing the corrupt block. Referred to as "&FILENAME" in this article.
  (If you know the FILE NUMBER but not its name then V$DATAFILE can be used to get the file name:

     SELECT name FROM v$datafile WHERE file#=&AFN;

  If the file number does not appear in V$DATAFILE in Oracle8i AND &AFN is greater than the DB_FILES parameter value then it is probably a TEMPFILE. In this case the filename can be found using:

     SELECT name FROM v$tempfile WHERE file#=(&AFN - &DB_FILES_value);
  )

- The BLOCK NUMBER of the corrupt block in that file. Referred to as "&BL" in this article.

- The tablespace number and name containing the affected block. Referred to as "&TSN" (tablespace number) and "&TABLESPACE_NAME" in this article. If you do not know these then you can find them using:

     SELECT ts# "TSN" FROM v$datafile WHERE file#=&AFN;
     SELECT tablespace_name FROM dba_data_files WHERE file_id=&AFN;

- The block size of the tablespace where the corruption lies. Referred to as "&TS_BLOCK_SIZE" in this article. For Oracle 9i+, run the following query to determine the appropriate block size:

     SELECT block_size
     FROM   dba_tablespaces
     WHERE  tablespace_name = (SELECT tablespace_name FROM dba_data_files WHERE file_id=&AFN);

  For Oracle 7, 8.0 and 8.1 every tablespace in the database has the same block size. For these versions, issue "SHOW PARAMETER DB_BLOCK_SIZE" and use this value as your &TS_BLOCK_SIZE.

Eg: For the ORA-1578 error:

  ORA-01578: ORACLE data block corrupted (file # 7, block # 12698)
  ORA-01110: data file 22: '/oracle1/oradata/V816/oradata/V816/users01.dbf'

then:
  &AFN is "22" (from the ORA-1110 portion of the error)
  &RFN is "7" (from the "file #" in the ORA-1578)
  &BL is "12698" (from the "block #" in the ORA-1578)
  &FILENAME is '/oracle1/oradata/V816/oradata/V816/users01.dbf'
  &TSN etc. should be determined from the above SQL

For other errors (ORA-600, ORA-1498 etc.) the above values should either be given to you by Oracle Support, or be given to you from the article which covers the relevant error. Some errors, such as ORA-1410 "invalid ROWID", ORA-12899 "value too large for column" etc., do not give details of the corrupt file / block. For such cases Note:869305.1 may help in locating the corrupt row.
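When working through many corruptions it can help to pull the &AFN / &RFN / &BL / &FILENAME values out of an ORA-1578 / ORA-1110 error pair with standard text tools. A minimal sketch - the error text below is the worked example from this note, not live alert-log output:

```shell
# Example error pair as it would appear in the alert log (from the example above).
ERRTEXT="ORA-01578: ORACLE data block corrupted (file # 7, block # 12698)
ORA-01110: data file 22: '/oracle1/oradata/V816/oradata/V816/users01.dbf'"

# &RFN and &BL come from the ORA-1578 line; &AFN and &FILENAME from ORA-1110.
RFN=$(echo "$ERRTEXT" | sed -n 's/.*file # \([0-9]*\),.*/\1/p')
BL=$(echo "$ERRTEXT"  | sed -n 's/.*block # \([0-9]*\)).*/\1/p')
AFN=$(echo "$ERRTEXT" | sed -n "s/.*data file \([0-9]*\):.*/\1/p")
FILENAME=$(echo "$ERRTEXT" | sed -n "s/.*data file [0-9]*: '\(.*\)'/\1/p")

echo "AFN=$AFN RFN=$RFN BL=$BL FILENAME=$FILENAME"
```

The same pattern can be run against a saved alert.log extract to build the per-corruption table described in section (3).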

Overview of Steps to handle a Corruption


There are many possible causes of a block corruption, including:
 - Bad IO hardware / firmware
 - OS problems
 - Oracle problems
 - Recovering through "UNRECOVERABLE" or "NOLOGGING" database actions (in which case ORA-1578 is expected behaviour - see below)

The point in time when an Oracle error is raised may be much later than when any corruption initially occurred. As the root cause is not usually known at the time the corruption is encountered, and as in most cases the key requirement is to get up and running again, the steps used to tackle corruption problems in this article are:

1) Determine the extent of the corruption problems, and also determine if the problems are permanent or transient.
   If the problem is widespread or the errors move about then focus on identifying the cause first (check hardware etc.). This is important as there is no point recovering a system if the underlying hardware is faulty.

2) Replace or move away from any faulty or suspect hardware.

3) Determine which database objects are affected.

4) Choose the most appropriate database recovery / data salvage option.

For all steps above it is sensible to collect evidence and document exactly what actions are being taken. The 'Evidence>>' tags in this article list the information which should be collected to assist with identifying the root cause of the problem.

Corruption due to NOLOGGING or UNRECOVERABLE

If a NOLOGGING (or UNRECOVERABLE) operation is performed on an object and the datafile containing that object is subsequently recovered, then the data blocks affected by the NOLOGGING operation are marked as corrupt and will signal an ORA-1578 error when accessed. In Oracle8i an ORA-26040 is also signalled ("ORA-26040: Data block was loaded using the NOLOGGING option") which makes the cause fairly obvious, but earlier releases have no additional error message. If a block is corrupt due to recovery through a NOLOGGING operation then you can use this article from Section 3 "Which Objects are Affected ?" onwards, but note that:
 (a) Recovery cannot retrieve the NOLOGGING data
 (b) No data is salvagable from inside the block

(1) Determine the Extent of the Corruption Problem


Whenever a corruption error occurs note down the FULL error message/s and look in the instance's alert log and trace files for any associated errors. It is important to do this first to assess whether this is a single block corruption, an error due to an UNRECOVERABLE operation or a more severe issue. It is a good idea to scan affected files (and any important files) with DBVERIFY to check for other corruptions in order to determine the extent of the problem. For details of using DBVERIFY see Note:35512.1 Once you have determined a list of corrupt file/block combinations then the steps below can be used to help determine what action can be taken.
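DBVERIFY is run from the OS command line against the datafile (or a backup copy of it). A hedged sketch of assembling a typical invocation - the datafile path, block size and log file name are placeholders, and the full option list is in Note:35512.1. The command is only echoed here, since DBV needs the Oracle environment to actually run:

```shell
# Placeholder values for illustration - substitute your own datafile and block size.
FILENAME=/oracle1/oradata/V816/users01.dbf
TS_BLOCK_SIZE=8192
LOGFILE=dbv_users01.log

# BLOCKSIZE must match the tablespace block size or DBV reports spurious errors.
DBV_CMD="dbv file=$FILENAME blocksize=$TS_BLOCK_SIZE logfile=$LOGFILE"
echo "$DBV_CMD"
# Run it with:  eval "$DBV_CMD"   (requires ORACLE_HOME/PATH to be set)
```

Scanning a backup copy of the file with the same command is a quick way to check whether a corruption already existed at backup time.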

Evidence>> - Record the original error in full, along with details of the application which encountered the error. - Save an extract from the alert log from a few hours before the FIRST recorded problem up to the current point in time. - Save any tracefiles mentioned in the alert log. - Record any recent OS problems you have encountered. - Note if you are using any special features - Eg: ASYNC IO, fast write disk options etc.. - Record your current BACKUP position (Dates, Type etc...) - Note if your database is in ARCHIVELOG mode or not Eg: Issue "ARCHIVE LOG LIST" in SQL*Plus (or Server Manager)

(2) Replace or Move Away from Suspect Hardware


The vast majority of corruption problems are caused by faulty hardware. If there is a hardware fault or a suspect component then it is sensible to either repair the problem, or make disk space available on a separate disk sub-system prior to proceeding with a recovery option.

You can relocate datafiles using the following steps:
 1. Make sure the file to be relocated is either OFFLINE or the instance is in the MOUNT state (not open).
 2. Physically restore (or copy) the datafile to its new location, eg: /newlocation/myfile.dbf
 3. Tell Oracle the new location of the file, eg:
      ALTER DATABASE RENAME FILE '/oldlocation/myfile.dbf' TO '/newlocation/myfile.dbf';
    (Note that you cannot RENAME a TEMPFILE - TEMPFILEs should be dropped and recreated at the new location.)
 4. Online the relevant file / tablespace (if the database is open).

IMPORTANT: If there are multiple errors (which are NOT due to NOLOGGING), OR you have OS level errors against the affected file, OR the errors are transient and keep moving about, then there is little point proceeding until the underlying problem has been addressed or space is available on alternative disks. Get your hardware vendor to check the system over and contact Oracle Support with details of all errors.

Please note: whilst a failed hardware check is a good indication that there is a hardware issue, a successful hardware check should not be taken as proof that there is no hardware related issue - it is very common for hardware tests to report success when there really is some underlying fault. If using any special IO options such as direct IO, async IO or similar, it may be worth disabling them in order to eliminate such options as a potential source of problems.
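The OS-level part of step 2 above is an ordinary file copy followed by a verification. A minimal sketch with placeholder paths (a dummy file stands in for the real datafile here; the ALTER DATABASE RENAME in step 3 must still be done in SQL*Plus afterwards):

```shell
# Placeholder paths for illustration only.
OLD=/tmp/oldlocation/myfile.dbf
NEW=/tmp/newlocation/myfile.dbf

# Simulate an existing datafile for this sketch.
mkdir -p "$(dirname "$OLD")" "$(dirname "$NEW")"
dd if=/dev/zero of="$OLD" bs=1024 count=16 2>/dev/null

# Copy the datafile to its new location (file must be OFFLINE or DB mounted).
cp "$OLD" "$NEW"

# Verify the copy is byte-identical before pointing Oracle at it.
cmp -s "$OLD" "$NEW" && echo "copy verified"

# Then in SQL*Plus:
#   ALTER DATABASE RENAME FILE '/oldlocation/myfile.dbf' TO '/newlocation/myfile.dbf';
```

Verifying with cmp before the RENAME avoids onlining a file that was truncated or corrupted during the copy.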

(3) Which Objects are Affected ?


It is best to determine which objects are affected BEFORE making any decisions about how to recover - this is because the corruption/s may be on object/s which can easily be re-created. Eg: For a corruption on a 5 row lookup table it may be far quicker to drop and recreate the table than to perform a recovery. For each corruption collect the information in the following table. The steps to do this are explained below.

Information to Record for each Corruption:

  Absolute File# (&AFN)
  Relative File# (&RFN)
  Block# (&BL)
  Tablespace
  Segment Owner.Name
  Segment Type
  Related Objects
  Original Errors
  Recovery Options

The notes below will help you fill in this table for each corruption.

"Original Error"
  This is the error as initially reported. Eg: ORA-1578 / ORA-1110, ORA-600 with all arguments etc.

"Absolute File#", "Relative File#" and "Block#"
  The File# and Block# should have been given to you either by the error, by Oracle Support, or by the steps in an error article which directed you to this article.

  In Oracle8/8i/9i/10g: the absolute and relative file numbers are often the same but can differ (especially if the database has been migrated from Oracle7). It is important to get the correct numbers for &AFN and &RFN or you may end up salvaging the wrong object!

  An ORA-1578 reports the RELATIVE file number, with the ABSOLUTE file number given in the accompanying ORA-1110 error. For ORA-600 errors you should be told an absolute file number.

  The following query will show the absolute and relative file numbers for datafiles in the database:

    SELECT tablespace_name, file_id "AFN", relative_fno "RFN"
    FROM   dba_data_files;

In Oracle8i/9i/10g: in addition to the notes above about Oracle8, Oracle8i onwards can have TEMPFILES. The following query will show the absolute and relative file numbers for tempfiles in the database:

    SELECT tablespace_name, file_id+value "AFN", relative_fno "RFN"
    FROM   dba_temp_files, v$parameter
    WHERE  name='db_files';

In Oracle7: use the same file number for both the "Absolute File#" and the "Relative File#".

"Segment Type", "Owner", "Name" and "Tablespace" The following query will tell you the object TYPE , OWNER and NAME of a segment given the absolute file number "&AFN" and block number "&BL" of the corrupt block - the database must be open in order to use this query: SELECT FROM WHERE and ; tablespace_name, segment_type, owner, segment_name dba_extents file_id = &AFN &BL between block_id AND block_id + blocks - 1

If the block is in a TEMPFILE the above query will return no data. For TEMPFILES the "Segment Type" will be "TEMPORARY".

If the above query does not return rows, it can also be that the corrupted block is a segment header in a Locally Managed Tablespace (LMT). When the corrupted block is a segment header block in an LMT, the above query produces a corruption message in the alert.log but the query itself does not fail. In that case use this query:

    SELECT owner, segment_name, segment_type, partition_name
    FROM   dba_segments
    WHERE  header_file = &AFN
    AND    header_block = &BL;

Reference: Note 819533.1

"Related Objects" and Possible "Recovery Options" by SEGMENT_TYPE:
The related objects and recovery options which can be used depend on the SEGMENT_TYPE. The additional queries and possible recovery options are listed below for each of the most common segment types:

  CACHE
  CLUSTER
  INDEX PARTITION
  INDEX
  LOBINDEX
  LOBSEGMENT
  ROLLBACK
  TABLE PARTITION
  TABLE
  TEMPORARY
  IOT
  TYPE2 UNDO
  Some other Segment Type
  "no rows" from the query

CACHE

- If the segment type is CACHE recheck that you have entered the SQL and parameters correctly. If you get the same result, contact Oracle Support with all the information you have.
Options: The database is likely to require recovery.
{Continue} {Back to Segment List}

CLUSTER
- If the segment is a CLUSTER determine which tables it contains. Eg:

    SELECT owner, table_name
    FROM   dba_tables
    WHERE  owner='&OWNER'
    AND    cluster_name='&SEGMENT_NAME';

Options: If the OWNER is "SYS" then contact Oracle Support with all details. The database is likely to require recovery.
For non dictionary clusters possible options include:
  Recovery
  OR Salvage data from all tables in the cluster THEN Recreate the cluster and all its tables
As the cluster may contain a number of tables, it is best to collect information for each table in the cluster before making a decision.
{Collect TABLE information} {Back to Segment List}

INDEX PARTITION
- If the segment is an INDEX PARTITION note the NAME and OWNER and then determine which partition is affected thus:

    SELECT partition_name
    FROM   dba_extents
    WHERE  file_id = &AFN
    AND    &BL BETWEEN block_id AND block_id + blocks - 1;

then continue below as if the segment was an INDEX segment.
Options: Index partitions can be rebuilt using:

    ALTER INDEX xxx REBUILD PARTITION ppp;

(take care with the REBUILD option as described in "Recreating Indexes" below)

INDEX

- If the segment is an INDEX, and the OWNER is "SYS", contact Oracle Support with all details.
For a non-dictionary INDEX or INDEX PARTITION find out which table the INDEX is on. Eg:

    SELECT table_owner, table_name
    FROM   dba_indexes
    WHERE  owner='&OWNER'
    AND    index_name='&SEGMENT_NAME';

and determine if the index supports a CONSTRAINT. Eg:

    SELECT owner, constraint_name, constraint_type, table_name
    FROM   dba_constraints
    WHERE  owner='&TABLE_OWNER'
    AND    constraint_name='&INDEX_NAME';

Possible values for CONSTRAINT_TYPE are:
  P  The index supports a primary key constraint.
  U  The index supports a unique constraint.

If the INDEX supports a PRIMARY KEY constraint (type "P") then check if the primary key is referenced by any foreign key constraints. Eg:

    SELECT owner, constraint_name, constraint_type, table_name
    FROM   dba_constraints
    WHERE  r_owner='&TABLE_OWNER'
    AND    r_constraint_name='&INDEX_NAME';

Options: If the OWNER is "SYS" then contact Oracle Support with all details. The database is likely to require recovery.
For non dictionary indexes possible options include:
  Recovery
  OR Recreate the index (with any associated constraint disables/enables)
(take care with the REBUILD option as described in "Recreating Indexes" below)
{Continue} {Back to Segment List}

ROLLBACK
- If the segment is a ROLLBACK segment, contact Oracle Support, as rollback segment corruptions require special handling.

Options: The database is likely to require recovery. {Continue} {Back to Segment List}

TYPE2 UNDO - TYPE2 UNDO is a system managed undo segment which is a special form of rollback segment. Corruptions in these segments require special handling. Options: The database is likely to require recovery. {Continue} {Back to Segment List}

TABLE PARTITION
- If the segment is a TABLE PARTITION note the NAME and OWNER and then determine which partition is affected thus:

    SELECT partition_name
    FROM   dba_extents
    WHERE  file_id = &AFN
    AND    &BL BETWEEN block_id AND block_id + blocks - 1;

then continue below as if the segment was a TABLE segment.
Options: If all corruptions are in the same partition then one option at this point is to EXCHANGE the corrupt partition with an empty TABLE - this can allow the application to continue (without access to the data in the corrupt partition) whilst any good data can then be extracted from the table. For other options see the TABLE options below.

TABLE

- If the OWNER is "SYS" then contact Oracle Support with all details. The database is likely to require recovery.
For a non-dictionary TABLE or TABLE PARTITION find out which INDEXES exist on the TABLE. Eg:

    SELECT owner, index_name, index_type
    FROM   dba_indexes
    WHERE  table_owner='&OWNER'
    AND    table_name='&SEGMENT_NAME';

and determine if there is any PRIMARY key on the table. Eg:

    SELECT owner, constraint_name, constraint_type, table_name
    FROM   dba_constraints
    WHERE  owner='&OWNER'
    AND    table_name='&SEGMENT_NAME'
    AND    constraint_type='P';

If there is a primary key then check if this is referenced by any foreign key constraints. Eg:

    SELECT owner, constraint_name, constraint_type, table_name
    FROM   dba_constraints
    WHERE  r_owner='&OWNER'
    AND    r_constraint_name='&CONSTRAINT_NAME';

Options: If the OWNER is "SYS" then contact Oracle Support with all details. The database is likely to require recovery.
For non dictionary tables possible options include:
  Recovery
  OR Salvage data from the table (or partition) THEN Recreate the table (or partition)
  OR Leave the corruption in place (eg: use DBMS_REPAIR to mark the problem blocks to be skipped)
{Continue} {Back to Segment List}

IOT (Index Organized Table)
- Corruption in an IOT should be handled in the same way as in a heap or partitioned table. The only exception is if the PK is corrupted: the PK of an IOT is the table itself, and it cannot be dropped and recreated.
Options: If the OWNER is "SYS" then contact Oracle Support with all details. The database is likely to require recovery.
For non dictionary tables possible options include:
  Recovery
  OR Salvage data from the table (or partition) THEN Recreate the table (or partition)
  OR Leave the corruption in place (DBMS_REPAIR cannot be used with IOTs)
{Continue} {Back to Segment List}

LOBINDEX
- Find out which table the LOB belongs to:

    SELECT table_name, column_name
    FROM   dba_lobs
    WHERE  owner='&OWNER'
    AND    index_name='&SEGMENT_NAME';

- If the table is owned by "SYS" then contact Oracle Support with all details. The database is likely to require recovery.
- It is not possible to rebuild LOB indexes, so you have to treat the problem as a corruption on the LOB column of the affected table. Get index and constraint information for the table which has the corrupt LOB index using the SQL in the TABLE section, then return here.

Options: If the OWNER is "SYS" then contact Oracle Support with all details. The database is likely to require recovery.
For non dictionary tables possible options include:
  Recovery
  OR Salvage data from the table (and its LOB column/s) THEN Recreate the table
It is not generally sensible just to leave the corruption in place unless the table is unlikely to have any further DML on the problem column.
{Continue} {Back to Segment List}

LOBSEGMENT
- Find out which table the LOB belongs to. Eg:

    SELECT table_name, column_name
    FROM   dba_lobs
    WHERE  owner='&OWNER'
    AND    segment_name='&SEGMENT_NAME';

- If the table is owned by "SYS" then contact Oracle Support with all details. The database is likely to require recovery.
- For non-dictionary tables: get index and constraint information for the table which has the corrupt LOB data using the SQL in the TABLE section, then return here to find details of the exact rows affected.

Finding the exact row which references the corrupt LOB block can be a challenge, as the errors reported do not show any detail about which table row owns the lob entry which is corrupt. Typically one can refer to application logs, or any SQL_TRACE or 10046 trace of a session hitting the error (if available), or see if having event "1578 trace name errorstack level 3" set in the session helps identify the current SQL/binds/row. Eg:

    ALTER SYSTEM SET EVENTS '1578 trace name errorstack level 3';

Then wait for the error to be hit by the application and find the trace file. If there are no clues then you can construct a PL/SQL block

to scan the problem table row by row extracting the LOB column data, looping until it hits an error. Such a technique may take a while, but it should be possible to get a primary key or rowid of any row which references a corrupt LOB block. Eg:

    set serverout on
    exec dbms_output.enable(100000);
    declare
      error_1578 exception;
      pragma exception_init(error_1578, -1578);
      n      number;
      cnt    number := 0;
      badcnt number := 0;
    begin
      for cursor_lob in (select rowid r, &LOB_COLUMN_NAME l
                         from &OWNER..&TABLE_NAME) loop
        begin
          n := dbms_lob.instr(cursor_lob.l, hextoraw('AA25889911'), 1, 999999);
        exception
          when error_1578 then
            dbms_output.put_line('Got ORA-1578 reading LOB at '||cursor_lob.r);
            badcnt := badcnt + 1;
        end;
        cnt := cnt + 1;
      end loop;
      dbms_output.put_line('Scanned '||cnt||' rows - saw '||badcnt||' errors');
    end;
    /

It is possible to have a corrupt LOB block which is only present as an old version (for consistent read) and which has not yet been re-used, in which case all table rows will be accessible but it may not be possible to insert / update the LOB columns once that block is reclaimed for reuse.

Options: If the OWNER is "SYS" then contact Oracle Support with all details. The database is likely to require recovery.
For non dictionary tables possible options include:
  Recovery
  OR Salvage data from the table (and its LOB column/s) THEN Recreate the table
  OR Leave the corruption in place (it is not possible to use DBMS_REPAIR on LOB segments)
{Continue} {Back to Segment List}

TEMPORARY
- If the segment type is TEMPORARY then the corruption does not affect a permanent object. Check if the tablespace where the problem occurred is being used as a TEMPORARY tablespace thus:

    SELECT count(*)
    FROM   dba_users
    WHERE  temporary_tablespace='&TABLESPACE_NAME';

Options: If this is a TEMPORARY tablespace then it may be possible to create a NEW temporary tablespace, switch all users to that tablespace, and then DROP the problem tablespace.
If this is not a temporary tablespace then the block should not be read again and should get re-formatted next time the block is used - the error should not repeat PROVIDED any underlying cause has been cured. No restore is normally required, although if the disk is suspect and the tablespace contains useful data then a database recovery of the affected file/s may be wise.
{Continue} {Back to Segment List}

Some other SEGMENT_TYPE - If the segment type returned is not covered above then contact Oracle support for advice with all information collected so far. {Continue} {Back to Segment List}

"no rows returned" - If there appears to be no extent containing the corrupt block then first double check the figures used in the query. If you are sure the file and block are correct and do not appear as belonging to an object in DBA_EXTENTS then: - Double check if the file involved is a TEMPFILE. Note that TEMPFILE file numbers depend on the init.ora parameter DB_FILES so any changes to this parameter change the absolute file number reported in errors. - DBA_EXTENTS does not include blocks which are used for local space management in locally managed tablespaces. - If the database you are now querying is from a different point in time to the datafile with the error then the problem object may have been dropped and so queries against DBA_EXTENTS may show no rows. - If the error you are investigating was reported by DBVERIFY then DBV checks all blocks regardless of whether they belong to an object or not. This it is possible for a corrupt block to exist in the datafile but in a block not in use by any object.

Options: An error on an UNUSED Oracle block can be ignored as Oracle will create a new block image should the block need to be used so any existing problem on the block will never get read. If you suspect that the block may be a space management block then you can use DBMS_SPACE_ADMIN to help check this by running: exec DBMS_SPACE_ADMIN.TABLESPACE_VERIFY('&TABLESPACE_NAME'); This should write inconsistencies to the trace file but if it encounters a fatally corrupt block it will report an error like: ORA-03216: Tablespace/Segment Verification cannot proceed An error on a bitmap space management block can often be corrected by running: exec DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS('&TABLESPACE_NAME'); {Continue} {Back to Segment List}

Evidence>> - For each corrupt block it is also a good idea to collect the following physical evidence if there is a need to try and identify the actual cause of the corruption:

i) An operating system HEX dump of the bad block and the block either side of it.
   On UNIX:
     dd if=&FILENAME bs=&TS_BLOCK_SIZE skip=&BL-1 count=3 of=BL.dd
   Eg: For BL=1224 in a 4K block size tablespace:
     dd if=ts11.dbf bs=4k skip=1223 count=3 of=1223_1225.dd
   On VMS:
     DUMP/BLOCKS=(start:XXXX,end:YYYY)/out=dump.out &FILENAME
   where XXXX = operating system block number (in 512 byte chunks). To calculate this, multiply the block number reported by '&TS_BLOCK_SIZE/512'.

ii) If you are in ARCHIVELOG mode, make a safe copy of the archived log files from around the time of the error, and preferably for a few hours before the error was reported. Also secure any backup/s of the problem datafile from before the errors, as the before image PLUS redo can help point towards a cause. (DBV can often be used to check if the problem exists in a backup copy of a file.) The ideal scenario is to have a datafile backup image which does not have any corruption, and all the redo from that point in time up to and just past the time when the corruption is first reported.

iii) Obtain an Oracle dump of the problem block/s:

  ALTER SYSTEM DUMP DATAFILE '&FILENAME' BLOCK &BL;

(The output will go to a tracefile in the USER_DUMP_DEST.)
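As a cross-check of the "dd" arithmetic shown in (i) above, the substitution values can be computed with a quick query before running the OS commands (shown as a sketch; substitute your own &BL and &TS_BLOCK_SIZE):

```sql
-- dd counts blocks from 0, so the SKIP value is &BL - 1.
-- Multiplying by &TS_BLOCK_SIZE/512 converts an Oracle block
-- number into 512-byte OS blocks for the VMS DUMP command.
SELECT &BL - 1                       AS dd_skip,
       &BL * (&TS_BLOCK_SIZE / 512)  AS vms_os_block
  FROM dual;
```

Eg: for BL=1224 in a 4K tablespace this returns dd_skip=1223, matching the example above.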

(4) Choosing a Recovery Option


The best recovery option now depends on the objects affected. The notes in Section (3) above should have highlighted the main options available for each affected object. The actual recovery method chosen may include a mix of one or more methods thus:

Is any Recovery Required ?
If the error is in a TEMPORARY tablespace, or is in a block which is no longer part of any database object, then no action is required, although it may be wise to relocate the problem tablespace to a different storage device. See Warnings.

Is Complete Recovery an option ?
In order for complete recovery to be an option the following must be true:
- The database is in ARCHIVELOG mode (the "ARCHIVE LOG LIST" command shows Archivelog Mode)
- You have a good backup of the affected files. Note that in some cases the corruption may have been present, but undetected, for a long period of time. If the most recent datafile backup still contains the corruption, you can try an earlier backup as long as you have all the necessary ARCHIVELOGS. (You can often use the DBV START= / END= options to check if specific block/s in a restored copy of a backup file are corrupt)
- All ARCHIVELOGS are available from the time of the backup to the current point in time
- The current online log/s are available and intact
- The errors are NOT due to recovery through a NOLOGGING operation

When the above criteria are satisfied then complete recovery is usually the preferred option *BUT NOTE*

(a) If the rollback of a transaction has seen a corrupt block on an object other than the rollback segment itself then UNDO may have been discarded. In this case you may need to rebuild indexes / check data integrity AFTER the recovery completes.

(b) If the files to be recovered contain data from NOLOGGING operations performed since the last backup then those blocks will be marked corrupt if datafile or database recovery is used. In some cases this can put you in a worse scenario than the current position.

If database recovery has already been performed and the corruption is still there then either all of your backups contain the corruption, the underlying fault is still present, or the problem is replaying through redo. In these cases you will need to choose some other recovery option. See "(4A) Complete Recovery" for complete recovery steps.

Can the object be Dropped or Re-created without needing to extract any data from the object itself ?
It may be possible to lose the object, or to recreate it from a script / recent export. Once an object is dropped then blocks in that object are marked as "free" and will be re-formatted when the block gets allocated to a new object. It is advisable to RENAME rather than DROP a table unless you are absolutely sure that you do not need any data in it. In the case of a table partition then only the affected partition needs to be dropped. eg:

  ALTER TABLE ... DROP PARTITION ...;

If the corruption affects the partition segment header, or the file containing the partition header is offline, then DROP PARTITION may fail. In this case it may still be possible to drop the partition by first exchanging it with a table of the same definition. eg:

  ALTER TABLE .. EXCHANGE PARTITION .. WITH TABLE ..;

The most common object which can be re-created is an index. Always address TABLE corruptions before INDEX problems on a table. See "(4B) Recreating Indexes" for more details.

For any segment, a quick way to extract the DDL for an object, when you have the absolute file number and block number of the corrupt block, is:

  set long 64000
  select dbms_metadata.get_ddl(segment_type, segment_name, owner)
    FROM dba_extents
   WHERE file_id=&AFN
     AND &BL BETWEEN block_id AND block_id + blocks - 1;

Is it required to salvage data before recreating the object ?
If the problem is on a critical application table which is regularly updated then it may be required to salvage as much data from the table as possible, then recreate the table.

See "(4C) Salvaging Data from Tables" for more details. Is it acceptable to leave the corruption in place for the moment? In some cases the best immediate option may be to leave the corruption in place and isolate it from application access. See "(4D) Leaving the Corruption In Place" for more details. Last Options Are any of the following possible ? Recovery to an old point-in-time (via point in time recovery) of either the database or tablespace point in time recovery OR Restore of a COLD backup from before the corruption OR Use of an existing export file See "(4E) Last Options" for more details.

(4A) Complete Recovery


If the database is in ARCHIVELOG mode and you have a good backup of the affected files then recovery is usually the preferred option. This is not GUARANTEED to clear a problem, but is effective for the majority of corruption issues. If recovery re-introduces the problem then return to the list of options above and choose another method.

If you are using Oracle9i (or higher) then it may be possible to perform block level recovery using the RMAN BLOCKRECOVER command. If using an earlier Oracle release then you can either perform datafile recovery (which can be done while the rest of the database is still up and running), or database recovery (which requires the database to be taken down).

If you are using Oracle 11g (or higher) then it may be possible to use Data Recovery Advisor (see Note 1317849.1).

Block Level Recovery
~~~~~~~~~~~~~~~~~~~~
As of Oracle9i RMAN allows individual blocks to be recovered whilst the rest of the database (including other blocks in the datafile) is available for normal access. Note that block level recovery can only be used to recover a block fully to the current point in time. It is not necessary to be using RMAN for backups to be able to use this option for recovery of individual blocks.

eg: Consider that you have an ORA-1578 on file #6 block #30 which is likely due to a media corruption problem, and there is a good cold backup image of that file which has been restored to '.../RESTORE/filename.dbf'. Provided all archivelogs exist (in the default location) then you can use RMAN to perform a block level recovery using a command sequence like:

  rman nocatalog
  connect target
  catalog datafilecopy '.../RESTORE/filename.dbf';
  run { blockrecover datafile 6 block 30; }

This will use the registered datafile backup image and any required archivelogs to perform block recovery of just the one problem block to the current point in time. Please see the documentation for full details of the RMAN BLOCKRECOVER command and its limitations.

Datafile Recovery
~~~~~~~~~~~~~~~~~
Datafile recovery of a file involves the following steps. If there are several files, repeat the steps for each file or see "Database Recovery" below. These steps can be used if the database is either OPEN or MOUNTED.

- OFFLINE the affected data file
    eg: ALTER DATABASE DATAFILE 'name_of_file' OFFLINE;
- Copy it to a safe location (in case the backup is bad)
- Restore the latest backup of the file onto a GOOD disk
- Check the restored file for obvious corruptions with DBVERIFY
  (for details of using DBVERIFY see Note:35512.1)
- Assuming the restored file is OK, RENAME the datafile to the NEW location (if different from the old location)
    eg: ALTER DATABASE RENAME FILE 'old_name' TO 'new_name';
- Recover the datafile
    eg: RECOVER DATAFILE 'name_of_file';
- Online the file/s
    eg: ALTER DATABASE DATAFILE 'name_of_file' ONLINE;

Database Recovery
~~~~~~~~~~~~~~~~~
Database recovery generally involves the following steps:

- Shutdown (Immediate or Abort)
- Copy the current copy of all files to be recovered to a safe location
- Restore the backup files to a GOOD disk location
  (DO NOT RESTORE THE CONTROL FILES or ONLINE REDO LOG FILES)
- Check restored files with DBVERIFY
  (for details of using DBVERIFY see Note:35512.1)

- Startup MOUNT
- Rename any relocated files
    eg: ALTER DATABASE RENAME FILE 'old_name' TO 'new_name';
- Ensure all required files are online
    eg: ALTER DATABASE DATAFILE 'name_of_file' ONLINE;
- Recover the database
    eg: RECOVER DATABASE
- Open the database
    eg: ALTER DATABASE OPEN;

After a Complete Recovery
~~~~~~~~~~~~~~~~~~~~~~~~~
Once a complete recovery has been performed it is advisable to check the database before allowing it to be used:
- Run "ANALYZE <table_name> VALIDATE STRUCTURE CASCADE" against each problem object to check for table/index mismatches. If there has been any UNDO discarded, this may show a mismatch requiring indexes to be re-created.
- Check the logical integrity of data in the table at the application level.
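For example, the post-recovery validation described above might look like this for one affected table (the schema and table name are hypothetical placeholders):

```sql
-- Validates the table and, with CASCADE, all of its indexes.
-- A table/index mismatch here (typically ORA-1499) suggests that
-- UNDO was discarded and the indexes should be dropped and re-created.
ANALYZE TABLE scott.emp VALIDATE STRUCTURE CASCADE;
```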

(4B) Recreating Indexes


If the corrupt object is a user INDEX you can simply drop and re-create it PROVIDED the underlying table is not also corrupt. If the underlying table is also corrupt it is advisable to sort out the TABLE before recreating any indexes.

If the information collected shows that the index has dependent FOREIGN KEY constraints then you will need to do something like this:

- For each foreign key:
    ALTER TABLE <child_table> DISABLE CONSTRAINT <fk_constraint>;
- Rebuild the primary key using:
    ALTER TABLE <table> DISABLE CONSTRAINT <pk_constraint>;
    DROP INDEX <index_name>;
    CREATE INDEX <index_name> .. with appropriate storage clause
    ALTER TABLE <table> ENABLE CONSTRAINT <pk_constraint>;
- Enable the foreign key constraints:
    ALTER TABLE <child_table> ENABLE CONSTRAINT <fk_constraint>;

For an index partition you can:
    ALTER INDEX ... REBUILD PARTITION ...;

Notes:
(1) It is important not to REBUILD a non-partitioned corrupt index using an "ALTER INDEX .. REBUILD" command, as this will usually try to build the new index from the existing index segment, which contains a corrupt block. "ALTER INDEX ... REBUILD ONLINE" and "ALTER INDEX ... REBUILD PARTITION ..." do not build the new index from the old index segment and so can be used.
(2) CREATE INDEX can use the data from an existing index if the new index is a sub-set of the columns in the existing index. Hence if you have 2 corrupt indexes, drop them BOTH before re-creating them.
(3) Be sure to use the correct storage details when recreating indexes.
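Putting the steps above together, a sketch of rebuilding a corrupt primary key index that has one dependent foreign key might look like this (all object names, columns, and storage values are hypothetical placeholders to be replaced with your own):

```sql
-- Disable the dependent foreign key(s) first
ALTER TABLE child_tab DISABLE CONSTRAINT fk_child_parent;

-- Drop and re-create the corrupt index; do NOT use a plain
-- "ALTER INDEX .. REBUILD", which would read the corrupt segment
ALTER TABLE parent_tab DISABLE CONSTRAINT pk_parent;
DROP INDEX pk_parent_idx;
CREATE UNIQUE INDEX pk_parent_idx ON parent_tab (id)
  TABLESPACE idx_ts
  STORAGE (INITIAL 1M NEXT 1M);
ALTER TABLE parent_tab ENABLE CONSTRAINT pk_parent;

-- Finally re-enable the foreign key(s)
ALTER TABLE child_tab ENABLE CONSTRAINT fk_child_parent;
```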

(4C) Salvaging Data from Tables


If the corrupt object is a TABLE or CLUSTER or LOBSEGMENT then it must be understood that the data within the corrupt block is lost. Some of the data may be salvageable from a HEX dump of the block, or from columns covered by indexes.

Important: As it may be required to salvage data in the corrupt block from the indexes, it is a good idea NOT to drop any existing index until any required data has been extracted.

There are many ways to get data out of a table which contains a corrupt block. Choose the most appropriate method as detailed below. The aim of these methods is to extract as much data as possible from the table blocks which can be accessed. It is usually a good idea to RENAME the corrupt table so that the new object can be created with the correct name. Eg:

  RENAME <emp> TO <emp_corrupt>;

Methods of extracting data from a corrupt table AROUND a corrupt block

(1) From Oracle 7.2 onwards, including Oracle 8.0, 8.1, and 9i, it is possible to SKIP over corrupt blocks in a table. This is by far the simplest option to extract table data and is discussed in:
      Extracting data using DBMS_REPAIR.SKIP_CORRUPT_BLOCKS or Event 10231 Note:33405.1
    If the corruption is in an IOT overflow segment then the same method should be followed, but using event 10233 together with a full index scan. Note that this method can only be used if the block "wrapper" is marked corrupt. Eg: If the block reports ORA-1578.

If the problem is an ORA-600 or other error which does not report an ORA-1578 error then it is often possible to use DBMS_REPAIR to mark the problem blocks in a table as "soft corrupt" such that they will then signal ORA-1578 when accessed, which then allows you to use DBMS_REPAIR.SKIP_CORRUPT_BLOCKS.

    Note: Any blocks which are marked corrupt by the "FIX_CORRUPT_BLOCKS" procedure will also be marked corrupt following any restore / recover operation through the time of the FIX_CORRUPT_BLOCKS.

    Full details of using DBMS_REPAIR for this can be found in the documentation, but in summary the steps are:
    - Use DBMS_REPAIR.ADMIN_TABLES to create the admin tables
    - Use DBMS_REPAIR.CHECK_OBJECT to find problem blocks
    - Get any good data out of problem blocks before corrupting them
    - Use DBMS_REPAIR.FIX_CORRUPT_BLOCKS to mark the found problem blocks as corrupt so that they will then signal ORA-1578
    - If required, use DBMS_REPAIR.SKIP_CORRUPT_BLOCKS to skip corrupt blocks on the table

(2) From Oracle 7.1 onwards you can use a ROWID range scan. The syntax for this is a little tricky but it is possible to select around a corrupt block using a ROWID hint. As the format of ROWIDs changed between Oracle7 and Oracle8 there are 2 articles which discuss this:
      Using ROWID Range Scans to extract data in Oracle8 and higher Note:61685.1
      Using ROWID Range Scans to extract data in Oracle7 Note:34371.1

(3) If there is a primary key you can select table data via this index. It may also be possible to select some of the data via any other index. This can be slow and time consuming and is only normally needed for Oracle 7.0 releases. This method is described in Note:34371.1 (which also describes the ROWID range scans).

(4) There are various salvage programs / PLSQL scripts which can be used to salvage data from a table. These can take longer to set up and use than the above methods but can often cope with various kinds of corruption besides an ORA-1578.
As these methods typically require much hand-holding from Support, some of these articles may not be visible to customers.
These require Pro*C to be available and an understanding of how to build Pro*C executables:
      SALVAGE.PC for Oracle7 Note:2077307.6
This requires manual interaction:
      SALVAGE.SQL for Oracle7/8 Note:2064553.4
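In outline, the DBMS_REPAIR sequence summarised in method (1) above might be sketched as follows. The schema and table names are hypothetical, the calls use only a subset of the available parameters, and the full parameter lists should be checked in the PL/SQL Packages documentation for your release:

```sql
-- Step 1: create the repair administration table
BEGIN
  DBMS_REPAIR.ADMIN_TABLES(
    table_name => 'REPAIR_TABLE',
    table_type => DBMS_REPAIR.REPAIR_TABLE,
    action     => DBMS_REPAIR.CREATE_ACTION);
END;
/

-- Step 2: find problem blocks in the table
SET SERVEROUTPUT ON
DECLARE
  corrupt_count BINARY_INTEGER := 0;
BEGIN
  DBMS_REPAIR.CHECK_OBJECT(
    schema_name   => 'SCOTT',
    object_name   => 'EMP',
    corrupt_count => corrupt_count);
  DBMS_OUTPUT.PUT_LINE('Corrupt blocks found: ' || corrupt_count);
END;
/

-- Step 3: AFTER salvaging any good data, mark the blocks corrupt
-- (remember: FIX_CORRUPT_BLOCKS writes redo, so the marked blocks
-- stay corrupt through any later restore/recover past this point)
DECLARE
  fix_count BINARY_INTEGER := 0;
BEGIN
  DBMS_REPAIR.FIX_CORRUPT_BLOCKS(
    schema_name => 'SCOTT',
    object_name => 'EMP',
    fix_count   => fix_count);
END;
/

-- Step 4: allow table scans to skip the marked blocks
BEGIN
  DBMS_REPAIR.SKIP_CORRUPT_BLOCKS(
    schema_name => 'SCOTT',
    object_name => 'EMP');
END;
/
```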

Methods of extracting data from a table with a corrupt LOBSEGMENT block

It is not possible to use DBMS_REPAIR on LOB segments.

If the corrupt LOB block is NOT referenced by any row in the table then it should be possible to CREATE TABLE AS SELECT (CTAS) or export / drop / import the table as is.

If the corrupt LOB block is referenced by a row then it should be possible to select or export with a WHERE predicate that excludes the problem row/s.

WARNING: It is possible to update the LOB column value of a problem row to NULL, which will then clear ORA-1578 on SELECT operations, *BUT* the corrupt block will then be waiting to be reclaimed and will eventually signal an ORA-1578 on attempts to get a new LOB for INSERT or UPDATE operations on any row, which can be a worse situation than having a corruption on a known row. Hence you should only really set the LOB column to NULL if you intend to immediately recreate the table.

Extracting data from the corrupt block itself

As the corrupt block itself is "corrupt", any data extracted from the block should be treated as suspect. The main methods of getting the rows from the corrupt block itself are:
- For TABLE blocks Oracle Support can use a tool which attempts to interpret the block contents.
- Use any existing indexes on the table to extract data for columns covered by the index where the ROWID falls inside the corrupt block. This is described towards the end of the ROWID range scan articles mentioned above:
      For Oracle8/8i see Note:61685.1
      For Oracle7 see Note:34371.1
- It may be possible to use LogMiner on the redo stream to find the original inserts/updates which loaded the data to the problem block. The main factor here is WHEN the data was actually put in the block. eg: row 2 may have been inserted yesterday, but row 1 may have been inserted 5 years ago.
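The "exclude the problem row/s with a WHERE predicate" approach above might be sketched like this (the table, key column, and key values are hypothetical; identify the problem row keys first, eg: by index access or trial SELECTs):

```sql
-- Copy every row except those whose LOB column points at the
-- corrupt block (keys 1234 and 5678 are placeholders)
CREATE TABLE mytab_new AS
  SELECT *
    FROM mytab
   WHERE id NOT IN (1234, 5678);

-- Keep the damaged original under another name until the copy
-- has been verified, then swap the names over
RENAME mytab TO mytab_corrupt;
RENAME mytab_new TO mytab;
```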

(4D) Leaving A Corruption In Place


It is possible to leave a corruption in place and just accept the errors reported, or prevent access to the problem rows at an application level.

eg: If the problem block / row is in a child table then it may be possible at application level to prevent access via the parent row/s such that the child rows are never accessed. (Be wary of cascade type constraints though.) This may not help with reports and other jobs which access data in bulk, so it may also be desirable to use the DBMS_REPAIR options shown in 4C above to prevent the block/s erroring when accessed. Marking a corruption like this and leaving it around may give a short term solution allowing full data salvage and/or recovery to be attempted at a scheduled outage, or allowing time to check other recovery options on a second (clone) database. Note though that marking a block corrupt with DBMS_REPAIR.FIX_CORRUPT_BLOCKS will cause the marked block/s to also be corrupt after recovery through the time that FIX_CORRUPT_BLOCKS was executed.

Leaving a corruption may be sensible for data which rapidly ages and is subsequently purged (eg: in a date partitioned table where older partitions are dropped at some point).

Leaving Corruptions in LOB segments

At application level it can be possible to leave a corrupt LOB column in place until such time as the table can be rebuilt. One way to ensure you do not hit the "WARNING" scenario above is to ensure that the table is only ever accessed via a view which includes a WHERE predicate to prevent the problem row/s from being seen. eg: Consider table MYTAB( a number primary key, b clob ) which has one or more rows pointing at corrupt LOB data:

  ALTER TABLE MYTAB ADD ( BAD VARCHAR2(1) );
  CREATE VIEW MYVIEW AS SELECT a,b FROM MYTAB WHERE BAD IS NULL;
  Set BAD='Y' for any problem row/s

If you only access MYTAB via MYVIEW then the problem row/s will never be visible and so cannot be updated, keeping the corrupt entry isolated until it can be dealt with. Clearly this example is more of a design time solution, but some applications may already have similar mechanisms and may only access data via a view (or via an RLS policy) giving some option/s to hide the problem row/s.
Warnings when Leaving a Corruption in Place

Whilst it is possible to leave a corruption in place it should be noted that the corrupt blocks will still show up in runs of DBVERIFY, in RMAN backup warnings / errors, etc. It is important to make a careful record of any corruption you expect to see from these tools, particularly any blocks you expect to skip with RMAN (eg: having MAX_CORRUPT set), and be sure to remove

any "acceptance" of the errors once the corruptions have been cleared. eg: Consider that a corrupt block has been handled by leaving the corruption in place and avoiding the problem row/s at application level. RMAN may be configured to allow the corruptions during backup. The table is then recreated at a later date during some table reorganisation. If RMAN is not updated to reflect that no errors should now be expected then RMAN may ignore some other corruption which occurs at a later time. It is also important to note that leaving corrupt blocks around in table segments can lead to mismatched results from queries eg: different results can occur for tables with SKIP_CORRUPT set depending on whether an index scan or table access occurs. Other reports may just error . Note that leaving a corruption in place but marking the block with DBMS_REPAIR.FIX_CORRUPT_BLOCKS writes redo to corrupt the block which may limit subsequent recovery options.

(4E) Last Options


If you have a standby setup (physical or logical) then check that first.

Whatever sort of block the problem occurred on, one possible option is to recover the database, or the problem tablespace, to a point in time BEFORE the corruption appeared. The difficulty with this option is that it is not always possible to know when the problem first appeared. DBVERIFY can often be used to check a restored file for corruptions. For details of using DBVERIFY see Note:35512.1. In particular the START= / END= DBV options can be used to give a quick first test of whether the problem block itself is bad on a restored backup image.

This section outlines some final options available for recovering. If you have come here then one or more of the following have happened:
- You have lost a "vital" datafile (or have a corruption on it) and have no good backup of the problem file/s (without the corruption)
- You are either not in ARCHIVELOG mode OR do not have all archivelogs since the file was first created
- Complete recovery keeps reintroducing the problem

Last chance: Please note that if you have lost all copies of a datafile but DO still have the ARCHIVE logs from when the file was first created, it is still possible to recover the file. Eg:

  ALTER DATABASE CREATE DATAFILE '....' [as '...'];
  RECOVER DATAFILE '....';

  ALTER DATABASE DATAFILE '....' ONLINE;

If you are in this scenario try to recover the datafile using these steps before proceeding below.

If you have reached this line there are no options left to recover to the current point in time. It is advisable to shut down the instance and take a BACKUP of the current database NOW in order to provide a fall-back position if the chosen course of action fails (eg: if you find your backup is bad). Some outline options available are:

Revert to an old COLD backup
- eg: If in NOARCHIVELOG mode

Set up a clone database from a COLD backup
- and extract (export) the problem table/s or transport the problem tablespace

Point in time recovery to an older point in time that is consistent
- Requires a good backup and any necessary archive logs
- ALL files have to be restored and the whole DB rolled forward to a suitable point in time.
- It may be possible to do the point in time recovery in a clone database and then transport the problem tablespace to the problem database, or export / import the problem table from the clone to the problem database.

Tablespace point in time recovery
- It may be possible to perform a point in time recovery of the affected tablespace only. There are many notes describing tablespace point in time recovery, such as Note:223543.1.

Rebuild of DB from some logical export / copy
- Requires there to already be a good logical backup of the database
- NB: You have to RE-CREATE the database for this option.
- As with other options, the rebuild could be in a clone database just to get a good image of the problem table/s.

If you have a good backup then rolling forwards with DB_BLOCK_CHECKING=TRUE can help find the first point in time where something started to go wrong.

It is not generally necessary to take the problem database down while investigating the recovery options.
eg: You can restore the system tablespace and problem tablespace datafiles only, to a totally different location and/or machine, as a different instance, to investigate how far you can roll forwards etc. As of Oracle9i you can also use "Trial Recovery" options to save having to keep restoring a backup while looking into your options.

Document History

29-APR-2011 Add reference to Note 1317849.1
18-Aug-2009 Add reference to Note 869305.1
15-Oct-2008 Minor addition of script to get_ddl
24-Feb-2006 Major additions for LOBs, BLOCKRECOVER, TSPITR, further DBMS_REPAIR options
03-Mar-2005 Minor change to "dd" command to use
22-Nov-2001 Include 9i and multiple block sizes
13-Sep-2001 Add TEMPFILE details
12-Sep-2000 Major rewrite / merge

References
NOTE:1088018.1 - Master Note for Handling Oracle Database Corruption Issues
NOTE:1317849.1 - Data Recovery Advisor - Corruption Reference Guide
