If my users need a representative sample of data, I may decide to use tools like
DataPump Export to select every nth record from the source database's tables.
DataPump may also be an excellent choice if there are foreign key constraints on
several of the tables, since I can use the QUERY directive during the DataPump
Export operation to filter only selected rows from related tables.
However, if the data in one or more tablespaces needs to be transferred, I also have
the option to use the Oracle transportable tablespace feature to transfer the
tablespace(s) themselves to the target server.
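As a sketch of the sampling-and-filtering approach, a DataPump Export parameter file along the following lines could pull a filtered subset of parent and child rows together. The OE tables, the predicates, and the file names here are purely illustrative assumptions, not objects from this article's examples:

```sql
# Hypothetical DataPump Export parameter file (subset_export.dpectl).
# Invoke with: expdp system/oracle PARFILE=subset_export.dpectl
DIRECTORY = TTXPORTS
DUMPFILE  = subset_export.dmp
LOGFILE   = subset_export.log
TABLES    = oe.orders, oe.order_items
# Filter the parent table, then keep only the matching child rows so that
# the extracted subset still satisfies the foreign key relationship:
QUERY     = oe.orders:"WHERE order_date >= TO_DATE('2006-01-01','YYYY-MM-DD')"
QUERY     = oe.order_items:"WHERE order_id IN (SELECT order_id FROM oe.orders WHERE order_date >= TO_DATE('2006-01-01','YYYY-MM-DD'))"
# Alternatively, SAMPLE=10 exports roughly ten percent of each table's rows.
```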
How much storage is available on the target server, and how is that storage
arrayed? If I have the same amount of disk storage on each server, and it is arrayed
exactly the same, then this makes my decision much simpler: I can simply use Recovery
Manager (RMAN) to clone the production database, ship the backups to the target server,
restore the database's control files and datafiles from the backups, and then roll forward
changes from copies of archived redo logs. However, storage is rarely identical, of course!
When disk storage is arrayed differently or limited in size on the target server, I may have
no choice but to export the source database, copy the resulting export files to the target,
and import the data there. (Oracle 10g's DataPump utility makes this even faster, of course.
See my articles on the DataPump utility for more information.)
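When the storage layouts really are identical, that cloning approach boils down to a short RMAN session. The sketch below assumes Oracle Net aliases prod and qa and an auxiliary instance on the target already started NOMOUNT; it is an outline, not a complete procedure:

```sql
-- Hypothetical RMAN session: clone the production database onto the target
-- server from backups shipped to the same paths they occupied on the source.
RMAN> CONNECT TARGET sys/oracle@prod
RMAN> CONNECT AUXILIARY sys/oracle@qa
RMAN> DUPLICATE TARGET DATABASE TO qa
2>      NOFILENAMECHECK;  -- safe only because both servers use identical paths
```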
Do the source and target databases have the same character set? This is one of the
most often-overlooked issues in data transfers. If I have ensured that standards are in
place, hopefully the two databases' character sets match, or at least the target database's
character set is a superset of the source database's character set. Otherwise, if I attempt to
import data into my target database, and the target database's character set is not a
superset of the source database's character set, there is a good probability that character
data will be corrupted, unconvertible, or simply lost.
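Checking this takes only a moment: run the query below on both databases and confirm that the target's character set equals, or is a superset of, the source's before moving any data.

```sql
-- Compare the database and national character sets on source and target.
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```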
How much time do I have to complete the transfer? If I need to transfer the complete
contents of a 4TB database to my QA server prior to the evaluation of a new Oracle patch,
upgrade, or release, or for an upcoming major application software release, then I need to
find the fastest possible method to transfer the data from the source to the target database.
Needless to say, even though DataPump Export has dramatically improved the speed at
which I can dump data out of my source database, I am still constrained by the amount of
time it takes to reload the data into the target database.
Transportable Tablespaces: Concepts
If time is of the essence, I may decide upon a much more attractive option: using the
Oracle transportable tablespace features to migrate data from one server to another. The
ability to transport a tablespace has been around since at least Oracle 8i, but I have found
that not many DBAs are aware of its power and flexibility.
To transport a tablespace prior to Oracle 10gR1, the following steps are involved:
1. Place the source tablespace(s) in READ ONLY mode.
2. Create the metadata for the source tablespace(s) via the EXPORT utility (exp.exe).
3. Copy the resulting tablespace metadata as well as all datafile(s) from the source
database to the target database.
4. Import the transportable tablespace's metadata on the target database via the
IMPORT utility (imp.exe).
Once the transport operations are complete, I can bring the newly transported
tablespace into read-write mode on the target database by issuing the ALTER
TABLESPACE <tablespace_name> READ WRITE; command, and then issue the
same command on the source database to bring it back into read-write mode.
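In command form, the pre-10g procedure might look like this. The LMT_XACT tablespace name is borrowed from the examples later in this article; the dump file and datafile paths are assumptions for this sketch:

```sql
-- 1. Freeze the tablespace on the source database:
SQL> ALTER TABLESPACE lmt_xact READ ONLY;

-- 2. Export its metadata with the original Export utility:
C:\> exp system/oracle TRANSPORT_TABLESPACE=y TABLESPACES=lmt_xact FILE=tts_lmt_xact.dmp

-- 3. Copy tts_lmt_xact.dmp plus the tablespace's datafile(s) to the target,
--    then plug the tablespace in with the Import utility:
C:\> imp system/oracle TRANSPORT_TABLESPACE=y FILE=tts_lmt_xact.dmp DATAFILES='c:\oracle\oradata\targ\lmt_xact01.dbf'

-- 4. Reopen the tablespace for writes on both databases:
SQL> ALTER TABLESPACE lmt_xact READ WRITE;
```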
Prior to Oracle 10gR1, there was at least one other concern that needed to be addressed
before considering this approach:
What operating system do the source and target platforms use? If the source and
target database servers did not share the same operating system, I had little choice but to
utilize the export/import method I described previously. However, with Oracle 10gR1,
it is now possible to prepare transportable tablespaces that can be transported across
platform boundaries. Even more encouraging, Oracle 10gR1 has removed a key limitation to
cross-platform transportability: it is now possible to transport tablespaces between
source and target platforms regardless of those platforms' endian-ness.
Endian-Ness: A Modest Proposal About Numbers
If you have never encountered the concept of endian-ness before, you are not alone. It is
not something DBAs are forced to deal with very often unless we are dealing with cross-platform
transfers. (The term endian was actually derived from Jonathan Swift's Gulliver's
Travels, a barely-disguised political satire in which the political fortunes of the inhabitants of
Lilliput and Blefuscu were determined by which end of their breakfast-time boiled eggs they
opened first: either the little end, or the big end.)
Some clever IT wag must have decided this would be a great way to describe the
differences in how certain operating systems and platforms order the bytes of the
integer values within their systems, although there is a great amount of debate
on how this term came about. For example, 32-bit Windows NT platforms and 32-bit Linux
platforms both use a little-endian system of numbering (i.e. the least significant byte is
stored at the memory location with the lowest address), while Sun Solaris platforms use the
big-endian system (i.e. the most significant byte is stored at the memory location with the
lowest address).
Endian-ness thus becomes an important concern when converting data between platforms,
because all data must be examined during the conversion effort. Fortunately, Oracle 10gR1
has a method to scan the data and determine whether any endian issues will arise during
cross-platform conversion. For example, if the source platform is a Sun Solaris E3500 and
the target platform is running Red Hat Enterprise Linux 3.0 in 32-bit mode, then conversion is
required because the source platform is a big-endian environment, and the target is a
little-endian environment.
The good news is that it is simple to determine the endian-ness of Oracle database
platforms by running a query against each database on these platforms (see Listing 1.1 for
an example).
Transporting a Tablespace Between Different Platforms
Here is an example of how to perform a transportable tablespace operation using Oracle
10gR1, as long as the source and target platforms have the same endian-ness:
1. Make the tablespace read only on the source database by issuing the ALTER
TABLESPACE <tablespace_name> READ ONLY; command.
2. Create the metadata for the tablespace via the Oracle Export utility or, for even
better speed and flexibility, use the DataPump Export utility, or call the
DBMS_DATAPUMP packaged procedures to build the metadata.
3. Copy the tablespace's datafile(s) from the source platform to the target platform. For
this, I can simply use an OS command, FTP, or even the Oracle 10gR1
DBMS_FILE_TRANSFER procedures to either push or pull a copy of the
datafile(s) from the source to the target platform and place the tablespace's
datafile(s) in the appropriate directory.
4. On the target server, perform an Import or DataPump Import operation to add the
tablespace's metadata to the target database.
5. Finally, bring the tablespace back into read-write mode with the ALTER TABLESPACE
<tablespace_name> READ WRITE; command.
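Strung together, those steps reduce to something like the following. The directory object and dump file names mirror the listings later in this article; the target-side datafile path is an assumption for this sketch:

```sql
-- 1. Freeze the tablespace on the source database:
SQL> ALTER TABLESPACE lmt_xact READ ONLY;

-- 2. Export the tablespace's metadata with DataPump Export:
C:\> expdp system/oracle DIRECTORY=ttxports DUMPFILE=tts_export_1.dmp TRANSPORT_TABLESPACES=lmt_xact

-- 3. Copy the dump file and the tablespace's datafile(s) to the target
--    server, then plug the tablespace in with DataPump Import:
$ impdp system/oracle DIRECTORY=ttxports DUMPFILE=tts_export_1.dmp TRANSPORT_DATAFILES='/u01/app/oracle/oradata/targ/lmt_xact01.dbf'

-- 4. Reopen the tablespace for writes on both databases:
SQL> ALTER TABLESPACE lmt_xact READ WRITE;
```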
To illustrate, I have created two locally-managed tablespaces and some sample tables,
constraints, and indexes in each tablespace as shown in Listing 1.2. Table
SH.AGGR_SALES contains sample sales history transactions and resides in one tablespace,
and two reference tables, SH.CUST_TYPES and SH.SALES_AGGR_TYPES, reside in the other
tablespace. I have also added foreign key constraints between these tables to illustrate how
referential integrity enforcement issues affect transportable tablespace operations.
Handling Endian Conversion During Tablespace Transport Operations
Oracle 10gR1 can perform the conversion of the tablespace's datafiles either before the
creation of the transportable tablespace on the source platform, or after the tablespace has
been transported to the target platform. In either case, the new RMAN CONVERT command
is used to handle the conversion. Oracle 10gR1 also generates all the scripts necessary for
conversion, and these scripts generally need only minor adjustments by the DBA.
Listing 1.3 illustrates how to perform a tablespace transport between a Linux 32-bit
platform and a Microsoft Windows XP 32-bit platform. Since these platforms share the same
endian-ness, I can transfer tablespaces across these two platforms in either direction.
However, if I were transporting tablespaces between two platforms of different endian-ness,
I would have to either convert the tablespace on the source platform before transport, or
convert the tablespace's datafile(s) on the target platform after transport. For example,
Listing 1.4 shows the RMAN script to convert a transportable tablespace from one source
platform of little endian-ness (e.g. a Microsoft Windows XP 32-bit server) to another
target platform of big endian-ness (e.g. a Sun Solaris E350 server).
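Converting on the source platform before transport can be sketched as follows. The tablespace must already be READ ONLY, the destination platform name must match a value in V$TRANSPORTABLE_PLATFORM, and the output folder here is an assumption:

```sql
-- Hypothetical source-side conversion from a little-endian Windows server
-- bound for a big-endian Solaris target. The converted copies written under
-- TTXPORTS are what get shipped; the live datafiles are left untouched.
RMAN> CONVERT TABLESPACE lmt_xact
2>      TO PLATFORM 'Solaris[tm] OE (64-bit)'
3>      FORMAT 'c:\oracle\ttxports\%U';
```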
Transporting an Entire Database Between Different Platforms
The previous new features would be impressive enough, but Oracle 10gR1 does not stop
there. Not only can I now transport tablespaces across platforms regardless of endian-ness,
but I can also transport an entire database from one platform to another. Oracle 10gR1 will
generate all the scripts necessary to create the components on the target platform to start
up the database after its transport is complete. The generated scripts usually need only
small adjustments to reflect the available memory resources and storage destinations on the
target platform.
(Be sure to note the one significant limitation to this feature, however: unlike the
cross-platform tablespace transport features described in the previous section, Oracle 10g does
not permit the transport of an entire database between two platforms when those platforms
have different endian-ness.)
To illustrate this scenario, I will transport a complete database from one source platform (a
Microsoft Windows XP 32-bit server) to another target platform (a Red Hat Enterprise Linux
32-bit server). Since these platforms do not share the same operating system, Oracle will
automatically generate the proper RMAN CONVERT commands to translate the database's
components so they can be utilized on the target platform. As with the prior cross-platform
transportable tablespaces example, I can also choose to perform the conversion on either
the source platform or on the target platform instead if I wish to limit the time required to
complete the conversion.
Here is a summary of the steps required to transport the entire database from source to
target platforms. First, these steps need to happen on the source platform:
Shut down the database and then reopen it in READ ONLY mode. This ensures that all
of the database's control files and datafiles are frozen temporarily so that the
database can be transported. Of course, this does mean that the source database
will be available only for queries until the transport preparations are
completed, and DML against the database will be completely forbidden.
Verify that the database is indeed currently transportable. I will do this with the
DBMS_TDB.CHECK_DB and DBMS_TDB.CHECK_EXTERNAL packaged functions.
Listing 1.5 shows how I used these functions to determine if the database is
ready for transport; they also notify me if there are external objects like
external tables, BFILEs, and DIRECTORY objects that I will need to move separately
to the target database.
Run the appropriate RMAN CONVERT command script to prepare the database for
cloning on the target platform. RMAN will also create a script to create the new
database on the target platform; this script will prepare the cloned database's
control file to reflect the new locations of the online redo log files and
datafiles on the target server.
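The verification in the second step can be sketched as a short anonymous block, run while the database is open READ ONLY. The target platform name passed to the check must match a row in V$TRANSPORTABLE_PLATFORM:

```sql
-- CHECK_DB returns TRUE when nothing blocks transport to the named platform;
-- CHECK_EXTERNAL reports external tables, directories, and BFILEs that must
-- be moved by hand. Both print their findings via DBMS_OUTPUT.
SET SERVEROUTPUT ON
DECLARE
  ok BOOLEAN;
BEGIN
  IF DBMS_TDB.CHECK_DB('Linux IA (32-bit)', DBMS_TDB.SKIP_NONE) THEN
    DBMS_OUTPUT.PUT_LINE('Database is ready for transport.');
  END IF;
  ok := DBMS_TDB.CHECK_EXTERNAL;
END;
/
```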
In Listing 1.6, I have illustrated the results of preparing the database for transport and
eventual conversion on the target platform, while in Listing 1.7 I have shown how to
similarly convert the database's datafiles on the source platform before transporting the
datafiles to the target platform.
Once these steps are completed on the source server, I can then transfer the database's
components and conversion scripts to the target platform and complete the transfer:
Copy the database's datafiles from the specified conversion folder on the source
platform to the target platform. I can use an OS command, FTP, or even Oracle 10g's
DBMS_FILE_TRANSFER procedures to either push or pull a copy of the
datafiles from the source to the target platform and place them in the appropriate
directory.
Run the generated CREATE CONTROLFILE script on the target server to create the
new database's control file and mount the database, and then run the conversion
scripts (if any exist) on the target server to complete the conversion of the datafiles
(see Listing 1.8 for examples of the conversion process).
Once the datafile conversion has been completed, all that is left to do is open the
transported database with the ALTER DATABASE OPEN RESETLOGS; command, add any
TEMPFILEs, and perform some minor cleanup (recompilation of all PL/SQL code). See
Listing 1.9 for the conclusion of this example.
Oracle 10g Recovery Manager: A Final Sales Pitch
One final note about RMAN: in my discussions with my DBA colleagues over the past few
years, I have noticed that some of us are still holding back from using RMAN and instead
insist upon using user-managed backups. In my colleagues' defense, I have also found that
many DBAs tried out RMAN when it was first available in Oracle 8 and found it wanting
in those early days. Since then, however, RMAN's stability and reliability have improved
drastically. If you are still holding back even after hearing about these transportable
tablespace features, in addition to the panoply of other new RMAN features present in Oracle
10g (e.g. FLASHBACK DATABASE), please consider this: if you're going to implement
Automatic Storage Management (ASM) to manage your Oracle 10g storage, RMAN is the
only available method to back up and restore data that resides on ASM disk groups in an
ASM instance.
Next Steps
As this article has demonstrated, Oracle 10g's advanced transportability features offer the
capability to transfer individual tablespaces or even an entire database between platforms
regardless of the platforms' operating systems and with only minimal regard to the
platforms' endian-ness. In the next article in this series, I will demonstrate how Oracle
10gR2 offers the capability to transport tablespaces without incurring any appreciable
downtime by creating transportable tablespace sets directly from RMAN backups, and then I
will illustrate how to create data jukeboxes via Oracle's tablespace versioning features.
References and Additional Reading
Even though I have hopefully provided enough technical information in this article to
encourage you to explore these features, I also strongly suggest that you first review
the corresponding detailed Oracle documentation before proceeding with any experiments;
actual implementation of these features should commence only after a crystal-clear
understanding exists. Please note that I have drawn upon the following Oracle
documentation for the deeper technical details of this article:
B10750-01 Oracle Database 10gR1 New Features Guide
B10734-01 Oracle Database 10gR1 Backup and Recovery Advanced User's Guide
B10739-01 Oracle Database 10gR1 Administrator's Guide
B10749-02 Oracle Database 10gR1 Globalization Support Guide
B10770-02 Oracle Database 10gR1 Recovery Manager Reference
B10802-01 PL/SQL Packages and Types Reference
/*
|| Listing 1.1: Determining a platform's ENDIAN-ness and its potential impact
||              on the database's tablespace "transportability"
*/
--
-- What are the available Transportable Tablespace Platform possibilities?
--
TTITLE 'Current Transportable Tablespace Platform Attributes'
COL platform_name FORMAT A40 HEADING 'Platform Name'
COL endian_format FORMAT A12 HEADING 'ENDIAN|Format'
SELECT
     platform_name
    ,endian_format
  FROM v$transportable_platform
 ORDER BY platform_name
;
TTITLE OFF
>>> Results:

Current Transportable Tablespace Platform Attributes

[The platform-name column of this output was lost in this excerpt; the query
 returned 17 platforms, each marked with a Big or Little ENDIAN format.]

17 rows selected.
--
-- What's the current ENDIAN-ness of my database and platform?
--
TTITLE 'Current Database Platform ENDIAN-Ness'
COL name          FORMAT A16 HEADING 'Database Name'
COL endian_format FORMAT A12 HEADING 'ENDIAN|Format'
SELECT
     D.name
    ,TP.endian_format
  FROM
     v$transportable_platform TP
    ,v$database D
 WHERE TP.platform_name = D.platform_name
;
TTITLE OFF

>>> Results:

Current Database Platform ENDIAN-Ness

                 ENDIAN
Database Name    Format
---------------- ------------
ORCL102          Little
/*
|| Listing 1.2: Preparations for Transportable Tablespace demonstrations
*/
--
-- (The CREATE TABLE statements, PRIMARY KEY definitions, and the first
--  rows of this listing were lost in this excerpt)
--
INSERT INTO SH.AGGR_SALES VALUES('TER','MW01','WHL',201981,2186969.48);
INSERT INTO SH.AGGR_SALES VALUES('TER','MW02','WHL',212799,2304102.45);
INSERT INTO SH.AGGR_SALES VALUES('TER','WE01','WHL',305791,3310982.63);
INSERT INTO SH.AGGR_SALES VALUES('TER','WE02','WHL',335931,3637326.50);
INSERT INTO SH.AGGR_SALES VALUES('CPY','CMPY','SLG',98728,1068987.29);
INSERT INTO SH.AGGR_SALES VALUES('RGN','EAST','SLG',40139,434609.04);
INSERT INTO SH.AGGR_SALES VALUES('RGN','WEST','SLG',58589,634378.26);
INSERT INTO SH.AGGR_SALES VALUES('DST','EA00','SLG',19963,216151.38);
INSERT INTO SH.AGGR_SALES VALUES('DST','SE00','SLG',20176,218457.66);
INSERT INTO SH.AGGR_SALES VALUES('DST','MW00','SLG',26920,291478.99);
INSERT INTO SH.AGGR_SALES VALUES('DST','WE00','SLG',31669,342899.26);
INSERT INTO SH.AGGR_SALES VALUES('TER','EA01','SLG',9872,106890.07);
INSERT INTO SH.AGGR_SALES VALUES('TER','EA02','SLG',10091,109261.31);
INSERT INTO SH.AGGR_SALES VALUES('TER','SE01','SLG',11091,120088.91);
INSERT INTO SH.AGGR_SALES VALUES('TER','SE02','SLG',9085,98368.75);
INSERT INTO SH.AGGR_SALES VALUES('TER','MW01','SLG',12000,129931.2);
INSERT INTO SH.AGGR_SALES VALUES('TER','MW02','SLG',14920,161547.79);
INSERT INTO SH.AGGR_SALES VALUES('TER','WE01','SLG',2908,31486.66);
INSERT INTO SH.AGGR_SALES VALUES('TER','WE02','SLG',28761,311412.60);
INSERT INTO SH.AGGR_SALES VALUES('CPY','CMPY','ITL',5694,61652.35);
INSERT INTO SH.AGGR_SALES VALUES('RGN','EAST','ITL',3299,35720.25);
INSERT INTO SH.AGGR_SALES VALUES('RGN','WEST','ITL',2395,25932.10);
INSERT INTO SH.AGGR_SALES VALUES('DST','EA00','ITL',2037,22055.82);
INSERT INTO SH.AGGR_SALES VALUES('DST','SE00','ITL',1262,13664.43);
INSERT INTO SH.AGGR_SALES VALUES('DST','MW00','ITL',1992,21568.58);
INSERT INTO SH.AGGR_SALES VALUES('DST','WE00','ITL',403,4363.52);
INSERT INTO SH.AGGR_SALES VALUES('TER','EA01','ITL',1400,15158.64);
INSERT INTO SH.AGGR_SALES VALUES('TER','EA02','ITL',637,6897.18);
INSERT INTO SH.AGGR_SALES VALUES('TER','SE01','ITL',291,3150.83);
INSERT INTO SH.AGGR_SALES VALUES('TER','SE02','ITL',971,10513.60);
INSERT INTO SH.AGGR_SALES VALUES('TER','MW01','ITL',1005,10881.74);
INSERT INTO SH.AGGR_SALES VALUES('TER','MW02','ITL',987,10686.84);
INSERT INTO SH.AGGR_SALES VALUES('TER','WE01','ITL',129,1396.76);
INSERT INTO SH.AGGR_SALES VALUES('TER','WE02','ITL',274,2966.76);
COMMIT;
--
-- Create indexes and constraints
--
CREATE UNIQUE INDEX sh.aggr_sales_pk_idx
ON sh.aggr_sales (sales_aggr_type, geo_area, cust_type)
TABLESPACE lmt_ref
PCTFREE 5
STORAGE (INITIAL 128K);
ALTER TABLE sh.aggr_sales
ADD CONSTRAINT aggr_sales_pk
PRIMARY KEY (sales_aggr_type, geo_area, cust_type);
ALTER TABLE sh.aggr_sales
ADD CONSTRAINT aggr_sales_fk_aggr_type
FOREIGN KEY (sales_aggr_type)
REFERENCES sh.sales_aggr_types (item_type);
ALTER TABLE sh.aggr_sales
-- (remainder of listing elided in this excerpt)
/*
|| Listing 1.3: Transporting Tablespaces: Source Server Processing
*/
-- Create the DIRECTORY object on the source database server
C:> mkdir c:\oracle\ttxports
SQL> DROP DIRECTORY ttxports;
SQL> CREATE DIRECTORY ttxports AS 'c:\oracle\ttxports';
SQL> GRANT READ, WRITE ON DIRECTORY ttxports TO PUBLIC;
-- Make the source tablespace read-only
SQL> ALTER TABLESPACE lmt_xact READ ONLY;
-- Contents of DataPump Export parameter file (tts_export_1.dpectl):
JOB_NAME = TTS_EXPORT_1
DIRECTORY = TTXPORTS
DUMPFILE = tts_export_1.dmp
LOGFILE = tts_export_1.log
TRANSPORT_TABLESPACES = lmt_xact
TRANSPORT_FULL_CHECK = TRUE
-- Start a DataPump Export operation for the tablespace transport
EXPDP system/oracle PARFILE=c:\oracle\ttxports\tts_export_1.dpectl
--
-- Results of failed transportable tablespace metadata export. Note that
-- Oracle will not let a tablespace's metadata be created if tables within
-- the tablespace reference objects in other tablespaces not included in
-- the list of tablespaces to be transported; also, all tablespaces in the
-- transportable tablespace set must be in READ ONLY mode
--
Export: Release 10.2.0.1.0 - Production on Sunday, 16 April, 2006 12:38:36
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."TTS_EXPORT_1": system/********
PARFILE=c:\oracle\ttxports\tts_export_1.dpectl
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
--
-- (Intervening listing content survives only in fragments in this excerpt;
--  the recoverable portion of the generated transport script follows)
--
The following commands will create a new control file and use it
to open the database.
Data used by Recovery Manager will be lost.
The contents of online logs will be lost and all backups will
be invalidated. Use this only if online logs are damaged.

* There are many things to think about for the new database. Here
* is a checklist to help you stay on track:
* 1. You may want to redefine the location of the directory objects.
* 2. You may want to change the internal database identifier (DBID)
*    or the global database name for this database. Use the
*    NEWDBID Utility (nid).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SHUTDOWN IMMEDIATE
STARTUP UPGRADE PFILE='C:\ORACLE\INIT_RPTREPOS.ORA'
@@ ?/rdbms/admin/utlirp.sql
SHUTDOWN IMMEDIATE
STARTUP PFILE='C:\ORACLE\INIT_RPTREPOS.ORA'
-- The following step will recompile all PL/SQL modules.
-- It may take several hours to complete.
@@ ?/rdbms/admin/utlrp.sql
set feedback 6;
--
-- Oracle-generated script (RPTREPOS.CNV) containing commands needed to
-- convert all datafiles on the target platform. This file can be edited
-- to place datafiles in appropriate folders on the target
--
RUN {
CONVERT DATAFILE 'C:\ORACLE\ORADATA\ORCL102\SYSTEM01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT 'C:\ORACLE\RPTREPOS\SYSTEM01.DBF';
CONVERT DATAFILE 'C:\ORACLE\ORADATA\ORCL102\SYSAUX01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT 'C:\ORACLE\RPTREPOS\SYSAUX01.DBF';
CONVERT DATAFILE 'C:\ORACLE\ORADATA\ORCL102\UNDOTBS01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT 'C:\ORACLE\RPTREPOS\UNDOTBS01.DBF';
CONVERT DATAFILE 'C:\ORACLE\ORADATA\ORCL102\EXAMPLE01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT 'C:\ORACLE\RPTREPOS\EXAMPLE01.DBF';
CONVERT DATAFILE 'C:\ORACLE\ORADATA\ORCL102\LMT_XACT01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT 'C:\ORACLE\RPTREPOS\LMT_XACT01.DBF';
CONVERT DATAFILE 'C:\ORACLE\ORADATA\ORCL102\USERS01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT 'C:\ORACLE\RPTREPOS\USERS01.DBF';
CONVERT DATAFILE 'C:\ORACLE\ORADATA\ORCL102\LMT_REF01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT 'C:\ORACLE\RPTREPOS\LMT_REF01.DBF';
}
--
-- Oracle-generated initialization parameter file (INIT_RPTREPOS.ORA)
-- for use during database creation and conversion on target platform
--
# Please change the values of the following parameters:
control_files                 = "C:\ORACLE\RPTREPOS"
db_recovery_file_dest         = "C:\ORACLE\flash_recovery_area"
db_recovery_file_dest_size    = 2147483648
audit_file_dest               = "C:\ORACLE\ADUMP"
background_dump_dest          = "C:\ORACLE\BDUMP"
user_dump_dest                = "C:\ORACLE\UDUMP"
core_dump_dest                = "C:\ORACLE\CDUMP"
db_name                       = "RPTREPOS"
# Please review the values of the following parameters:
__shared_pool_size            = 54525952
__large_pool_size             = 4194304
__java_pool_size              = 4194304
__streams_pool_size           = 4194304
__db_cache_size               = 46137344
remote_login_passwordfile     = "EXCLUSIVE"
db_domain                     = ""
dispatchers                   = "(PROTOCOL=TCP) (SERVICE=orcl102XDB)"
# The values of the following parameters are from source database:
processes                     = 150
sga_max_size                  = 134217728
sga_target                    = 117440512
db_block_size                 = 8192
compatible                    = "10.2.0.1.0"
db_file_multiblock_read_count = 16
undo_management               = "AUTO"
undo_tablespace               = "UNDOTBS1"
shared_servers                = 2
max_shared_servers            = 5
job_queue_processes           = 10
open_cursors                  = 300
pga_aggregate_target          = 33554432
/*
|| Listing 1.7: Preparing a database for transport when conversion will
||              occur on the source platform
*/
--
-- RMAN session that converts database at source and prepares for transport
-- to target platform
--
C:\WINDOWS\system32>rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Sun Apr 16 14:16:38 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ORCL102 (DBID=3040314982)
RMAN> RUN {
2>      CONVERT DATABASE
3>        NEW DATABASE 'rptrepos'
4>        TRANSPORT SCRIPT 'c:\oracle\rptrepos\rptrepos.sql'
5>        TO PLATFORM 'Linux IA (32-bit)'
6>        db_file_name_convert 'c:\oracle\oradata\orcl102' 'c:\oracle\rptrepos';
7> }
Starting convert at 16-APR-06
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=153 devtype=DISK
External table HR.XT_EMPLOYEE_PAYCHECKS found in the database
The following commands will create a new control file and use it
to open the database.
Data used by Recovery Manager will be lost.
The contents of online logs will be lost and all backups will
be invalidated. Use this only if online logs are damaged.
db_file_multiblock_read_count = 16
undo_management               = "AUTO"
undo_tablespace               = "UNDOTBS1"
shared_servers                = 2
max_shared_servers            = 5
job_queue_processes           = 10
open_cursors                  = 300
pga_aggregate_target          = 33554432
/*
|| Listing 1.8: Completing the transport of the database on the target
||              platform. Note that this example converts the datafiles
||              on the target, and that the scripts generated on the
||              source platform were edited to reflect the appropriate
||              directories and file names on the target platform
*/
--
-- Completion of transfer to target database:
-- 1.) Copy converted datafiles to target platform
-- 2.) Edit conversion scripts to reflect file locations on target server
-- 3.) Run script to create control files on target server
-- 4.) Run conversion scripts to convert all datafiles on target server
-- 5.) Open database in RESETLOGS mode
-- 6.) Bring all datafiles into READ WRITE mode on target server
--
#####
# Edited RMAN script to complete datafile conversion
#####
RUN {
CONVERT DATAFILE '/u01/app/oracle/oradata/RPTREPOS/SYSTEM01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/app/oracle/oradata/RPTREPOS/system01.dbf';
CONVERT DATAFILE '/u01/app/oracle/oradata/RPTREPOS/SYSAUX01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/app/oracle/oradata/RPTREPOS/sysaux01.dbf';
CONVERT DATAFILE '/u01/app/oracle/oradata/RPTREPOS/UNDOTBS01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/app/oracle/oradata/RPTREPOS/undotbs01.dbf';
CONVERT DATAFILE '/u01/app/oracle/oradata/RPTREPOS/EXAMPLE01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/app/oracle/oradata/RPTREPOS/example01.dbf';
CONVERT DATAFILE '/u01/app/oracle/oradata/RPTREPOS/LMT_XACT01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/app/oracle/oradata/RPTREPOS/lmt_xact01.dbf';
CONVERT DATAFILE '/u01/app/oracle/oradata/RPTREPOS/USERS01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/app/oracle/oradata/RPTREPOS/users01.dbf';
CONVERT DATAFILE '/u01/app/oracle/oradata/RPTREPOS/LMT_REF01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/app/oracle/oradata/RPTREPOS/lmt_ref01.dbf';
}
#####
# Edited PFILE for use during creation of new database
# (the PFILE contents were lost in this excerpt; the generated control-file
#  script that follows was run next)
#####
The following commands will create a new control file and use it
to open the database.
Data used by Recovery Manager will be lost.
The contents of online logs will be lost and all backups will
be invalidated. Use this only if online logs are damaged.

LOGFILE
  GROUP 1 '/u01/app/oracle/oradata/RPTREPOS/redo01.log' SIZE 50M,
  GROUP 2 '/u01/app/oracle/oradata/RPTREPOS/redo02.log' SIZE 50M,
  GROUP 3 '/u01/app/oracle/oradata/RPTREPOS/redo03.log' SIZE 50M
DATAFILE
  '/u01/app/oracle/oradata/RPTREPOS/SYSTEM01.DBF',
  '/u01/app/oracle/oradata/RPTREPOS/UNDOTBS01.DBF',
  '/u01/app/oracle/oradata/RPTREPOS/SYSAUX01.DBF',
  '/u01/app/oracle/oradata/RPTREPOS/USERS01.DBF',
  '/u01/app/oracle/oradata/RPTREPOS/EXAMPLE01.DBF',
  '/u01/app/oracle/oradata/RPTREPOS/LMT_XACT01.DBF',
  '/u01/app/oracle/oradata/RPTREPOS/LMT_REF01.DBF'
CHARACTER SET AL32UTF8
;

Database altered.
SQL> ALTER TABLESPACE TEMP
  2  ADD TEMPFILE '/u01/app/oracle/oradata/RPTREPOS/temp01.tmp'
  3  SIZE 202375168
  4  AUTOEXTEND ON
  5  NEXT 655360
  6  MAXSIZE 32767M;

Tablespace altered.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP UPGRADE PFILE='C:\ORACLE\INIT_RPTREPOS.ORA'
SQL> @@ ?/rdbms/admin/utlirp.sql
<<< Results edited for brevity >>>
SQL> DOC
DOC>#######################################################################
DOC>#######################################################################
DOC>   utlirp.sql completed successfully. All PL/SQL objects in the
DOC>   database have been invalidated.
DOC>
DOC>   Shut down and restart the database in normal mode and run utlrp.sql
DOC>   to recompile invalid objects.
DOC>#######################################################################
DOC>#######################################################################
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP PFILE='/u01/app/oracle/oradata/RPTREPOS/INIT_RPTREPOS.ORA'
ORACLE instance started.
Total System Global Area  134217728 bytes
Fixed Size                  1218148 bytes
Variable Size              83888540 bytes
Database Buffers           46137344 bytes
Redo Buffers                2973696 bytes
Database mounted.
Database opened.
SQL> @@ ?/rdbms/admin/utlrp.sql
TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2006-04-19 20:53:24

<<< Results edited for brevity >>>

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2006-04-19 21:18:24