
Oracle Architecture

Database Creation
1. Decide where to store the pfile, datafiles, control files and redo log files. In this example they are stored under C:\.
2. Create the following folders (a command sketch follows this list):

bdump: Background process trace files.
cdump: Core dump files.
dpdump: Files created by the Data Pump export/import utility.
adump: Audit files written by the database auditing facility.
pfile: Instance parameter files.
udump: User SQL trace files.
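For example, on Windows the directories could be created like this; the base path C:\oracle\product\10.2.0\admin\dba1 is an assumption taken from the pfile location used in the later steps:

C:\>mkdir C:\oracle\product\10.2.0\admin\dba1\bdump
C:\>mkdir C:\oracle\product\10.2.0\admin\dba1\cdump
C:\>mkdir C:\oracle\product\10.2.0\admin\dba1\dpdump
C:\>mkdir C:\oracle\product\10.2.0\admin\dba1\adump
C:\>mkdir C:\oracle\product\10.2.0\admin\dba1\pfile
C:\>mkdir C:\oracle\product\10.2.0\admin\dba1\udump
C:\>mkdir C:\oracle\product\10.2.0\oradata\dba1

The oradata\dba1 folder holds the datafiles, control files and redo log files referenced in step 8.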

3. Copy an existing INIT.ORA file and paste it into the pfile directory.
4. Rename the file to initdba1.ora.
5. Password File Creation:
C:\>orapwd file=C:\oracle\product\10.2.0\db_1\database\pwddba1 password=admin entries=10
6. Instance Creation:
C:\>oradim -new -sid dba1 -intpwd admin -maxusers 10 -startmode auto -pfile
C:\oracle\product\10.2.0\admin\dba1\pfile\initdba1.ora
7. Oracle Base, Home and SID Creation
C:\>set oracle_base=C:\oracle
C:\>set oracle_home=C:\oracle\product\10.2.0\db_1
C:\>set oracle_sid=dba1
C:\>sqlplus /nolog
SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 12 16:25:14 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL> connect sys/admin as sysdba


Connected.
SQL> startup nomount pfile='C:\oracle\product\10.2.0\admin\dba1\pfile\initdba1.ora';
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 62915964 bytes
Database Buffers 96468992 bytes
Redo Buffers 7139328 bytes

8. Database Creation
SQL> create database dba1
controlfile reuse
logfile
group 1
('C:\oracle\product\10.2.0\oradata\dba1\redo1a.log',
'C:\oracle\product\10.2.0\oradata\dba1\redo1b.log') size 5m,
group 2
('C:\oracle\product\10.2.0\oradata\dba1\redo2a.log',
'C:\oracle\product\10.2.0\oradata\dba1\redo2b.log') size 5m
maxlogfiles 4
maxlogmembers 2
maxdatafiles 5
maxinstances 2
maxloghistory 0
datafile 'C:\oracle\product\10.2.0\oradata\dba1\system01.dbf' size 100m
SYSAUX
datafile 'C:\oracle\product\10.2.0\oradata\dba1\sysaux01.dbf' size 50m
undo tablespace UNDOTBS1
datafile 'C:\oracle\product\10.2.0\oradata\dba1\utable01.dbf' size 10m
default temporary tablespace temp
tempfile 'C:\oracle\product\10.2.0\oradata\dba1\temp01.dbf' size 10m
character set we8iso8859p1;
Database created.

Maxlogfiles: The maximum number of redo log groups that can be created for the database.
Maxlogmembers: The maximum number of members per redo log group.
Maxdatafiles: The maximum number of datafiles that can be created.
Maxinstances: The maximum number of instances that can access the database concurrently.
Maxloghistory: The maximum number of redo log files that can be recorded in the log history of the control
file.

9. Executing Scripts
SQL>@C:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\catalog.sql
SQL>@C:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\catproc.sql

PFile and SPFile


PFILE:
A PFILE is a text file that can be modified with a text editor. The Oracle server needs a parameter file (PFILE or SPFILE) to start an instance.
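For illustration, a minimal initdba1.ora for the dba1 database created in this guide might contain entries such as the following (the values and paths are assumptions based on the earlier steps, not a complete parameter file):

db_name=dba1
control_files=('C:\oracle\product\10.2.0\oradata\dba1\control01.ctl')
db_block_size=8192
sga_target=170M
undo_management=AUTO
undo_tablespace=UNDOTBS1
compatible=10.2.0.1.0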

SPFILE:
An SPFILE is a binary file that cannot be modified with a text editor. It is created from a PFILE.

How the Oracle Instance is initialized


When the Oracle instance starts, it first looks in the $ORACLE_HOME/dbs (UNIX, Linux)
or %ORACLE_HOME%\database (Windows) directory for the following files, in this order:
1. spfileSID.ora (SPFILE)
2. Default SPFILE (SPFILE)
3. initSID.ora (PFILE)
4. Default PFILE (PFILE)

SPFILE advantages
1. A parameter can be changed and the new value stored in the initialization file without restarting the
database (see the example after this list)
2. Reduces human error: parameters are checked before changes are accepted
3. An SPFILE can be backed up with RMAN (RMAN cannot back up a PFILE)
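For example, with an SPFILE in use a dynamic parameter can be changed in memory and persisted in the SPFILE in one statement (the parameter shown here is only an illustration):

SQL> ALTER SYSTEM SET log_archive_max_processes=3 SCOPE=BOTH;

SCOPE can be MEMORY (current instance only), SPFILE (takes effect at the next startup) or BOTH.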

How could I switch from SPFILE to PFILE and vice-versa?

Switch from SPFILE to PFILE:


1) CREATE PFILE FROM SPFILE;
2) Back up and delete the SPFILE
3) Restart the instance

Switch from PFILE to SPFILE:


1) CREATE SPFILE FROM PFILE='Location of the PFILE';
2) Restart the instance (the PFILE remains in the same directory but is no longer used; the SPFILE is used
instead)

Converting SPFILE to PFILE and vice-versa

This could be done in order to have a backup in the other format or to change the initialization file for the
database instance.

CREATE PFILE FROM SPFILE;


CREATE SPFILE FROM PFILE='/oradata/initORCL.ora';
CREATE SPFILE = '/oradata/spfileORCL.ora' FROM PFILE = '/oradata/initORCL.ora' ;

Archivelog
Check Whether the Database Is in ARCHIVELOG Mode
>ARCHIVE LOG LIST
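The archiving mode can also be checked from the data dictionary, for example:

SQL> SELECT log_mode FROM v$database;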

Changing the Archiving Mode


>SHUTDOWN IMMEDIATE

>STARTUP MOUNT

>ALTER DATABASE ARCHIVELOG;

>ALTER DATABASE OPEN;


Check the Flash Recovery Area Destination
>SHOW PARAMETER DB_RECOVERY_FILE_DEST;

Automatic and Manual Archiving


Automatic Archiving:

>ALTER SYSTEM SET LOG_ARCHIVE_START=TRUE SCOPE=SPFILE;

>STARTUP FORCE;

Manual Archiving:

>ALTER SYSTEM SET LOG_ARCHIVE_START=FALSE SCOPE=SPFILE;

>STARTUP FORCE;

Stop or Start Additional Archive Processes


>ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=4 SCOPE=SPFILE;

>STARTUP FORCE;

Manually Archiving Online Redo Log Files


>ALTER SYSTEM ARCHIVE LOG CURRENT;

Specify Multiple Archive Log Destinations


Use LOG_ARCHIVE_DEST_n to specify up to ten archival destinations, which can be on a:

Local Disk

Remote Location
> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1="LOCATION=C:\oracle\product\10.2.0\icelog_dba12\Location_1" SCOPE=SPFILE;

>STARTUP FORCE;

LOG_ARCHIVE_DEST_n Options
> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2="LOCATION=C:\oracle\product\10.2.0\icelog_dba12\Location_2 MANDATORY REOPEN" SCOPE=SPFILE;

> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3="LOCATION=C:\oracle\product\10.2.0\icelog_dba12\Location_2 OPTIONAL REOPEN=200" SCOPE=SPFILE;

>STARTUP FORCE;

Default REOPEN is 300 seconds.

Specifying a Minimum Number of Local Destinations


> ALTER SYSTEM SET LOG_ARCHIVE_MIN_SUCCEED_DEST=2 SCOPE=SPFILE;

Controlling Archiving to a Destination


Archiving to a destination can be disabled:

>ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;


Archiving to a destination can be enabled:

>ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

Dynamic Views
1. V$ARCHIVED_LOG: Displays archived log information from the control file.
2. V$ARCHIVE_DEST: For the current instance, describes all archive log destinations, their current values
and status.

> SELECT destination, binding,status

FROM v$archive_dest;

3. V$LOG_HISTORY: Contains log file information from the control file.


4. V$ARCHIVE_PROCESSES: Provides information about the state of the various ARCH processes for the
instance.

>SELECT * FROM v$archive_processes;

Controlfile
Control Files:
The control files of a database store the status of the physical structure of the database.

The control file contains the following types of information (queries to inspect it follow the list):

1. Archive log history
2. Tablespace and datafile records (filenames, datafile checkpoints, read/write status, offline or not)
3. Current redo log file sequence number
4. Database creation date
5. Database name
6. Current archivelog mode
7. Backup information
8. Database block corruption information
9. Database ID, which is unique to each database
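For example, the control file locations and the record sections the control file maintains can be inspected with:

SQL> SELECT name FROM v$controlfile;

SQL> SELECT type, records_total, records_used FROM v$controlfile_record_section;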

Multiplexing Control file using SPFILE:

1. SQL> ALTER SYSTEM SET control_files=
'C:\oracle\product\10.2.0\oradata\dba01\CONTROL01.CTL',
'C:\oracle\product\10.2.0\oradata\dba01\CONTROL02.CTL',
'C:\oracle\product\10.2.0\oradata\dba01\CONTROL03.CTL',
'C:\oracle\product\10.2.0\oradata\dba01\CONTROL04.CTL'
SCOPE=spfile;

2. SQL> shutdown;
3. C:\>copy C:\oracle\product\10.2.0\oradata\dba01\CONTROL01.CTL C:\oracle\product\10.2.0\oradata\dba01\CONTROL04.CTL
4. SQL> startup;

Multiplexing Control file using PFILE:

1. shutdown

2. C:\>copy C:\oracle\product\10.2.0\oradata\dba01\CONTROL01.CTL
C:\oracle\product\10.2.0\oradata\dba01\CONTROL04.CTL

3. Add the new control file to the CONTROL_FILES parameter in the PFILE:

control_files='C:\oracle\product\10.2.0\oradata\dba01\control01.ctl','C:\
oracle\product\10.2.0\oradata\dba01\control02.ctl','C:\oracle\product\10.
2.0\oradata\dba01\control03.ctl','C:\oracle\product\10.2.0\oradata\dba01\
control04.ctl'

4. startup pfile='C:\oracle\product\10.2.0\admin\dba01\pfile\init.ora';

Data Files
The following tablespaces exist in an Oracle 10g database:

• SYSTEM Tablespace

• SYSAUX Tablespace

• TEMPORARY Tablespace (TEMP)

• UNDO Tablespace (UNDOTBS1)

• USERS Tablespace (the default tablespace for user-created objects)


SYSTEM Tablespace:

All major database systems include something called a data dictionary. A data dictionary describes
the contents of the database. Which tables do I own? What columns are in those tables? Do I have
permission to view other tables? Do those tables have indexes? Where are the tables physically
located? What code runs when I execute a stored procedure? The data dictionary contains the answers
to all of these questions and more. The SYSTEM tablespace holds this data dictionary.
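For example, the questions above can be answered with data dictionary queries like the following (the table name EMP is only an illustration):

SQL> SELECT table_name FROM user_tables;

SQL> SELECT column_name, data_type FROM user_tab_columns WHERE table_name = 'EMP';

SQL> SELECT index_name FROM user_indexes WHERE table_name = 'EMP';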

SYSAUX Tablespace:

SYSAUX is the name of the compulsory tablespace, introduced in Oracle 10g, to support optional
database components (called occupants) like AWR, Statspack, Oracle Streams, etc.

TEMPORARY Tablespace:

Temporary tablespaces are used to store short-lived data, for example sort results.

UNDO Tablespace:

The UNDO tablespace is used to capture the "old image" of data for rollback.

USERS Tablespace:

A default users tablespace defines the tablespace in which user-created objects are placed by default.

Creating and Managing Tablespace:

In Oracle 10g, bigfile tablespaces can be created. A bigfile tablespace is built on a single datafile (or
temp file), which can be as large as 2^32 data blocks. So a bigfile tablespace that uses 8KB data blocks
can be as large as 32TB.
Bigfile tablespaces are usually used in very large databases: when a database has thousands of datafiles,
operations that update every datafile header, such as checkpoints, take a long time.

SQL> create bigfile tablespace users2

datafile 'C:\oracle\product\10.2.0\oradata\dba12\USERS02.DBF' size 25G;

In a smallfile tablespace, each datafile can be as large as 2^22 data blocks, so datafiles in a smallfile
tablespace that uses 8KB data blocks are limited to 32GB. A smallfile tablespace can have as many as
1,022 datafiles.

SQL> create tablespace users1

datafile 'C:\oracle\product\10.2.0\oradata\dba12\USERS03.DBF' size 25M;

Tablespace created.

Locally Managed Tablespace:

In a locally managed tablespace, the tablespace manages its own free and used space within a
bitmap structure stored in one of the tablespace's data files.

When creating a locally managed tablespace, you can specify the extent allocation method to be used.

AUTOALLOCATE - means that the extent sizes are managed by Oracle.

This might help conserve space but can lead to fragmentation. It is usually recommended for small
tables or for lightly managed systems.

SQL> CREATE TABLESPACE test


2 DATAFILE 'C:\oracle\product\10.2.0\oradata\dba12\TEST01.DBF' SIZE 2M

3 EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

UNIFORM - specifies that extents in the tablespace are allocated in a fixed, uniform size. The
extent size can be specified in M or K. The default size for UNIFORM extent allocation is 1M. Using
uniform extents usually minimizes fragmentation and leads to better overall performance.

SQL> CREATE TABLESPACE test1

2 DATAFILE 'C:\oracle\product\10.2.0\oradata\dba12\TEST02.DBF' SIZE 2M

3 EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100K;

Locally managed tablespaces have the following advantages over dictionary-managed tablespaces:

• Local management of extents tracks adjacent free space, eliminating the need to coalesce free extents.

• Reliance on the data dictionary is reduced. This minimizes access to the data dictionary,
potentially improving performance and availability.

Dictionary Managed Tablespace:

For a tablespace that uses the data dictionary to manage its extents, the Oracle server updates the
appropriate tables in the data dictionary whenever an extent is allocated or deallocated.

CREATE TABLESPACE USERDATA
DATAFILE 'C:\oracle\product\10.2.0\oradata\dba12\TEST03.DBF' SIZE 100M
EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE (INITIAL 1M NEXT 1M);


Segments in dictionary-managed tablespaces can have customized storage settings. This is more flexible
than locally managed tablespaces, but much less efficient.

Changing the storage settings:

The storage settings of a dictionary-managed tablespace can be altered; those of a locally managed
tablespace cannot.

ALTER TABLESPACE userdata MINIMUM EXTENT 2M;

ALTER TABLESPACE userdata

DEFAULT STORAGE (

INITIAL 2M

NEXT 2M

MAXEXTENTS 999);

Creating UNDO Tablespace:

CREATE UNDO TABLESPACE undo01

DATAFILE '/u01/oradata/undo101.dbf' SIZE 50M;

Creating TEMPORARY Tablespace:

SQL> create temporary tablespace temp2

2 tempfile 'C:\oracle\product\10.2.0\oradata\dba12\TEMP02.DBF' SIZE 5M;

SQL> select username,temporary_tablespace from dba_users where username='SYS';


USERNAME TEMPORARY_TABLESPACE

------------------------------ ------------------------------

SYS TEMP

After database creation, a default temporary tablespace can be set by creating a temporary tablespace
and then altering the database:

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

Restrictions on temporary tablespaces:

1. You cannot drop the default temporary tablespace.
2. You cannot take the default temporary tablespace offline.
3. A temporary tablespace cannot be converted to a permanent tablespace.

Taking Tablespace ONLINE/OFFLINE:

Taking a Tablespace OFFLINE:

ALTER TABLESPACE userdata OFFLINE;

Taking a Tablespace ONLINE:

ALTER TABLESPACE userdata ONLINE;

Some tablespaces cannot be taken OFFLINE:

1. SYSTEM Tablespace
2. SYSAUX Tablespace
3. Default Temporary Tablespace
4. Tablespace with active undo segments

Taking a Tablespace READ ONLY:

SQL>ALTER TABLESPACE userdata READ ONLY;

Taking a Tablespace READ WRITE:

SQL>ALTER TABLESPACE userdata READ WRITE;

Dropping Tablespaces:

SQL>DROP TABLESPACE userdata

INCLUDING CONTENTS AND DATAFILE;

Changing the size of the Datafile

SQL> ALTER DATABASE

DATAFILE 'C:\oracle\product\10.2.0\oradata\dba12\USERS02.DBF' RESIZE 15M;

Enable Automatic Extension

SQL> CREATE TABLESPACE test3

DATAFILE 'C:\oracle\product\10.2.0\oradata\dba12\TEST03.DBF' SIZE 10M

AUTOEXTEND ON NEXT 5M MAXSIZE 30M;

Adding DATA Files to a Tablespace

SQL> ALTER tablespace test


ADD DATAFILE 'C:\oracle\product\10.2.0\oradata\dba12\USERS02B.DBF' SIZE 10M;

Changing Location of Data Files

1. Take the tablespace offline


2. Copy the files or use the operating system command to move the file
3. Execute the following command:

ALTER TABLESPACE userdata
RENAME DATAFILE '/u01/oradata/userdata01.dbf'
TO '/u01/oradata/userdata/userdata01.dbf';

4. Bring the tablespace online


5. Delete the file at the old location if required.

Or

1. Shut down the database.


2. Copy the files or use the operating system command to move the file
3. Mount the Database
4. Execute the following command:

ALTER DATABASE RENAME
FILE '/u01/oradata/system01.dbf'
TO '/u03/oradata/system01.dbf';

5. Open the database.

Convert Dictionary Managed Tablespace to Locally Managed Tablespace:


SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('USERS');

Obtaining Tablespace Information

o Tablespace Information

• DBA_TABLESPACES

• V$TABLESPACE

• DBA_USERS

o Data file Information

• DBA_DATA_FILES

• V$DATAFILE

o Temp file Information

• DBA_TEMP_FILES

• V$TEMPFILE
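For example:

SQL> SELECT tablespace_name, status, extent_management FROM dba_tablespaces;

SQL> SELECT file_name, tablespace_name, bytes/1024/1024 AS size_mb FROM dba_data_files;

SQL> SELECT username, default_tablespace, temporary_tablespace FROM dba_users;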

Redo Log File


Redo Log File:

Redo log files record the changes made to data in the database buffer cache. Every database must
have at least two redo log groups.
Check Redo Log File Status:

SQL> select group#,status from v$log;

GROUP# STATUS

---------- ----------------

1 CURRENT

2 INACTIVE

3 INACTIVE

The log files (v$log) have the following status values:

UNUSED: The log has just been added and has never been used.
CURRENT: A valid log that is currently in use.
ACTIVE: A valid log that is not the current log but is still needed for crash recovery.
CLEARING: The log is being re-created as an empty log as a result of a DBA action.
CLEARING_CURRENT: The current log is being cleared of a closed thread. If a log stays in this status,
it could indicate a failure in the log switch.
INACTIVE: The log is no longer needed for instance recovery but may still be needed for media recovery.

The v$logfile view has a status column with these additional codes:

INVALID: The file is inaccessible.
STALE: The file's contents are incomplete (for example, after an instance is shut down with
SHUTDOWN ABORT or due to a system crash).
DELETED: The file is no longer used.
NULL (blank): The file is in use.

Adding Redo Log Groups:

SQL> ALTER DATABASE ADD LOGFILE GROUP 4

'C:\oracle\product\10.2.0\oradata\dba12\REDO04.LOG'

SIZE 10M;

Adding Redo Log Members:

SQL> ALTER DATABASE ADD LOGFILE MEMBER

'C:\oracle\product\10.2.0\oradata\dba12\REDO04b.LOG' TO GROUP 4;

Check the file Location of redo log files:

SQL> select group#,member from v$logfile;

GROUP# MEMBER

-------------------------------------------------------------

3 C:\ORACLE\PRODUCT\10.2.0\ORADATA\DBA12\REDO03.LOG

2 C:\ORACLE\PRODUCT\10.2.0\ORADATA\DBA12\REDO02.LOG

1 C:\ORACLE\PRODUCT\10.2.0\ORADATA\DBA12\REDO01.LOG

4 C:\ORACLE\PRODUCT\10.2.0\ORADATA\DBA12\REDO04.LOG

4 C:\ORACLE\PRODUCT\10.2.0\ORADATA\DBA12\REDO04B.LOG

Dropping Online Redo Log Member:


SQL> ALTER DATABASE DROP LOGFILE MEMBER

'C:\oracle\product\10.2.0\oradata\dba12\REDO04B.LOG';

Dropping Online Redo Log Groups:

SQL> ALTER DATABASE DROP LOGFILE GROUP 4;

Move Redo Log File Destinations

1. SQL>SHUTDOWN;
2. Copy the redo log file in new location.
3. SQL> STARTUP MOUNT;
4. SQL> ALTER DATABASE RENAME

FILE 'C:\oracle\product\10.2.0\oradata\dba12\REDO01.LOG'

TO 'C:\oracle\product\10.2.0\oradata\dba12\redologfile\REDO01.LOG';

5. SQL> alter database open;

Forcing Log Switch:

SQL> ALTER SYSTEM SWITCH LOGFILE;

Forcing Checkpoint:

SQL> ALTER SYSTEM CHECKPOINT;

Data Pump
SQL> CREATE DIRECTORY dpump AS 'C:\Dpump';

SQL> SELECT directory_path

FROM all_directories
WHERE directory_name = 'DPUMP';

SQL> CREATE USER nasir IDENTIFIED BY nasir#1

DEFAULT TABLESPACE Users

TEMPORARY TABLESPACE temp

QUOTA UNLIMITED ON users;

SQL> GRANT CREATE SESSION,CREATE TABLE TO nasir;

SQL> GRANT READ,WRITE ON DIRECTORY dpump TO scott;

SQL> GRANT READ,WRITE ON DIRECTORY dpump TO nasir;

Table Export/Import:

H:\>EXPDP scott/abc#1 DIRECTORY=dpump DUMPFILE=EMP_DEPT.DMP


LOGFILE=EMP_DEPT.LOG TABLES=emp,dept

Remap Schema

H:\>IMPDP nasir/nasir#1 DIRECTORY=dpump DUMPFILE=EMP_DEPT.DMP


LOGFILE=IMP_EMP_DEPT.LOG TABLES=emp REMAP_SCHEMA=scott:nasir

Export Metadata Only

H:\>EXPDP scott/abc#1 DIRECTORY=dpump DUMPFILE=EMP_DEPT.DMP


LOGFILE=EMP_DEPT.LOG TABLES=emp,dept CONTENT=METADATA_ONLY

H:\>IMPDP nasir/nasir#1 DIRECTORY=dpump DUMPFILE=EMP_DEPT.DMP


LOGFILE=EMP_DEPT.LOG TABLES=emp,dept CONTENT=METADATA_ONLY
REMAP_SCHEMA=scott:nasir

CONTENT={ALL | DATA_ONLY | METADATA_ONLY}


• ALL loads any data and metadata contained in the source. This is the default.

• DATA_ONLY loads only table row data into existing tables; no database objects are created.

• METADATA_ONLY loads only database object definitions; no table row data is loaded.

Schema Exports/Imports

H:\>EXPDP nasir/nasir#1 DIRECTORY=dpump DUMPFILE=SCOTT.DMP


LOGFILE=SCOTT.LOG SCHEMAS=scott

Remap Schema

H:\>IMPDP nasir/nasir#1 DIRECTORY=dpump DUMPFILE=SCOTT.DMP


LOGFILE=IMPSCOTT.LOG SCHEMAS=scott REMAP_SCHEMA=scott:moon

Remap Tablespace

H:\>IMPDP nasir/nasir#1 DIRECTORY=dpump DUMPFILE=SCOTT.DMP


LOGFILE=IMPSCOTT.LOG SCHEMAS=scott
REMAP_TABLESPACE=USER1:USER3 REMAP_TABLESPACE=USER2:USER4

Database Exports/Imports

H:\>EXPDP system/manager DIRECTORY=dpump DUMPFILE=DATABASE.DMP


LOGFILE=DATABASE.LOG FULL=Y

H:\>IMPDP system/sys DIRECTORY=dpump DUMPFILE=DATABASE.DMP


LOGFILE=DATABASE.LOG

FULL=Y REMAP_SCHEMA=system:system

Tablespace Exports/Imports
H:\>EXPDP 'sys/sys as sysdba' DIRECTORY= dpump DUMPFILE=TUSERS.DMP
LOGFILE=EXTUS.LOG TABLESPACES=USERS

H:\>IMPDP 'nasir/nasir#1' DIRECTORY= dpump DUMPFILE=TUSERS.DMP


LOGFILE=EXTUSERG TABLESPACES=USERS TABLE_EXISTS_ACTION=REPLACE
REMAP_SCHEMA=(scott:nasir moon:nasir rahi:nasir)

The TABLE_EXISTS_ACTION parameter for Data Pump impdp provides four options:
1. SKIP is the default: A table is skipped if it already exists.
2. APPEND will append rows if the target table’s geometry is compatible. This is the default when
the user specifies CONTENT=DATA_ONLY.
3. TRUNCATE will truncate the table, and then load rows from the source if the geometries are
compatible and truncation is possible. For example, it is not possible to truncate a table if it is the
target of referential constraints.
4. REPLACE will drop the existing table, then create and load it from the source.
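For example, to load additional rows into a table that already exists in the target schema (reusing the dump file from the earlier table export; the log file name is only an illustration):

H:\>IMPDP nasir/nasir#1 DIRECTORY=dpump DUMPFILE=EMP_DEPT.DMP LOGFILE=IMP_APPEND.LOG TABLES=emp REMAP_SCHEMA=scott:nasir TABLE_EXISTS_ACTION=APPEND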

Export and Import


C:\>exp

Export: Release 9.2.0.1.0 - Production on Thu Jan 29 23:58:01 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Username: scott

Password:

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.1.0 - Production

Enter array fetch buffer size: 4096 >


Export file: EXPDAT.DMP > test

(1)E(ntire database), (2)U(sers), or (3)T(ables): (2)U > 3

Export table data (yes/no): yes >

Compress extents (yes/no): yes >

Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...

Table(T) or Partition(T:P) to be exported: (RETURN to quit) > emp

. . exporting table EMP 14 rows exported

Table(T) or Partition(T:P) to be exported: (RETURN to quit) >

Export terminated successfully without warnings.

SQL> create table emp1 as select * from emp;

Table created.

SQL> drop table emp;

Table dropped.

C:\>imp

Import: Release 9.2.0.1.0 - Production on Fri Jan 30 00:02:51 2009


Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Username: scott

Password:

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production

With the Partitioning, OLAP and Oracle Data Mining options

JServer Release 9.2.0.1.0 - Production

Import file: EXPDAT.DMP > test.dmp

Enter insert buffer size (minimum is 8192) 30720>

Export file created by EXPORT:V09.02.00 via conventional path

import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

List contents of import file only (yes/no): no >

Ignore create error due to object existence (yes/no): no >

Import grants (yes/no): yes >

Import table data (yes/no): yes >

Import entire export file (yes/no): no > y

. importing SCOTT's objects into SCOTT

. . importing table "EMP" 14 rows imported


About to enable constraints...

Import terminated successfully without warnings.

Flashback
Flashback Database

Flashback Database is faster than traditional point-in-time recovery. Traditional recovery uses redo log files and backups.
Flashback Database is implemented using a new type of log file called Flashback Database logs. The Oracle database server
periodically logs before-images of data blocks in the Flashback Database logs. The data block images are used to quickly
back out changes to the database during Flashback Database.

RVWR Background Process

When Flashback Database is enabled, a new RVWR background process is started. This process is similar to the LGWR
(log writer) process. The new process writes Flashback Database data to the Flashback Database logs.
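Once Flashback Database is enabled, the space used by the Flashback Database logs and the oldest point that can be flashed back to can be checked, for example:

SQL> SELECT oldest_flashback_scn, oldest_flashback_time, flashback_size FROM v$flashback_database_log;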

Enabling Flashback Database:

1. Make sure the database is in ARCHIVELOG mode and check whether FLASHBACK_ON is YES:

SQL>SELECT flashback_on, log_mode


FROM v$database;

2. Configure the flash recovery area (if necessary) by setting the two parameters below (an example follows):

- db_recovery_file_dest

- db_recovery_file_dest_size
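For example (the destination path and size are only illustrations):

SQL> ALTER SYSTEM SET db_recovery_file_dest_size=2G SCOPE=BOTH;

SQL> ALTER SYSTEM SET db_recovery_file_dest='C:\oracle\flash_recovery_area' SCOPE=BOTH;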

3. Start the database in MOUNT mode and turn on the flashback feature:
SQL> STARTUP MOUNT;

SQL>ALTER DATABASE ARCHIVELOG; [If not in archive mode]

SQL> ALTER DATABASE FLASHBACK ON;

SQL> ALTER DATABASE OPEN;

4. SQL> create table test_flashback(name varchar(30));


5. SQL> insert into test_flashback values('*******TEST BEFORE*******');
6. SQL> commit;
7. SQL> select to_char(sysdate,'dd-mm-yy hh24:mi:ss') from dual;
8. SQL> SELECT current_scn FROM v$database;
9. SQL> insert into test_flashback values('*******TEST AFTER*******');
10. SQL> commit;
11. SQL> select * from test_flashback;
12. SQL> drop table test_flashback;
13. SQL> shutdown immediate;
14. SQL> startup mount;
15. SQL> FLASHBACK DATABASE to timestamp to_timestamp('16-07-2008 13:59:45', 'DD-MM-YYYY
HH24:MI:SS');

OR

SQL> FLASHBACK DATABASE TO SCN 3726625;

16. SELECT current_scn FROM v$database;


17. SQL> ALTER DATABASE OPEN RESETLOGS;
18. SQL> SELECT * FROM test_flashback;

Another Example:

1. SQL> conn moon/moon#1


2. SQL> create table test_flash(id number);
3. SQL> commit;
4. SQL> drop table test_flash;
5. SQL> flashback table test_flash to before drop;
6. SQL> select * from test_flash;

Roles

Creating a Role:

create role clerk;
Assign Privilege to role:

grant create session,create table to clerk;

Assign More Privileges to the Role:

SQL>Create table test(id number);

SQL>grant select,insert,update on test to clerk;

Add Another Layer to the Hierarchy:

SQL>CREATE ROLE manager;

SQL> GRANT clerk TO manager;

SQL> GRANT DELETE ON test TO manager;

Assigning Role to user:

GRANT clerk TO liton;

GRANT manager TO arif;

Granting a System Privilege:

GRANT CREATE SESSION to manager WITH ADMIN OPTION;

Revoke Role From a User:

REVOKE manager FROM arif;

Drop a Role:

DROP ROLE manager;


Obtaining Role Information (an example query follows the list):

• DBA_ROLES
• DBA_ROLE_PRIVS
• ROLE_ROLE_PRIVS
• DBA_SYS_PRIVS
• ROLE_SYS_PRIVS
• ROLE_TAB_PRIVS
• SESSION_ROLES
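For example, to see which roles have been granted to a user (the user name is taken from the earlier grant):

SQL> SELECT grantee, granted_role, admin_option FROM dba_role_privs WHERE grantee = 'LITON';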
