
DATABASE ARCHITECTURE

Most of the ORACLE database software, including its core component, the kernel, is written in the C language. When a C program is compiled, the executable is stored on disk, but for execution it is loaded into memory; a running program is known as a process. So when we talk about an ORACLE instance we are talking about ORACLE processes and memory. Instance architecture means the memory area reserved by ORACLE and the processes running inside it. Once ORACLE has reserved the memory, it loads certain processes; after loading, parsing is done, and after parsing, execution is done. The parsing process comprises the following activities:

Check the syntax
Privilege check
Look up the data dictionary
Load the statement into memory
Invoke the optimizer

Soft parsing means the statement (identified by its hash value) is already available in the shared pool, so the parsed form is reused and the data can usually be served directly from the database buffer cache. Hard parsing means the statement must be parsed from scratch; I/O will happen, data will be fetched from the datafiles into the database buffer cache and then sent to the user. In short, if the hash (#) value for the SQL statement is already available, soft parsing is done; if it is not available, hard parsing happens.
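As a quick check of parsing activity, the parse statistics and cached statements can be inspected; this is a minimal sketch, assuming a SYSDBA session on a running instance:
SQL>select name, value from v$sysstat where name like 'parse count%';
SQL>select sql_text, parse_calls, loads, executions from v$sql where rownum <= 5;
A high ratio of "parse count (hard)" to "parse count (total)" suggests statements are not being reused from the shared pool.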
Some processes are loaded on requirement and demand. The combination of memory and processes created by ORACLE is known as an ORACLE instance.
In other words, a predetermined set of memory structures and processes created by ORACLE is known as an ORACLE instance.


OR
The memory and processes are collectively called an Oracle instance.
The data files that are opened are collectively called the database.
MULTIPLE INSTANCE
Consider a situation where numerous users are available and all of them are sending requests to access the data, but only a few Oracle instances are available. In this case the requests will be arranged in a queue for processing, which reduces processing efficiency. If we want to improve processing efficiency and reduce the queue, we have to create multiple instances. Hence an increased number of instances running in parallel enhances or improves the processing.
Multiple instances were first introduced in Oracle 6.0.2 but were not implemented in real business; practical implementation of multiple instances was introduced from Oracle 7 onwards.
PROCESS SCALABILITY
More instances running in parallel give more processing scalability.
HOW MANY INSTANCES CAN RUN IN PARALLEL AT MAXIMUM?
From Oracle 7 up to Oracle 10g Release 1 it was possible to create a maximum of 63 instances which could run in parallel. From Oracle 10g Release 2 up to Oracle 11g Release 1 we could create up to 100 instances running in parallel. From Oracle 11g Release 2 onwards we can create an unlimited number of instances running in parallel.
NOTE: from Oracle 7 to 8i this feature was known as
Oracle Parallel Server (OPS)
Oracle Parallel Instances (OPI)
From Oracle 9i onwards it is known as RAC (Real Application Clusters).
SMP stands for Symmetric Multiprocessing, so called because the processors inside the machine are all of almost the same kind.
CLUSTER SYSTEM
It is a system in which more than one machine (minimum two) is grouped or connected together in such a way that it gives the feeling of a single machine.

TYPES OF CLUSTER SYSTEM
There are three types:
Shared disk
MPP (Massively Parallel Processor)
GRID (10g)
A cluster system stands for joining together more than one machine. Each machine, with a single CPU or multiple CPUs, grouped or combined together with the others, forms the cluster. But just making the physical connectivity does not make the cluster; physical connectivity is only a requirement for clustering. It is the cluster software together with the physical connectivity that completes the cluster.
The cluster software given by ORACLE is called Oracle Clusterware, also known as GRID. Up to Oracle 11g Release 1 the cluster software and the storage management software were installed separately; the storage management software is called ASM (Automatic Storage Management).
Note: In Oracle 11g Release 2 the cluster software and ASM are combined together and the combination is known as Grid Infrastructure.
Vendor-specific cluster software often performs very well because it is heavily optimized for the vendor's operating system, and the Oracle software sits on top of the vendor's cluster software for operation.
A few cluster software products from Oracle's vendors are:
Veritas Storage Foundation for Oracle RAC, by Veritas
HACMP by IBM (for AIX version 6 it is known as PowerHA by IBM).
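To see whether the current instance is running as part of a cluster, the following checks can be used; this is a small sketch, assuming a SQL*Plus session with SYSDBA privileges:
SQL>show parameter cluster_database
SQL>select instance_name, host_name, parallel from v$instance;
PARALLEL shows YES when the instance is part of a RAC configuration.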
SHARED DISK SYSTEM
All four machines have their own local disk holding their individual operating system. Only the database is shared by all four machines: the database is on the central disk, but the instances run on the individual machines.

IS IT POSSIBLE TO HAVE MULTIPLE INSTANCES ON A SINGLE MACHINE?
YES and NO, depending on the context. On a single machine there is something called VIRTUALIZATION. By means of the virtualization technique we can virtually create multiple machines inside a single machine: physically the machine is only one, but using this technique the single machine acts like multiple machines in the background, and each virtually created machine behaves like an independent machine. However, for the database we will hardly get any benefit, because the same CPU and the same memory will be shared; so there is little use in running multiple instances on the same machine using the virtualization technique.

Suppose we are using a very powerful server, say a 128-CPU machine. On this single machine we can run different kinds of applications successfully at the same time using the virtualization technique; for example, we can run Linux, Solaris and Windows at the same time. Since the machine is highly powerful and highly capable, we can break the single machine into many different virtual machines and run different operations at the same time.
ADVANTAGES:

High scalability
High availability - if one of the instances fails, the processing does not terminate; it continues with the other healthy instances. Users will not even realize the failure. At most, users may experience a speed problem, because the load will then be shared by the 3 remaining healthy instances.

Instance fail safe - consider a request being processed by instance 1; the processing has completed only up to 60% when the instance suddenly crashes. New requests will be shared by the remaining 3 instances, so there is no problem for newly arriving requests. But what happens to the request which was processed halfway, that is 60%?
This half-processed request will also be transferred to one of the remaining instances for completion. This facility was introduced from Oracle 8i onwards and is known as instance fail safe.
Before Oracle 8i, that is from Oracle 7 up to 8i, if an instance crashed while a request was halfway processed, the failed request came back with a failure message to the user who submitted it; the user had to resubmit the request, and after resubmission it was processed by the other healthy instances.
In Oracle 8i a feature or mechanism known as instance fail safe was added. Whenever a user submits a request, the request goes to the shared cache architecture (SCA). This is a kind of global memory available across all the machines or instances in the cluster; in reality one copy of it is available on each machine, but the user feels there is only one memory. The request goes to the shared cache architecture, which in turn allocates the request to an instance for processing. Now if the SCA has allocated the request to instance 1 and instance 1 crashes after processing the request halfway, a failure message is still generated, but it does not reach the user: it is captured by the SCA. This is an intelligent infrastructure which knows what to do. This type of failure is therefore not handled by the user but by the SCA, which automatically allocates the request to a healthy instance without the knowledge of the user. There may be a slight delay in processing the request due to the resubmission. This is called instance fail safe, meaning the instance may fail but the request will be safe.
STORAGE FAIL SAFE MECHANISM - Suppose the database or the storage device itself has crashed. A disk storage device consists of several disks inside; for example, a storage device may contain 12 disks.
The volume manager is software, part of the operating system, which creates volumes out of the disks. Here 6 disks can be grouped together and named, say, volume 1; similarly the other 6 disks can be grouped together and named volume 2. This is how two volumes can be created inside one single storage device. The database can then be created in one of the volumes, say vol1.
The volume manager software has a special feature called mirroring. Mirroring means that whatever is created and stored in volume1 will also be available in volume2; that is, a replica of volume1 is maintained in volume2. Hence even if volume1 fails, processing continues with volume2. This is called redundancy (RAID - redundant array of independent disks), and the process is called the disk fail safe or storage fail safe mechanism.
RAID 0: Striped Disk Array without Fault Tolerance:-
RAID Level 0 requires a minimum of 2 drives to implement.
Characteristics & Advantages:
RAID 0 implements a striped disk array: the data is broken down into blocks and each block is written to a separate disk drive. I/O performance is greatly improved by spreading the I/O load across many channels and drives.
1) Best performance is achieved when data is striped across multiple controllers with only one drive per controller
2) No parity calculation overhead is involved
3) Very simple design
4) Easy to implement
Disadvantages:
1) Not a "true" RAID because it is NOT fault-tolerant
2) The failure of just one drive will result in all data in the array being lost
3) Should never be used in a mission critical environment

RAID 1: MIRRORING AND DUPLEXING:-
For highest performance, the controller must be able to perform two concurrent separate Reads per mirrored pair or two duplicate Writes per mirrored pair.
RAID Level 1 requires a minimum of 2 drives to implement
Characteristics & Advantages:
1) One Write or two reads possible per mirrored pair
2) Twice the Read transaction rate of single disks, same Write transaction rate as
single disks
3) 100% redundancy of data means no rebuild is necessary in case of a disk failure,
just a copy to the replacement disk
4) Transfer rate per block is equal to that of a single disk
5) Under certain circumstances, RAID 1 can sustain multiple simultaneous drive
failures
6) Simplest RAID storage subsystem design
Disadvantages:
1) Highest disk overhead of all RAID types (100%) - inefficient
2) Typically the RAID function is done by system software, loading the CPU/Server and possibly degrading throughput at high activity levels. Hardware implementation is strongly recommended
3) May not support hot swap of a failed disk when implemented in "software"

RAID 3: PARALLEL TRANSFER WITH PARITY:-

The data block is subdivided ("striped") and written on the data disks. Stripe
parity is generated on Writes, recorded on the parity disk and checked on Reads.
RAID Level 3 requires a minimum of 3 drives to implement
Characteristics & Advantages:
1) Very high Read data transfer rate
2) Very high Write data transfer rate
3) Disk failure has an insignificant impact on throughput
4) Low ratio of ECC (Parity) disks to data disks means high efficiency
Disadvantages:
1) Transaction rate equal to that of a single disk drive at best (if spindles are
synchronized)
2) Controller design is fairly complex
3) Very difficult and resource intensive to do as a "software" RAID

RAID 5: INDEPENDENT DATA DISKS WITH DISTRIBUTED PARITY BLOCKS:-
Each entire data block is written on a data disk; parity for blocks in the same
rank is generated on Writes, recorded in a distributed location and checked on
Reads.
RAID Level 5 requires a minimum of 3 drives to implement
Characteristics & Advantages:
1) Highest Read data transaction rate
2) Medium Write data transaction rate
3) Low ratio of ECC (Parity) disks to data disks means high efficiency
4) Good aggregate transfer rate
Disadvantages:
1) Disk failure has a medium impact on throughput
2) Most complex controller design
3) Difficult to rebuild in the event of a disk failure (as compared to RAID level
1)
4) Individual block data transfer rate same as single disk

RAID 10: VERY HIGH RELIABILITY COMBINED WITH HIGH PERFORMANCE:-
RAID Level 10 requires a minimum of 4 drives to implement
Characteristics & Advantages:
1) RAID 10 is implemented as a striped array whose segments are RAID 1 arrays
2) RAID 10 has the same fault tolerance as RAID level 1
3) RAID 10 has the same overhead for fault-tolerance as mirroring alone
4) High I/O rates are achieved by striping RAID 1 segments
5) Under certain circumstances, RAID 10 array can sustain multiple simultaneous
drive failures

6) Excellent solution for sites that would have otherwise gone with RAID 1 but need
some additional performance boost
Disadvantages:
1) Very expensive / high overhead
2) All drives must move in parallel to proper track lowering sustained performance
3) Very limited scalability at a very high inherent cost
RAID 0+1: HIGH DATA TRANSFER PERFORMANCE:-
RAID Level 0+1 requires a minimum of 4 drives to implement
Characteristics & Advantages:
1) RAID 0+1 is implemented as a mirrored array whose segments are RAID 0 arrays
2) RAID 0+1 has the same fault tolerance as RAID level 5
3) RAID 0+1 has the same overhead for fault-tolerance as mirroring alone
4) High I/O rates are achieved thanks to multiple stripe segments
5) Excellent solution for sites that need high performance but are not concerned
with achieving maximum reliability
Disadvantages:
1) RAID 0+1 is NOT to be confused with RAID 10. A single drive failure will cause
the whole array to become, in essence, a RAID Level 0 array
2) Very expensive / high overhead
3) All drives must move in parallel to proper track lowering sustained performance
4) Very limited scalability at a very high inherent cost
GRID: grid is just an enhancement of the shared disk system with a little extra functionality; it is a software evolution, not a hardware evolution.
TRENDS PROMOTING GRID COMPUTING
Hardware trends
Software trends
Virtualization
Grid momentum

Hardware trends:


Virtualization - it is the mechanism on which the entire grid system is based.
Concept: all the CPU and memory resources are pooled from each machine and aggregated into a virtual pool; this is called virtualization. All the resources of all the machines in the cluster are pooled into a single virtual machine. This is also called resource pooling, virtualization or provisioning. It is used for load balancing. The same idea is known as cloud computing.
SYSTEM GLOBAL AREA: - SGA stands for System Global Area. It is a group of shared memory structures (the SGA components) that contain data and control information for an ORACLE database instance. It is shared by all the background database processes. Information about the SGA can be seen with the following commands:


SQL>SHOW SGA
Total System Global Area   36437964 bytes
Fixed Size                  6543794 bytes
Variable Size              19521536 bytes
Database Buffers           16777216 bytes
Redo Buffers                  73728 bytes

SQL>sho parameter SGA


DB_CACHE_SIZE: The size of the cache of standard blocks.
LOG_BUFFER: The number of bytes allocated for the redo log buffer cache.
SHARED_POOL_SIZE: The size in bytes of the area devoted to shared SQL and PL/SQL.
LARGE_POOL_SIZE: The size of the large pool; the default is zero.
NOTE: the sga_target size should not exceed sga_max_size.
SQL>alter system set sga_target=300000000 scope=both;
SQL>alter system set sga_max_size=300000000 scope=spfile;
SQL>alter system set pga_aggregate_target=300000000 scope=both;

Allocation of memory regions in the SGA is not done block by block. Allocation or deallocation in the SGA is done in chunks known as granules. A granule is the minimum chunk of memory that is allocated or deallocated. The granule size is determined by the total size of the SGA:

The granule size is 4 MB if the total SGA size is less than 1 GB.
The granule size is 16 MB if the total SGA size is greater than 1 GB.
On 32-bit Windows the granule size is 8 MB for an SGA larger than 1 GB.
You can get information about granules as follows:
SQL>desc V$BUFFER_POOL
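The granule size actually in use can also be checked per SGA component; a small sketch, assuming a running instance and a SYSDBA session:
SQL>select component, granule_size, current_size from v$sga_dynamic_components;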
A database can be configured to run in dedicated server mode or shared server mode.
DEDICATED SERVER-

SHARED SERVER-

PROGRAM GLOBAL AREA:-
The PGA is private to each server and background process; therefore there is one PGA for each server process. In other words, the PGA is memory reserved for each user process that connects to an ORACLE database. The PGA is allocated when the process is created and deallocated when the process is terminated.
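The total PGA target and its current usage can be checked as shown below; a minimal sketch, assuming automatic PGA memory management and a SYSDBA session:
SQL>show parameter pga_aggregate_target
SQL>select name, value from v$pgastat where name in ('aggregate PGA target parameter', 'total PGA allocated', 'total PGA inuse');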

WHEN THE SERVER IS RUNNING IN DEDICATED MODE THE PGA CONSISTS OF:
Session information - includes user privileges and performance statistics for the session.
Sort area
Cursor state - indicates the stage in processing of the SQL statements that are currently used by the session.
Stack space - contains other session variables.

WHEN THE SERVER IS RUNNING IN SHARED MODE THE PGA CONSISTS OF:
Stack
UGA (User Global Area)
Session information
Sort area
Cursor state
All these processes run against the SGA. There are many background processes available; some of them are mandatory. The mandatory processes are:
Smon - system monitor
Pmon - process monitor
Dbwr - database writer
Lgwr - log writer
Ckpt - checkpoint
Memory monitor
Mandatory means that if any of these processes gets killed, the entire ORACLE instance will crash; in that case the database will not open even though everything else is available. The main memory, the SGA, contains many sub-memory areas such as the java pool, database buffer cache, shared pool, streams pool, redo log buffer, large pool etc.
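The background processes actually running in the instance can be listed from V$BGPROCESS; a small sketch, assuming a SYSDBA session (the PADDR filter keeps only started processes):
SQL>select name, description from v$bgprocess where paddr <> '00' order by name;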
In total, four kinds of files are available in the database architecture, namely the control file, redo log files, data files, and the SP (server parameter) file. These four kinds of files are internal to the database. In addition there are some external files, namely the pfile and the password file, which is used for authentication purposes. Data files contain the data coming from the tables, and tables are associated with objects like indexes and views.
The first block of every datafile is the header. The header includes important information such as file size, block size, tablespace, and creation timestamp. Whenever the database is opened, Oracle checks that the datafile header information matches the information stored in the control file; if it does not, then recovery is necessary. Oracle reads the data in a datafile during normal operation and stores it in the buffer cache. For example, assume that a user wants to access some data in a table. If the requested information is not already in the buffer cache, Oracle reads it from the appropriate datafile and stores it in memory.
The control file keeps information about the database structure and contains the operating system filenames of all other files that constitute the database. It is a very small file, and we can create up to 8 mirror copies of it. Practically, the control file keeps information about the database structure: the data files along with their locations, the redo log files along with their locations, archived redo log file names, the database name, the timestamp of database creation, tablespace names, the current log sequence number, checkpoint information, log history, backup information etc. It is small because it is like a dictionary for the database: it holds no data, only information about the database. It is called the control file because it controls the entire database functionality. If the control file is lost, the database will not work even though everything else is available, so it is highly recommended to always keep mirror copies of the control file.


Redo log files contain information about the commands which are executed by the user or on behalf of the user; in other words, they record the moment-to-moment activity. All commands go to the redo log files via the redo log buffer. Because these files contain everything that is currently happening, redo log files are used for recovery purposes. Suppose a database instance crashes; then automatic instance recovery takes place, and without the redo log files instance recovery is not possible. At least two redo log files are compulsory. The question is: why do we need at least two redo log files?
If we look at the database architecture, a background process known as Ckpt is connected to three kinds of files, namely the control file, the redo log files and the data files. This means that Ckpt synchronizes these three files. We can understand this with the help of the following example. Suppose that we run the command:
Insert into table values (----);
Note that the redo log files are maintained by the log writer, the data files are maintained by the database writer, and the control file is updated by the checkpoint process. When we insert values through this SQL statement, the statement goes into the redo log file, the actual data goes into the data file, and the header information goes into the control file. Even though these three files are maintained separately, they are all connected through Ckpt, which synchronizes them. The files do three different jobs for the same task - the redo log file stores the SQL statement, the data file stores the data, and the control file keeps the header information - so they must be synchronized. When they are synchronized is based on the following parameters:
SQL>sho parameter log_check

log_checkpoint_interval      0
log_checkpoint_timeout       1800
log_checkpoint_to_alert      FALSE

SQL> DESC DBA_DATA_FILES


Name                           Null?    Type
------------------------------ -------- -------------
FILE_NAME                               VARCHAR2(513)
FILE_ID                                 NUMBER
TABLESPACE_NAME                         VARCHAR2(30)
BYTES                                   NUMBER
BLOCKS                                  NUMBER
STATUS                                  VARCHAR2(9)
RELATIVE_FNO                            NUMBER
AUTOEXTENSIBLE                          VARCHAR2(3)
MAXBYTES                                NUMBER
MAXBLOCKS                               NUMBER
INCREMENT_BY                            NUMBER
USER_BYTES                              NUMBER
USER_BLOCKS                             NUMBER
ONLINE_STATUS                           VARCHAR2(7)

SQL> desc dba_tablespaces


Name                           Null?    Type
------------------------------ -------- -------------
TABLESPACE_NAME                NOT NULL VARCHAR2(30)
BLOCK_SIZE                     NOT NULL NUMBER
INITIAL_EXTENT                          NUMBER
NEXT_EXTENT                             NUMBER
MIN_EXTENTS                    NOT NULL NUMBER
MAX_EXTENTS                             NUMBER
MAX_SIZE                                NUMBER
PCT_INCREASE                            NUMBER
MIN_EXTLEN                              NUMBER
STATUS                                  VARCHAR2(9)
CONTENTS                                VARCHAR2(9)
LOGGING                                 VARCHAR2(9)
FORCE_LOGGING                           VARCHAR2(3)
EXTENT_MANAGEMENT                       VARCHAR2(10)
ALLOCATION_TYPE                         VARCHAR2(9)
PLUGGED_IN                              VARCHAR2(3)
SEGMENT_SPACE_MANAGEMENT                VARCHAR2(6)
DEF_TAB_COMPRESSION                     VARCHAR2(8)
RETENTION                               VARCHAR2(11)
BIGFILE                                 VARCHAR2(3)
PREDICATE_EVALUATION                    VARCHAR2(7)
ENCRYPTED                               VARCHAR2(3)
COMPRESS_FOR                            VARCHAR2(12)


log_checkpoint_interval - this parameter gives a number of operating system blocks, not database blocks. If it is 0 it means the parameter is not set. Note that we cannot set both parameters to zero. Here the value of log_checkpoint_timeout is 1800 seconds, which means that every 1800 seconds the background checkpoint process wakes up and synchronizes the status of these three files. It looks at the commands in the redo log file (one redo log file is held by the checkpoint while the other keeps recording the commands coming from the users). Then it checks whether the corresponding data has been written to the data files; if it has not been written, it forces the database writer to write it. Ckpt makes sure that after the command the data is recorded in the data file and the corresponding header information is recorded in the control file; once that is recorded, the checkpoint is over. Throughout the period of synchronization one redo log file is held by the checkpoint. We need two redo log files because while one is under the checkpoint, the other is recording the commands coming from the users.
After the checkpoint is over, all three files are synchronized and the redo information is no longer required; it is kept only until the data is completely written and the control file knows about it. Once that is done, the redo log file information can be thrown away. This means that redo log files are not continuous files; they are cyclic files. As soon as the checkpoint is over, new information overwrites the old and the previous information is lost.
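The cyclic use of the redo log groups can be observed from V$LOG and V$LOGFILE; a small sketch, assuming a SYSDBA session:
SQL>select group#, sequence#, members, status from v$log;
SQL>select group#, member from v$logfile;
The group with STATUS = CURRENT is being written by LGWR, while ACTIVE groups are still needed for checkpointing and INACTIVE groups may be overwritten.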
INSTANCE RECOVERY - Suppose that one redo log file (log-1) is under the Ckpt process and the other redo log file (log-2) is recording the data, and log-1 contains 100 commands of which 50 are committed. At this moment somebody issues the command SHUTDOWN ABORT. What happens? The instance is crashed and the database will not open normally. The next time you start the database, a process called the system monitor (SMON) automatically examines the redo log files: some commands are complete and some are not, and the data for the incomplete ones may not have reached the data files. Using the checkpoint information in the control file, SMON rolls forward all the changes recorded in the redo log after mounting and before opening the database, and uncommitted work is then rolled back.
This is called automatic instance recovery, and it is done by the system monitor. If by chance the redo log files are deleted, instance recovery is not possible, because instance recovery takes place from the redo log files.
Note:
Every database must have a minimum of two redo log groups.
A group can have a maximum of 5 log members and a minimum of 1 log member.
Maximum 255 redo log files.
At least 2 control files are recommended, and a maximum of 8 mirror copies can be created.
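For reference, a redo log group or an extra member can be added as shown below; this is a sketch only, and the group number and file paths are hypothetical:
SQL>alter database add logfile group 3 ('/orator/oracle/saurabh/log31.ora') size 100m;
SQL>alter database add logfile member '/orator/oracle/saurabh/log32.ora' to group 3;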
CONVERT DATABASE FROM NOARCHIVELOG TO ARCHIVELOG MODE:-
Shutdown immediate
Startup mount
Alter database archivelog;
Alter database open;
Archive log list
CONVERT DATABASE FROM ARCHIVELOG TO NOARCHIVELOG MODE:-
Shutdown immediate
Startup mount
Alter database noarchivelog;
Alter database open;
YOU CAN SEE INFORMATION ABOUT THE ARCHIVE MODE:-
SQL>archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     8
Next log sequence to archive   10
Current log sequence           10
We can extract information about archiving from the following views:
V$DATABASE, V$ARCHIVED_LOG, V$ARCHIVE_DEST, V$LOG_HISTORY
Adjusting the Number of Archive Processes-:
The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the number of
ARCn processes that the database initially invokes. The default is two
processes. The LOG_ARCHIVE_MAX_PROCESSES parameter is dynamic, and can be changed
using the ALTER SYSTEM statement. The database must be mounted but not open. The
following statement increases (or decreases) the number of ARCn processes
currently running:
SQL>ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=3;
SQL>sho parameter db_rec
Archived log files are by default stored in db_recovery_file_dest, i.e. inside the flash_recovery_area, for example /oracle/app/oracle/flash_recovery_area.
When we convert the database to archivelog mode, one additional background process called the archiver runs. The function of the archiver is this: whenever it sees redo log files taking part in synchronization (they are locked by the checkpoint while being synchronized), then after they are released and before they are overwritten, the archiver extracts the information from the redo log file and stores it separately; for this purpose it creates separate archived log files. A database is by default created in noarchivelog mode, and performance is also better in noarchivelog mode, but full database recovery is possible only when the database is in archivelog mode.
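The current log mode and the recovery-area destination can be confirmed as follows; a minimal sketch, assuming a SYSDBA session:
SQL>select log_mode from v$database;
SQL>show parameter db_recovery_file_dest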
WORKING OF THE DATABASE BUFFER CACHE: - The database buffer cache resides in memory (the SGA). Data which has already been modified is known as a dirty buffer. The free list is the buffer area which is not yet used. Pinned means a buffer is currently being processed. The default buffer pool works in a round-robin fashion using a least recently used (LRU) algorithm: when the buffer cache fills up, the least recently used buffers are written to disk first, making enough memory available for new data to be read in.
The server process reads data from the data files, places it in the buffer cache and processes it there. Whatever data is fully modified by your SQL command is stored in the dirty buffers. Data which is currently being modified is held in pinned buffers; a pinned buffer is a buffer which is currently under modification. The end of the pinned buffers touching the dirty buffers is known as the LRU end, and the end touching the free area is known as the MRU (most recently used) end. A fully modified buffer becomes a dirty buffer.
Now suppose some commands are issued. The server process reads the data from the data files, but it has to find out where to keep the data in the free-list area. For this purpose it starts from the LRU end of the free-list area, crosses over the pinned buffers and loads the data wherever space is available, at the MRU end. While searching for free-list space to load the new data, the server process may also find that some pinned (partially modified) buffers have by now become fully modified; it marks those as dirty buffers, re-marks the LRU and MRU ends, and the dirty area gets extended. This process continues for every read as long as free-list space is available.
If no free-list space is available and new data has to be read for processing, the LRU algorithm is applied: the server process stops searching for free-list space, triggers the database writer and asks it to write a sufficient amount of the dirty list to the data files; after writing, that dirty area is invalidated and reused for reading the new data. In short, the database writer writes when no free buffers are available.
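A rough feel for how often requests are satisfied from the buffer cache rather than from disk can be obtained from V$SYSSTAT; a small sketch, assuming a SYSDBA session:
SQL>select name, value from v$sysstat where name in ('db block gets', 'consistent gets', 'physical reads');
A low "physical reads" count relative to the two logical-read statistics indicates the cache is absorbing most of the work.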
THE SHARED POOL:
Size defined by SHARED_POOL_SIZE

Library cache contains statement text, parsed code, and an execution plan

17 | P a g e

Data dictionary cache contains table and column definitions and privileges
DATABASE BUFFER CACHE:-
The DB buffer cache is part of RAM, one of the memory structures of the SGA, which is used to hold recently processed data from read or write operations. That is, when you access data for the first time from a datafile, the server process copies the data blocks into RAM, i.e. the DB buffer cache, so that if the same data is needed by another query it can be fetched from RAM instead of the data file and retrieval will be fast.

Defined by DB_BLOCK_BUFFERS
Based on DB_BLOCK_SIZE
Holds copies of the data blocks read from datafiles

It has two parts:
a) Dirty list: holds dirty buffers waiting to be written to disk.
b) LRU list: holds
   - Dirty buffers: modified buffers, not yet moved to the dirty list
   - Free buffers: buffers available for use
   - Pinned buffers: buffers that are currently being accessed

REDO LOG BUFFER:-
1. LOG_BUFFER determines the size (in bytes)
2. Records changes made through the instance
3. Redo entries are used for database recovery
4. Can reconstruct changes made to the database by DDL, DML etc.
5. Used sequentially
6. Circular buffer
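The redo log buffer size and its basic activity statistics can be inspected as follows; a minimal sketch, assuming a SYSDBA session:
SQL>show parameter log_buffer
SQL>select name, value from v$sysstat where name in ('redo entries', 'redo size');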
DATABASE WRITER WRITES WHEN:-
A checkpoint occurs, the dirty buffers threshold is reached, no free buffers remain, a timeout occurs, a RAC ping request arrives, a tablespace is taken offline, a tablespace is made read only, a table is dropped or truncated, or a tablespace is put in BEGIN BACKUP mode.
LOG WRITER WRITES WHEN:-
LGWR performs sequential writes from the redo log buffer cache to the redo log file in the following situations:
When a transaction commits
When the redo log buffer cache is one-third full
When there is more than a megabyte of change records in the redo log buffer cache
Before DBWn writes modified blocks in the database buffer cache to the data files
Every 3 seconds
Because the redo is needed for recovery, LGWR confirms the commit only after the redo is written to disk. LGWR can also call on DBWn to write to the data files.

SYSTEM MONITOR (SMON) RESPONSIBILITIES:-
Instance recovery:
1) Rolls forward changes in the redo logs
2) Opens the database for user access
Coalesces free space
Deallocates temporary segments
It opens the database so that users can log on; any data that is not locked by unrecovered transactions is immediately available.
It rolls back uncommitted transactions; they are rolled back by SMON or by the individual server processes as they access locked data.
It recovers dead transactions skipped during system failure and combines, or coalesces, adjacent areas of free space in the data files.
It also cleans up temporary segments that are no longer in use.
In Real Application Clusters, the SMON process of one instance can perform recovery for another instance that has failed.
Segments are used to store data during SQL statement processing.
PROCESS MONITOR (PMON) RESPONSIBILITIES:-
The process monitor performs process recovery when a user process fails. It is responsible for cleaning up the cache and freeing resources that the process was using. It also checks on the dispatcher processes and server processes and restarts them if they have failed.
RECO - The recoverer process is used to resolve distributed transactions that are pending due to network or system failure in a distributed database. At timed intervals the local RECO attempts to connect to the remote database and automatically complete the commit or rollback of the local portion of any pending distributed transaction.
SQL>ALTER DATABASE db01 MOUNT;
SQL>ALTER DATABASE db01 OPEN READ ONLY;
DATABASE CREATION STEP BY STEP
We create a database with the help of the following two files.
Parameter file - the parameter file is a text file which is managed by the user. During startup the database instance has to read the parameter file first, because it contains the configuration parameters which are read by the startup process to start the instance. These configuration parameters are also called initialization parameters. There are two types of parameter file:
1) Pfile
2) Spfile
Go to any existing database and run this statement. Suppose we are working in /orator/oracle.
SQL>create pfile='/orator/oracle/init.ora' from spfile;
This statement creates a pfile inside /orator/oracle.
Then switch to the LINUX prompt:
$>mkdir saurabh
$>cp init.ora saurabh
$>cd saurabh
$>vi init.ora (this is the parameter file)
*.audit_file_dest='/orator/oracle/saurabh/adump'
*.audit_trail=db
*.control_files='/orator/oracle/saurabh/control.ctl'
*.db_block_size=8192
*.db_domain=server.com
*.db_name=saurabh
*.db_recovery_file_dest_size=3999268864
*.db_recovery_file_dest='/orator/oracle/saurabh/flash_recovery_area'
*.diagnostic_dest='/orator/oracle/saurabh/diag'
*.event=''
*.open_cursors=300
*.remote_login_passwordfile=EXCLUSIVE
*.sga_target=432013312
*.star_transformation_enabled=true
*.undo_tablespace=undo1
After this step we create the following directories:
$>mkdir adump flash_recovery_area diag
Now we create the crdb.sql file:

$>vi crdb.sql
create database saurabh
datafile '/orator/oracle/saurabh/sys1.ora' size 350m autoextend on
sysaux datafile '/orator/oracle/saurabh/sysaux1.ora' size 100m
default temporary tablespace temp tempfile '/orator/oracle/saurabh/temp1.ora' size 100m
undo tablespace undo1 datafile '/orator/oracle/saurabh/undo1.ora' size 100m
logfile
group 1 ('/orator/oracle/saurabh/log11.ora') size 100m,
group 2 ('/orator/oracle/saurabh/log21.ora') size 100m
maxinstances 1
maxlogfiles 100
maxlogmembers 3
character set AL32UTF8
user system identified by saurabh
user sys identified by saurabh
;
Now we have completed the preparatory steps for database creation. The next steps are:
$>export ORACLE_SID=saurabh
$>orapwd file=$ORACLE_HOME/dbs/orapwsaurabh password=oracle
(We can use any password.)
$>sqlplus /as sysdba
Now we create the spfile. This file is similar to the pfile; the only difference is that the pfile is maintained by the user and the spfile is maintained by the Oracle instance.
SQL>create spfile from pfile='/orator/oracle/saurabh/init.ora';
If we run the following command without creating the spfile it will show an error.
SQL>startup nomount
When you give nomount, at this point the Oracle instance is created.
SQL>@crdb.sql
If everything is right it will show the output "Database created".
SQL>shutdown immediate
SQL>exit
$>cd saurabh (go to the saurabh directory)
$>export ORACLE_SID=saurabh
$>vi postdb.sql
create tablespace data datafile '/orator/oracle/saurabh/data1.ora' size 10m;
create user neelu identified by saurabh default tablespace data quota unlimited on data;
grant connect, resource to neelu;
alter database default tablespace data;
create user saurabh identified by saurabh;
grant connect, resource to saurabh;
@$ORACLE_HOME/rdbms/admin/catalog.sql
@$ORACLE_HOME/rdbms/admin/catproc.sql
conn system/saurabh
@$ORACLE_HOME/sqlplus/admin/pupbld.sql
exit
After creating this file we switch to the SQL prompt:
$>sqlplus /as sysdba
SQL>@postdb.sql
SETTING REMOTE_LOGIN_PASSWORDFILE:
NONE:

Causes Oracle to behave as if the password file does not exist


EXCLUSIVE:

Can be used with only one database

Allows granting SYSDBA and SYSOPER system privileges to other individual users
SHARED:

Password file can be used by multiple databases.

The only user recognized by a SHARED password file is SYS

Viewing Password File Members


Select * from v$pwfile_users;


When we create a database, two users are created by default:
Sys - the default password of sys is change_on_install
System - the default password of system is manager
Sys holds all the data dictionary tables. They are created when we run crdb.sql. These tables are not accessed directly by users or the database administrator.
System contains the data dictionary views/synonyms. They are created when we run catalog.sql and catproc.sql.
There are two types of tables available in the sys schema:
1) Static tables
2) Dynamic tables
Static tables are those tables where the data is preserved even when the database is shut down.
Dynamic tables are those tables where the data is not preserved. They keep their data in the memory area, and memory is volatile: when the Oracle instance is shut down they are cleared. Their structure is loaded in the memory region and they collect information in memory only while the Oracle instance is up; the moment the instance is shut down they are cleared. Dynamic means they collect the data dynamically, i.e. at run time, and after shutting down the database the data in these tables is lost. They are called dynamic performance tables.
All the static and dynamic tables are created when CREATE DATABASE runs, and all of them are stored in the sys schema, but Oracle does not recommend using these tables directly: they are not in a human-readable format, and they are very important in nature.
For this reason we create dictionary views and synonyms, which are created for the system user. The system user holds the data dictionary views and synonyms. A view is a stored query over those tables, and a synonym is an alias for such an object; views and synonyms do not hold any data themselves, they only reference the tables available in the sys schema. They are created when we run catalog.sql and catproc.sql: catalog.sql creates the dictionary views and synonyms required for normal table accessibility, and catproc.sql creates the views and synonyms required for the procedural options.
System: Dynamic performance views - they are created in the form X$..., V$..., GV$...
GV$ views are global: where multiple instances are available they give information from all of them.
V$ views give information only for the current instance you are connected to. They are very user friendly and are mainly used for performance work.
Example: V$INSTANCE, V$SESSION
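For example, a simple query against one of these views looks like the following; a minimal sketch, assuming a privileged session:
SQL>select instance_name, host_name, status from v$instance;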

Static views/synonyms - examples: user_tables, user_indexes
dba_tables, dba_indexes
all_tables, all_indexes
tab, cat, dict, dictionary
dba_tablespaces, user_tablespaces
Dynamic performance tables, views and synonyms give information about the latest database statistics; 99.99% of the time they are the source for any kind of performance tuning.
You can explore these views and synonyms with the help of the following example:
SQL>desc dictionary
SQL>select table_name from dict order by table_name
SQL>spool diksha.txt
SQL>/
SQL>exit
$>VI diksha.txt
$>sqlplus /as sysdba
SQL>select table_name from dict where table_name like '%&t';
SQL>desc V$FIXED_TABLE
THIS IS THE WAY YOU CAN FIND INFORMATION ABOUT THE DYNAMIC PERFORMANCE TABLES
SQL>select name from V$FIXED_TABLE;
TABLESPACE:- A tablespace is a group of data files; in other words, a database is divided into logical units called tablespaces. OR
A tablespace is the logical counterpart of one or more data files.
Every tablespace must have at least one data file.
Database Structure

TYPES OF TABLESPACE BASED ON SIZE:-
BIGFILE TABLESPACE
A bigfile tablespace contains only one datafile or one tempfile, which can grow up to terabytes in size and can contain up to 2^32 (4G) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32 TB for a tablespace with 8K blocks.
Restrictions on Bigfile Tablespaces:- Bigfile tablespaces are subject to the following restrictions:
You can specify only one datafile in the DATAFILE clause or one tempfile in the TEMPFILE clause.
You cannot specify EXTENT MANAGEMENT DICTIONARY.
SQL>create bigfile tablespace tablespace_name datafile '/orator/oracle/bigts1.ora' size 10t autoextend on;
SQL>alter database set default bigfile tablespace;
SMALLFILE TABLESPACE:- A smallfile tablespace is a traditional Oracle tablespace, which can contain up to 1022 datafiles or tempfiles, each of which can contain up to 2^22 (4M) blocks.
SQL>create smallfile tablespace tablespace_name datafile '/orator/oracle/bigts1.ora' size 10m;
SQL>alter database set default smallfile tablespace;
Examples:
In following case data of user diksha will go by default in system table space
SQL>Create tablespace data datafile '/oracle/orator/data1.ora' size 100m;
SQL>Create user diksha identified by saurabh;
SQL>Grant connect, resource to diksha;
Now in following case data of user21 will go to default table space data.
SQL>Create user user21 identified by user21 default tablespace data quota 100m on
data;
SQL>Grant connect, resource to user21;
SQL>Alter database default tablespace data;

SQL>Create user neelu identified by saurabh;
SQL>Grant connect, resource to neelu;
Now the default tablespace is data; from here on, data belonging to any new user will go to the default tablespace data.
When we create any tablespace by default it creates small file tablespace
SQL>Create bigfile tablespace ts3 datafile '/orator/oracle/ts3.ora' size 10m;
Now this will be created as bigfile table space because by default I have setup
now default is big file.
NOTE: You must specify EXTENT MANAGEMENT LOCAL and SEGMENT SPACE MANAGEMENT AUTO for the SYSAUX tablespace. The DATAFILE clause is optional only if you have enabled Oracle-managed files.
Restrictions on the SYSAUX Tablespace:- You cannot specify OFFLINE or TEMPORARY for the SYSAUX tablespace.
TYPES OF TABLESPACE BASED ON OPERATIONS:-
Permanent (by default)
Temporary
Undo
A permanent tablespace contains persistent schema objects; objects in a permanent tablespace are stored in data files.
A temporary tablespace is used:
1) For sort operations
2) It cannot contain any permanent objects.
A temporary tablespace can be either dictionary managed or locally managed. It contains schema objects only for the duration of a session.
Restrictions on Temporary Tablespaces: The data stored in temporary tablespaces persists only for the duration of a session. Therefore, only a subset of the CREATE TABLESPACE clauses is relevant for temporary tablespaces. The only clauses you can specify for a temporary tablespace are the TEMPFILE clause, the tablespace_group_clause, and the extent_management_clause.
Examples:
SQL>Create temporary tablespace temp1 tempfile '/orator/oracle/temp.ora' size 100m;
Note: TEMPORARY and TEMPFILE are keywords.
An undo tablespace is a type of permanent tablespace used by the Oracle database to manage undo data when you are running your database in automatic undo management mode.
When you perform a DML operation (insert, update, delete) on the database, you actually modify data in the database. The undo tablespace is used to store the old data, so that if your DML operation fails or is rolled back, you can recover the data stored in the undo tablespace.
Specify UNDO to create an undo tablespace. When you run the database in automatic undo management mode, Oracle Database manages undo space using the undo tablespace instead of rollback segments. This clause is useful if you are now running in automatic undo management mode but your database was not created in automatic undo management mode.
Oracle provides a fully automated mechanism, referred to as automatic undo management, for managing undo information and space. With automatic undo management, the database manages undo segments in an undo tablespace. Beginning with Release 11g, automatic undo management is the default mode for a newly installed database. An auto-extending undo tablespace named UNDOTBS1 is automatically created when you create the database with Database Configuration Assistant (DBCA).

Oracle Database always assigns an undo tablespace when you start up the database
in automatic undo management mode. If no undo tablespace has been assigned to this
instance, then the database uses the SYSTEM rollback segment. You can avoid this
by creating an undo tablespace, which the database will implicitly assign to the
instance if no other undo tablespace is currently assigned.
tablespace_retention_clause This clause is valid only for undo tablespaces.
RETENTION GUARANTEE- specifies that Oracle Database should preserve unexpired undo
data in all undo segments of tablespace even if doing so forces the failure of
ongoing operations that need undo space in those segments. This setting is useful
if you need to issue an Oracle Flashback Query or an Oracle Flashback Transaction
Query to diagnose and correct a problem with the data.
RETENTION NOGUARANTEE- returns the undo behavior to normal. Space occupied by
unexpired undo data in undo segments can be consumed if necessary by ongoing
transactions. This is the default
SQL>Create undo tablespace undo2 datafile '/orator/oracle/undo2.ora' size 100m retention guarantee;
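With enough retained undo, a Flashback Query can read the old version of data; the following is a small sketch only, and the table name and time window are hypothetical:
SQL>select * from scott.emp as of timestamp (systimestamp - interval '15' minute);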
SQL>Select tablespace_name||' '||file_name||' '||(bytes/1024)/1024 from dba_data_files;
SQL>Select tablespace_name||' '||file_name||' '||(bytes/1024)/1024 from dba_temp_files;
Note: when you start the database, if a "memory target not supported" problem comes up, you can solve it in the following way:
SQL>create pfile='/home/oracle/init1.ora' from spfile;
SQL>exit
$>vi init1.ora
(remove the memory_target parameter)
$>sqlplus /as sysdba
SQL>create spfile from pfile='/home/oracle/init1.ora';
SQL>startup
PLUGGED_IN = NO means the tablespace has not been plugged in (transported) from another database.
The undo tablespace was first introduced in Oracle 9i. It is a type of permanent tablespace used by the Oracle database to manage undo data when you are running your database in automatic undo management mode. It was effectively the replacement for rollback segments, which previously provided transaction handling support: in place of rollback segments, Oracle 9i introduced the undo tablespace to handle transactions. All transactions are recorded there, where commit and rollback take place. According to these notes, the undo tablespace not only handles transactions but is also the place where recycle bin information is kept.
The RETENTION GUARANTEE clause was not available in Oracle 9i; it came in Oracle 10g. Suppose you created an undo tablespace of, say, 100m; some tables have gone to the recycle bin, some transactions are going on, some additional tables have been dropped, some data has been deleted, and now there is no free space available. In Oracle 9i the database simply removed the oldest recycle bin information and put the new information in its place: the least recent entries were removed and the latest ones created. The result was that if you wanted to flash back a table and gave the command, you could find that you cannot flash it back.
RETENTION GUARANTEE means that when the undo tablespace fills up, new entries will not push out the previous ones: with AUTOEXTEND on, the tablespace keeps extending, and with RETENTION GUARANTEE the previous (unexpired) information will not be overwritten. Note that only one undo tablespace is in use at a time. To find information about the undo tablespace:
SQL>sho parameter undo
It will show you which undo tablespace is in use, with additional information.
SQL>alter system set undo_tablespace=undo2 scope=both;
SQL>alter system set undo_retention=100 scope=both;
SCOPE:-
1) Memory
2) Spfile
3) Both
If we use the option MEMORY, the parameter gets activated only for the currently running instance, but the value is not saved in the spfile; as a result, if the instance is brought down and restarted, it resets back to what is in the spfile.
If we use the option SPFILE, the parameter is set in the spfile but is not activated for the current instance; the current instance will still show the old value. Only when you shut down and restart does the parameter become effective.
If we use the option BOTH, it becomes activated for the currently running instance and is also saved in the spfile for future use.
There are two types of parameters:
1) Dynamic
2) Static
Dynamic parameters are parameters which can be changed dynamically; SCOPE=BOTH and SCOPE=MEMORY can only be used for dynamic parameters, because MEMORY means they are activated dynamically in memory. If a parameter is not dynamic, you can only use SCOPE=SPFILE.
SQL>sho parameter processes
SQL>alter system set processes = 500 scope = both;
This will give an error; it will not run because PROCESSES is not a dynamic parameter.
SQL>alter system set processes = 500 scope = spfile;
SQL>startup force
TYPES OF TABLESPACE BASED ON STORAGE MANAGEMENT:-
1) Dictionary managed
2) Locally managed
The SYSTEM tablespace is dictionary managed by default; all the remaining tablespaces are created locally managed by default.
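To check how existing tablespaces are managed, the following query can be used; a minimal sketch, assuming a privileged session:
SQL>select tablespace_name, extent_management, allocation_type, segment_space_management from dba_tablespaces;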
Locally Managed Tablespaces:
Concurrency and speed of space operations is improved, because space allocations and deallocations modify locally managed resources (bitmaps stored in header files) rather than requiring centrally managed resources.
Performance is improved, because recursive operations that are sometimes required during dictionary-managed space allocation are eliminated.
Readable standby databases are allowed, because locally managed temporary tablespaces do not generate any undo or redo.
Space allocation is simplified, because when the AUTOALLOCATE clause is specified, the database automatically selects the appropriate extent size.
User reliance on the data dictionary is reduced, because the necessary information is stored in file headers and bitmap blocks.
Coalescing free extents is unnecessary for locally managed tablespaces.

SQL>Create tablespace ts2 datafile '/orator/oracle/ts2.ora' size 100m extent management dictionary;
The above statement creates a dictionary-managed tablespace.

The DBMS_SPACE_ADMIN package provides maintenance procedures for locally managed tablespaces. It cannot be used for dictionary-managed tablespaces.
SQL>exec dbms_space_admin.tablespace_migrate_to_local('TS2')
This converts a dictionary-managed tablespace to locally managed, but only when the tablespace is empty.
SQL>exec dbms_space_admin.tablespace_migrate_from_local('TS2')
This converts a locally managed tablespace back to dictionary managed, but again the tablespace must be empty; if you have data in it, direct conversion is not possible.
What is the meaning of "managed"? It refers to storage management: how space is allocated and deallocated for your objects in the tablespace. With dictionary management, the status and space information are stored in dictionary tables (like tab and cat); they are stored in the dictionary and then queried back to determine what has to be done. This means any space allocation is transitively dependent upon the dictionary. The dictionary is managed internally by the system; it is not used directly by the DBA or the user.
With local management, the tablespace monitors itself and does not go anywhere else: the status information is stored as a bitmap in the data file itself, and decisions are based on that. "Locally" means everything is available right there; whatever statistics are collected are acted on immediately, without going to the dictionary to check what has to be done.
In dictionary management the status information is stored in the dictionary, so it takes space in the dictionary, and the database fires insert, update and delete operations to maintain it. Whatever commands are fired go to the redo log files, so extra redo information is generated. Everything is managed through the dictionary: the database has to go to the dictionary, look things up and then take a decision.
With local management, nothing is stored in the dictionary, so no dictionary space is used and no extra redo is generated because no such commands are fired; since it is locally managed, everything works smoothly and straightforwardly. That is the reason we should use locally managed tablespaces. Dictionary-managed tablespaces are available only for backward compatibility.
SQL>Create tablespace data datafile '/oracle/orator/data1.ora' size 100m EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
SQL>Create tablespace data datafile '/oracle/orator/data1.ora' size 100m EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
SQL>alter tablespace ts1 offline;
SQL>alter tablespace ts1 offline immediate;
SQL>alter tablespace ts1 online;
SQL>alter tablespace ts1 read only;
ONLINE - allows both read and write operations.
OFFLINE - allows neither read nor write.
READ ONLY - allows only read: queries can be run, but insert, update and delete operations cannot be done.
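The current status of each tablespace can be checked as shown below; a minimal sketch, assuming a privileged session:
SQL>select tablespace_name, status, contents from dba_tablespaces;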
The SYSTEM tablespace and any tablespace with active rollback segments cannot be taken offline; they must remain online.
Suppose you issue OFFLINE and the tablespace has some pending transactions from many users. Until those pending transactions are officially completed by the users (either rolled back or committed), the tablespace behaves as if it were still online for them; however, new incoming data is not allowed. So after issuing OFFLINE, it does not accept new data but waits for the existing transactions to be properly closed from the user side before it goes fully offline.
If instead you issue OFFLINE IMMEDIATE, it terminates all the pending transactions and takes the tablespace offline at once.
Dropping a tablespace:-
SQL>drop tablespace ts1;
This works only when the tablespace is empty.
SQL>drop tablespace ts1 including contents;
SQL>drop tablespace ts1 including contents and datafiles;
Renaming a tablespace:-
SQL>alter tablespace ts2 rename to ts1;
After renaming a tablespace, any backup belonging to that tablespace becomes useless for future recovery.
Monitoring Free Space:-
SQL>SELECT block_id, bytes, blocks FROM dba_free_space;
SQL>SELECT file_name, tablespace_name, bytes FROM dba_data_files;
SQL>SELECT file_name, tablespace_name, bytes FROM dba_temp_files;
Coalescing Free Space:-
SQL>ALTER TABLESPACE data01 COALESCE;
(Figure: adjacent free extents being coalesced into larger extents - not reproduced here.)
Displaying Statistics for Free Space (Extents) of Each Tablespace:-
SQL>SELECT tablespace_name "TABLESPACE", file_id, COUNT(*) "PIECES", MAX(blocks) "MAXIMUM", MIN(blocks) "MINIMUM", AVG(blocks) "AVERAGE", SUM(blocks) "TOTAL" FROM dba_free_space GROUP BY tablespace_name, file_id;
We can take information about table space with the help of following views
V$TABLESPACE- Name and number of all tablespaces from the control file
DBA_TABLESPACES, USER_TABLESPACES- Descriptions of all (or user accessible) tablespaces
DBA_TABLESPACE_GROUPS- Displays the tablespace groups and the tablespaces that belong to them
DBA_SEGMENTS, USER_SEGMENTS- Information about segments within all (or user accessible) tablespaces
DBA_EXTENTS, USER_EXTENTS:- Information about data extents within all (or user
accessible) tablespaces
DBA_FREE_SPACE, USER_FREE_SPACE- Information about free extents within all (or
user accessible) tablespaces
V$DATAFILE: - Information about all datafiles, including tablespace number of
owning tablespace.
V$TEMPFILE- Information about all tempfiles, including tablespace number of owning
tablespace
DBA_DATA_FILES- Shows files (datafiles) belonging to tablespaces
DBA_TEMP_FILES- Shows files (tempfiles) belonging to temporary tablespaces
DBA_USERS- Default and temporary tablespaces for all users
DBA_TS_QUOTAS- Lists tablespace quotas for all users
V$SORT_SEGMENT:- Information about every sort segment in a given instance. The
view is only updated when the tablespace is of the TEMPORARY type
V$TEMPSEG_USAGE:- Describes temporary (sort) segment usage by user for temporary
or permanent tablespaces.
HOW TO RENAME A DATAFILE:- Renaming and relocating a datafile mean the same thing; this can be done in two ways.

WHEN DATABASE IS NOT OPEN:-
1) Shut down the database
2) Use an operating system command to copy or move the datafile to the new name and location
3) Mount the database
4) Issue ALTER DATABASE RENAME FILE
5) Open the database
SQL>shutdown immediate
SQL>exit
$>cd saurabh (suppose this is the directory where the datafiles are located)
$>mv ts2.ora ts22.dbf (source to destination)
$>sqlplus /as sysdba
SQL>startup mount
SQL>alter database rename file '/home/saurabh/ts2.ora' to '/home/saurabh/ts22.dbf';
SQL>alter database open;
(if it asks for media recovery)
SQL>recover datafile 6;
SQL>alter database open;
WHEN DATABASE IS OPEN:-
1) Take the tablespace offline
2) Use an operating system command to copy or move the datafile
3) Issue ALTER TABLESPACE ... RENAME DATAFILE
4) Bring the tablespace online
SQL>shutdown immediate
SQL>startup mount
SQL>alter database archivelog;
SQL>alter database open;
SQL>alter tablespace ts22 offline immediate;
SQL>exit
$>mv ts22.dbf ts222.ora
$>sqlplus /as sysdba
SQL>alter tablespace ts22 rename datafile '/home/oracle/ts22.dbf' to '/home/oracle/ts222.ora';
SQL>recover datafile 5;
SQL>alter tablespace ts22 online;
WE CAN RESIZE THE TABLESPACE IN THE FOLLOWING WAYS:-
SQL>alter database datafile 'oracle/app/oracle/users1.dbf' resize 200m;
SQL>alter database datafile 'oracle/app/oracle/users1.dbf' autoextend on next 100m;
Q>HOW TO CREATE A NON DEFAULT BLOCK SIZE?
SQL>sho parameter db_blo
NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
db_block_buffers                     integer
db_block_checking                    string      FALSE
db_block_checksum                    string      TYPICAL
db_block_size                        integer     8192
SQL>sho parameter db_%cache
NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
db_16k_cache_size                    big integer 0
db_2k_cache_size                     big integer 0
db_32k_cache_size                    big integer 0
db_4k_cache_size                     big integer 0
db_8k_cache_size                     big integer 0
db_cache_advice                      string      ON
db_cache_size                        big integer 0
db_flash_cache_file                  string
db_flash_cache_size                  big integer 0
db_keep_cache_size                   big integer 0
db_recycle_cache_size                big integer 0

First we create db_nk_cache_size
SQL>alter system set db_16k_cache_size=10000000 scope=BOTH;
SQL>sho parameter db_%cache
NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
db_16k_cache_size                    big integer 12M
db_2k_cache_size                     big integer 0
db_32k_cache_size                    big integer 0
db_4k_cache_size                     big integer 0
db_8k_cache_size                     big integer 0
db_cache_advice                      string      ON
db_cache_size                        big integer 0
db_flash_cache_file                  string
db_flash_cache_size                  big integer 0
db_keep_cache_size                   big integer 0
db_recycle_cache_size                big integer 0
SQL>create tablespace ts2 datafile '/oracle/SGA/ts2.ora' size 10m blocksize 16k;
SQL>select TABLESPACE_NAME||'*'||BLOCK_SIZE FROM DBA_TABLESPACES;
TABLESPACE_NAME||'*'||BLOCK_SIZE
--------------------------------
SYSTEM*8192
SYSAUX*8192
UNDOTBS1*8192
TEMP*8192
USERS*8192
EXAMPLE*8192
DIKSHA*8192
TS2*16384
HOW TO CREATE OMF (ORACLE MANAGED FILE) BASED TABLE SPACE?
SQL>sho parameter db_cr
NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
db_create_file_dest                  string
db_create_online_log_dest_1          string
db_create_online_log_dest_2          string
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string

SQL>!mkdir /oracle/app/oracle/oradata/orcl/omf


SQL>alter system set db_create_file_dest='/oracle/app/oracle/oradata/orcl/omf' scope=both;
SQL>sho parameter db_cr
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------------
db_create_file_dest                  string      /oracle/app/oracle/oradata/orcl/omf
db_create_online_log_dest_1          string
db_create_online_log_dest_2          string
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string
SQL> create tablespace ts3;
SQL>select TABLESPACE_NAME||' '||FILE_NAME||' '||BLOCKS||' '||(BYTES/1024)/1024 FROM DBA_DATA_FILES;
TABLESPACE_NAME||''||FILE_NAME||''||BLOCKS||''||(BYTES/1024)/1024
--------------------------------------------------------------------------------
USERS /oracle/app/oracle/oradata/orcl/users01.dbf 2560 20
UNDOTBS1 /oracle/app/oracle/oradata/orcl/undotbs01.dbf 13440 105
SYSAUX /oracle/app/oracle/oradata/orcl/sysaux01.dbf 65280 510
SYSTEM /oracle/app/oracle/oradata/orcl/system01.dbf 87040 680
EXAMPLE /oracle/app/oracle/oradata/orcl/example01.dbf 12800 100
DIKSHA /oracle/app/oracle/oradata/orcl/saurabh.ora 2560 20
TS2 /oracle/app/oracle/oradata/orcl/ts2.ora 320 5
TS3 /oracle/app/oracle/oradata/orcl/omf/ORCL/datafile/o1_mf_ts3_90spvf92_.dbf 12800 100
HOW TO CREATE AN OMF BASED TABLESPACE WITH A NON-DEFAULT BLOCK SIZE?
SQL>Alter system set db_16k_cache_size=10000000 scope=BOTH;
SQL>create tablespace ts3 blocksize 16k;
REDO LOG FILES:-
Redo log files are filled with redo records. A redo record, also called a redo
entry, is made up of a group of change vectors, each of which is a description of
a change made to a single block in the database. For example, if you change a
salary value in an Employee table, you generate a redo record containing change
vectors that describe changes to the data segment block for the table, the undo
segment data block, and the transaction table of the undo segments. Redo entries
record data that you can use to reconstruct all changes made to the database,
including the undo segments. Therefore, the redo log also protects rollback data.
When you recover the database using redo data, the database reads the change
vectors in the redo records and applies the changes to the relevant blocks. Redo
records are buffered in a circular fashion in the redo log buffer of the SGA and
are written to one of the redo log files by the Log Writer (LGWR) database
background process. Whenever a transaction is committed, LGWR writes the
transaction redo records from the redo log buffer of the SGA to a redo log file,
and assigns a system change number (SCN) to identify the redo records for each
committed transaction. Only when all redo records associated with a given

transaction are safely on disk in the online logs is the user process notified
that the transaction has been committed
Redo records can also be written to a redo log file before the corresponding
transaction is committed. If the redo log buffer fills, or another transaction
commits, LGWR flushes the entire redo log entries in the redo log buffer to a redo
log file, even though some redo records may not be committed. If necessary, the
database can roll back these changes
ACTIVE, CURRENT AND INACTIVE REDO LOG FILES:-
Oracle Database uses only one redo log file at a time to store redo records
written from the redo log buffer. The redo log file that LGWR is actively writing
to is called the current redo log file. Redo log files that are required for
instance recovery are called active redo log files. Redo log files that are no
longer required for instance recovery are called inactive redo log files.
LOG SWITCHES AND LOG SEQUENCE NUMBERS:-
A log switch is the point at which the database stops writing to one redo log file
and begins writing to another. Normally, a log switch occurs when the current redo
log file is completely filled and writing must continue to the next redo log file.
However, you can configure log switches to occur at regular intervals, regardless
of whether the current redo log file is completely filled. You can also force log
switches manually
Oracle Database assigns each redo log file a new log sequence number every time a
log switch occurs and LGWR begins writing to it. When the database archives redo
log files, the archived log retains its log sequence number. A redo log file that
is cycled back for use is given the next available log sequence number.
Each online or archived redo log file is uniquely identified by its log sequence
number. During crash, instance, or media recovery, the database properly applies
redo log files in ascending order by using the log sequence number of the
necessary archived and redo log files.
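For illustration, a minimal sequence to force a log switch and watch the sequence number advance might look like this (group numbers and sequence values will differ on your system):
SQL>select group#, sequence#, status from v$log;
SQL>alter system switch logfile;
SQL>select group#, sequence#, status from v$log;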
LGWR never writes concurrently to members of different groups (for example,
to A_LOG1 and B_LOG2).Whenever LGWR cannot write to a member of a group, the
database marks that member as INVALID and writes an error message to the LGWR
trace file and to the database alert log to indicate the problem with the
inaccessible files. The minimum size permitted for a redo log file is 4 MB. The
default size of redo log files is operating system dependent.
HOW TO ADD A GROUP
We can see the groups with the help of the following commands
SQL>select group#||' '||members from V$LOG;
SQL>select member from V$LOGFILE;
We can add a group with the help of the following command
SQL>alter database add logfile ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') size 100m;
OR
SQL>alter database add logfile group 3 ('/oracle/orator/log31.ora') size 100m;
We can create a mirror copy
SQL>alter database add logfile member '/oracle/orator/log31.ora' to group 3;
Then run the following command
SQL>alter system switch logfile;
NOTE: A log switch occurs when LGWR stops writing to one redo log group and starts
writing to another. By default, a log switch occurs automatically when the current
redo log file group fills.
You can force a log switch to make the currently active group inactive and
available for redo log maintenance operations. For example, you want to drop the
currently active group, but are not able to do so until the group is inactive. You
may also wish to force a log switch if the currently active group needs to be
archived at a specific time before the members of the group are completely filled.
This option is useful in configurations with large redo log files that take a long
time to fill.
To force a log switch, you must have the ALTER SYSTEM privilege. Use the ALTER
SYSTEM statement with the SWITCH LOGFILE clause.
The following statement forces a log switch
SQL>alter system switch logfile;
Then run this command
SQL>alter database drop logfile member '/oracle/orator/log31.ora';
You can drop group also
SQL>alter database drop logfile group 3;
VERIFYING BLOCKS OF REDO LOG FILES:-
You can configure the database to use checksums to verify blocks in the redo log
files. If you set the initialization parameter DB_BLOCK_CHECKSUM to TRUE, the
database computes a checksum for each database block when it is written to disk,
including each redo log block as it is being written to the current log. The
checksum is stored in the header of the block.
Oracle Database uses the checksum to detect corruption in a redo log block. The
database verifies the redo log block when the block is read from an archived log
during recovery and when it writes the block to an archive log file. An error is
raised and written to the alert log if corruption is detected.
If corruption is detected in a redo log block while trying to archive it, the
system attempts to read the block from another member in the group. If the block
is corrupted in all members of the redo log group, then archiving cannot proceed.
The default value of DB_BLOCK_CHECKSUM is TRUE. The value of this parameter can be
changed dynamically using the ALTER SYSTEM statement.
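As a sketch (note that from 10g/11g onward the documented values for this parameter are OFF, TYPICAL and FULL, with TRUE accepted for backward compatibility), you could check and change it like this:
SQL>sho parameter db_block_checksum
SQL>alter system set db_block_checksum=TYPICAL scope=both;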
CLEARING A REDO LOG FILE:-
A redo log file might become corrupted while the database is open, and ultimately
stop database activity because archiving cannot continue. In this situation
the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file
without shutting down the database.
The following statement clears the log files in redo log group number 3:
SQL>ALTER DATABASE CLEAR LOGFILE GROUP 3;
This statement overcomes two situations where dropping of redo logs is not possible:
1) If there are only two log groups
2) The corrupt redo log file belongs to the current group
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in
the statement
SQL>ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
This statement clears the corrupted redo logs and avoids archiving them. The
cleared redo logs are available for use even though they were not archived.

If you clear a log file that is needed for recovery of a backup, then you can no
longer recover from that backup. The database writes a message in the alert log
describing the backups from which you cannot recover
If you want to clear an unarchived redo log that is needed to bring an offline
tablespace online, use the UNRECOVERABLE DATAFILE clause in the ALTER DATABASE
CLEAR LOGFILE statement.
If you clear a redo log needed to bring an offline tablespace online, you will not
be able to bring the tablespace online again. You will have to drop the tablespace
or perform an incomplete recovery. Note that tablespaces taken offline normal do
not require recovery.
You can find the information about log file with the help of following views
V$LOG- Displays the redo log file information from the control file
V$LOGFILE- Identifies redo log groups and members and member status
V$LOG_HISTORY- Contains log history information
SQL>SELECT * FROM V$LOG;
GROUP# THREAD#   SEQ   BYTES MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- --------- ------------- ---------
     1       1 10605 1048576       1 YES ACTIVE         11515628 16-APR-00
     2       1 10606 1048576       1 NO  CURRENT        11517595 16-APR-00
     3       1 10603 1048576       1 YES INACTIVE       11511666 16-APR-00
     4       1 10604 1048576       1 YES INACTIVE       11513647 16-APR-00
SQL>SELECT * FROM V$LOGFILE;
GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         D:\ORANT\ORADATA\IDDB2\REDO04.LOG
     2         D:\ORANT\ORADATA\IDDB2\REDO03.LOG
     3         D:\ORANT\ORADATA\IDDB2\REDO02.LOG
     4         D:\ORANT\ORADATA\IDDB2\REDO01.LOG
SQL>SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
GROUP# ARC STATUS
------ --- ----------------
     1 YES ACTIVE
     2 NO  CURRENT
     3 YES INACTIVE
     4 YES INACTIVE
STEPS FOR RENAMING A REDO LOG MEMBER:-
1) Shut down the database
SQL>shutdown immediate
2) Copy the redo log files to the new location.
$>mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo
$>mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo
3) Startup the database, mount, but do not open it
SQL>CONNECT / as SYSDBA
SQL>STARTUP MOUNT
4) Rename the redo log members.
SQL>ALTER DATABASE RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo' TO
'/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';
5) Open the database for normal operation
SQL>ALTER DATABASE OPEN;
Q>HOW TO CREATE A MIRROR OF CONTROL FILE?
First we create a pfile
SQL>create pfile='/oracle/init1.ora' from spfile;
SQL>sho parameter control
SQL>shutdown abort
SQL>exit
Go to the directory where the control file is located
$>cp control.ctl control2.ctl
$>vi init1.ora
and add the new copy to the control_files parameter
$>sqlplus /as sysdba
SQL>create spfile from pfile='/oracle/init1.ora';
SQL>startup
SQL>sho parameter control
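For illustration only, the line added to init1.ora above might look like the following (the paths are hypothetical; use the actual locations shown by sho parameter control):
control_files=('/oracle/oradata/orcl/control.ctl','/oracle/oradata/orcl/control2.ctl')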
Before 11g, in place of diagnostic_dest we used three parameters separately
1) Background_dump_dest
2) Core_dump_dest
3) User_dump_dest
You can find information about these parameters in the following way
SQL>sho parameter diag
SQL>sho parameter back
SQL>sho parameter user
SQL>sho parameter core
The above three parameters are still significant in 11g; the only difference is that we do not need to define them separately. We define only diagnostic_dest and Oracle generates the individual locations automatically.
In the Background_dump_dest path Oracle creates two kinds of files
1) Alert log file
2) Trace files
The alert log file keeps a chronological log of messages such as instance startup, shutdown, database creation or any structural changes to the database; initialization parameter values are also recorded here. In the background, trace files are also generated: for every background process such as log writer, database writer, SMON and PMON a trace file is created. These files are very important for diagnosing a problem at its initial stage. We can understand this with the help of the following example.
Suppose we are creating a database and the database creation fails with the error "disconnection forced". The meaning of this error is that the parameters in the spfile and in crdb.sql are not matching. You can find information about such errors in these files. We should check the alert log file regularly. These files also record internal errors and block corruption errors, and they help monitor database operations.
User_dump_dest is the path where trace files related to user statements are generated.
Example: if you are running select * from EMP, a trace file related to that statement can be generated, and this information will go inside user_dump_dest.
Core_dump_dest keeps information about any kind of operating-system-level fatal error (core dumps).
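In 11g you can also locate these directories under DIAGNOSTIC_DEST with a query like the following (a convenience check, assuming the 11g V$DIAG_INFO view):
SQL>select name, value from v$diag_info where name in ('ADR Home','Diag Trace','Diag Alert');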
BACKUP AND RECOVERY OF A DATABASE:-
A backup is a copy of data of a database that you can use to reconstruct data.
Basic recovery involves two parts: restoring a physical backup and then updating
it with the changes made to the database since the last backup. The most important
aspect of recovery is making sure all data files are consistent with respect to
the same point in time.

There are three basic types of recovery: instance recovery, crash recovery, and
media recovery. Oracle performs the first two types of recovery automatically at
instance start up. Only media recovery requires the user to issue commands. An
instance recovery, which is only possible in an Oracle Real Applications Cluster
configuration, occurs in an open database when one instance discovers that another
instance has crashed. A surviving instance automatically uses the redo log to
recover the committed data in the database buffers that was lost when the instance
failed. Oracle also undoes any transactions that were in progress on the failed
instance when it crashed, then clears any locks held by the crashed instance after
recovery is complete.
A crash recovery occurs when either a single instance database crashes or all
instances of a multi-instance database crash. In crash recovery, an instance must
first open the database and then execute recovery operations. In general, the
first instance to open the database after a crash or SHUTDOWN ABORT automatically
performs crash recovery.
Unlike crash and instance recovery, a media recovery is executed on the user's
command, usually in response to media failure. In media recovery, online or
archived redo logs can be used to make a restored backup current or to update it
to a specific point in time. Media recovery can restore the whole database, a
tablespace or a data file and recover them to a specified time. Whenever redo logs
are used or a database is recovered to some non-current time, media recovery is
being performed.
A restored backup can always be used to perform the recovery. The principal
division in media recovery is between complete and incomplete recovery. Complete
recovery involves using redo data combined with a backup of a database,
tablespace, or datafile to update it to the most current point in time. It is
called complete because Oracle applies all of the redo changes to the backup.
Typically, media recovery is performed after a media failure damages datafiles or
the control file.
Scenario:- Take a full backup on 15-March-13 at 6 pm using any one of the following
1) Full cold backup, and after the backup the database is restarted in noarchivelog mode.
2) Full cold backup, and after the backup the database is restarted in archivelog mode.
3) Full hot backup, and after the backup the database is restarted in noarchivelog mode.
4) Full hot backup, and after the backup the database is restarted in archivelog mode.
What happens if on 23-March-13 at 10 am one of the datafiles (say data1.ora) is deleted? There are two options for you:
1) Only restoring it physically will not work out; the database says the datafile is an old file.
2) Shut down the database, remove all files and restore all of them from the 15-March-13 backup. Now the database will work, but only with the data as of 15-March-13, and we are missing the data of the last one week.
Solution:- data1.ora was deleted by an operating system command, but the database still believes it is available. Since it is already physically removed, we can run the following command
SQL>alter database datafile '/oracle/xyz/data1.ora' offline drop;
But then that file is lost forever.
The second solution is restoring data1.ora from the backup. This restores the datafile as of 15-March-13. The database will ask for media recovery because the file is older, so we recover the datafile; recovery will apply all the changes from the archives.
Restore means copying the file from a previous backup. Media recovery means changes are applied from the archives. Recovery means applying the archives generated after the last backup.
BACKUP AND RECOVERY GOALS:-
1) Protect the database from failures
2) Increase Mean-Time-Between-Failures (MTBF)
3) Decrease Mean-Time-To-Recover (MTTR)
4) Minimize data loss
There are two types of backup:-
1) Logical backup and recovery
2) Physical backup and recovery
Logical means the data is exported and imported as statements; physical means the files themselves are copied.
2.1) User managed
a) Cold backup or closed database backup. Cold backup means you shut down the database and then copy the files using operating system commands to some location. Cold means the database is not open.
b) Hot backup or open database backup. For a hot backup the database must be in archivelog mode, and people are accessing the database while you are copying the files. A hot backup is also known as a midnight backup.
2.2) RMAN (Recovery Manager) backup
COLD BACKUP AND HOT BACKUP:-
Suppose that we are taking the backup of a database whose name is nbss1.
First we find out where the datafiles are. For this purpose we run the following commands
$>export ORACLE_SID=nbss1
$>sqlplus /as sysdba
SQL>select tablespace_name||' '||file_name||' '||(bytes/1024)/1024 from dba_data_files;
It will show where the files are located.
Suppose the files are located in the following path
/home/oracle/app/oracle/oradata/nbss1
Next step is
$>cd /home/oracle/app/oracle/oradata/nbss1
nbss1$>sqlplus /as sysdba
SQL>shutdown abort
SQL>exit
nbss1$>cp users01.dbf users01.dbf.old
nbss1$>sqlplus /as sysdba
SQL>startup
Here we have taken the backup of users01.dbf with the name users01.dbf.old. This is called a cold backup because the database was shut down.
Suppose that somebody now deletes the file users01.dbf; next time the database will not open. With the help of the following process we can recover it
nbss1$>rm users01.dbf
nbss1$>sqlplus /as sysdba
SQL>startup
(It will mount but not open, and will show an error with the missing file name)
If you run the following command
SQL>alter database open; (again it will not open)
SQL>exit
nbss1$>cp users01.dbf.old users01.dbf
When we copy the file back from the backup, this process is called restoring.
nbss1$>sqlplus /as sysdba
SQL>alter database open;
It will not open; it will ask for media recovery.
SQL>recover datafile 4; (media recovery is completed)
SQL>alter database open;
Now it will open. This process is called cold backup and recovery.
HOT BACKUP AND RECOVERY:-
For a hot backup, first we run the following command while the database is up. Currently I am inside the nbss1 database.

SQL>alter tablespace users begin backup;
Note: here we are taking a hot backup of the USERS tablespace. We can put the full database in backup mode by using the following command
SQL>Alter database begin backup;
When we issue the command ALTER TABLESPACE USERS BEGIN BACKUP, from that moment additional redo information is generated for the users tablespace, which is needed for recovery after a restore. Archives are always being created because the database is running in archivelog mode. BEGIN BACKUP sets a marker, recorded in the control file, and from that point onward the additional redo entries are generated, so that if any failure occurs in future and the file is restored, the restored file can be recovered by applying the archives generated after that point.
SQL>host
nbss1>cp users01.dbf users01.dbf.hot
nbss1>sqlplus / as sysdba
SQL>alter tablespace users end backup;
This is called a hot backup.
Suppose that somebody deletes the file users01.dbf; next time the database will not open. With the help of the following process we can recover it
SQL>shutdown abort
SQL>exit
nbss1>rm users01.dbf
nbss1>sqlplus /as sysdba
SQL>startup mount
(it will mount but not open)
SQL>alter database open;
(it will not open)
SQL>exit
nbss1>cp users01.dbf.hot users01.dbf
nbss1>sqlplus /as sysdba
SQL>alter database open;
(ask for media recovery)
SQL>recover datafile 4;
SQL>alter database open;
Now it will open
Control file backup:-
You can take a control file backup manually with the help of the following command
SQL>ALTER DATABASE BACKUP CONTROLFILE to 'location';
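A commonly used variant (also standard syntax) writes a script that can recreate the control file into a trace file:
SQL>alter database backup controlfile to trace;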
When a physical disk fails or a physical database file gets corrupted, media recovery is required.
Scenario:- Take a full database backup on 20-March-13; after taking the backup the database is running in archivelog mode. On 23-March-13 at 9 am we run the following commands
SQL>drop table EMP;
SQL>purge recyclebin;
On 24-March-13 at 10 am you realize that dropping the EMP table was a blunder.
How do we get back the EMP table without any kind of data loss?
Recovery steps (a command sketch follows this list):
1) Take a full backup at 10 am on 24-March-13.
2) Shut down the database and delete all datafiles and redo log files; do not delete the control file.
3) Restore all datafiles and redo log files from the 20-March-13 backup at their original locations. Note that the datafiles are as of 20-March-13 while the control file is as of 24-March-13.
4) Start the database in mount mode. Note that the database cannot open because the datafiles are older than the control file, and while opening it will ask for media recovery.
5) Recover the database until a time just before the drop (08:55 on 23-March-13).
6) Alter database open resetlogs.
7) Use exp to create an export dump file for the EMP table (emp1.dmp).
8) Shut down the database and remove all files.
9) Restore all files from the backup taken in step 1 and start the database.
10) Use imp to import the EMP table using emp1.dmp.
11) Take a full database backup and start the database in archivelog mode.
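A minimal command sketch for steps 4 to 7, assuming the drop happened at 09:00 on 23-March-13 and that EMP is owned by scott/tiger (adjust the timestamp and account for your case):
SQL>startup mount
SQL>recover database until time '2013-03-23:08:55:00';
SQL>alter database open resetlogs;
$>exp scott/tiger TABLES=EMP FILE=emp1.dmp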
RMAN BASED BACKUP:-
An RMAN backup is by default stored in the flash_recovery_area. This area also keeps the archived logs and the database-level flashback information.
When we take a backup with the help of RMAN we need the sys password and the database must be in archivelog mode.
Suppose we are taking a backup of a database; follow these steps
$>export ORACLE_SID=nbss1
$>sqlplus /as sysdba
SQL>alter user sys identified by nbsingh1;
SQL>host
$>export ORACLE_SID=nbss1
$>rman target sys/nbsingh1 nocatalog
RMAN>backup database;
RMAN>quit
By default this backup will go into the flash_recovery_area; you can also specify your own location. Suppose that somebody deleted one of your datafiles; you can recover it in the following way
$> export ORACLE_SID=nbss1
$>rman target sys/nbsingh1 nocatalog
RMAN>run {
Allocate channel c1 type disk;
Restore datafile 4;
Recover datafile 4;
SQL 'alter database open';
}
You can specify your own location in the following way
RMAN>run {
Allocate channel c1 type disk;
Backup database filesperset 2 format '/home/oracle/bck/dec15_%S.%t';
}
You can also see the backup list
RMAN>list backup;
If you want to remove a backup
RMAN>change backupset 989 delete;
Or
RMAN>delete backupset 203;
When we take a backup nocatalog based, this information is saved as history in the control file of the same database. Note that the control file keeps only information about the backup, not the backup itself.
But the problem with a nocatalog based backup is that it keeps the information about the backup in the control file of the same database. We know that the control file is a very small file. If that control file is lost, then in future we cannot recover: even though everything else is available, we cannot do anything. If we keep this history in another database, it does not matter what kind of failure occurs; the second database is always available for recovery purposes. Note that both the recovery catalog and the control file keep information about the backup, not the backup itself. A recovery catalog can store metadata history much longer than the control file. A recovery catalog centralizes metadata for all your target databases; storing the metadata in a single place makes reporting and administration tasks easier to perform.
So we should keep the backup history in a recovery catalog, which is practically kept in another database. Then there is no danger of losing the backup history even if the control file of the target database is lost.

The second problem is that if you are using the control file for keeping the backup history, the control file grows too much, so it is better to manage this in another database. There is a control file parameter known as CONTROL_FILE_RECORD_KEEP_TIME. This parameter gives a number of days. Suppose I set this parameter to 7: only the past 7 days of RMAN information will be stored in the control file, and any history beyond 7 days is lost. So we cannot keep unlimited history in the control file.
You can check the record keep time parameter
SQL>sho parameter keep_tim
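To change it, for example to keep 30 days of history (the parameter is dynamic, so no restart is needed; 30 is just an example value):
SQL>alter system set control_file_record_keep_time=30 scope=both;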
The third problem is that we cannot store backup scripts in the control file, but in a recovery catalog we can create backup scripts and store them there.
Net Manager is software in Oracle which allows doing the client-server connectivity to the Oracle database.
For catalog based backup we will use two databases
1. Target database
2. Catalog database
The catalog is nothing but a series of tables created in another database, where RMAN based backup and recovery information is kept.
The first step is to start the listener in the following way
$>lsnrctl start
We can check the status in the following way
$>lsnrctl
LSNRCTL>status
Sometimes the listener does not start; then follow this step
$>cd $ORACLE_HOME/bin
There is a file inside bin with the name netca.
Run this file
$>./netca (works in graphical mode)
The second step is
Go to graphical mode, type netmgr and create connection strings with the names cat (for the catalog) and tgt (for the target); you can give any name. Suppose that
1) Target database=db14 (tgt is the connection string name)
2) Catalog database=orcl (cat is the name of the connection string)
The next step is to create a tablespace and a user for the backup and recovery information
$>export ORACLE_SID=orcl
$>sqlplus /as sysdba
SQL>select tablespace_name||' '||file_name||' '||(bytes/1024)/1024 from dba_data_files;
SQL>create tablespace rmants datafile '/oracle/oracle/orcl/RMANts1.ora' size 200m autoextend on next 100m;
SQL>create user RMAN16 identified by saurabh default tablespace rmants quota unlimited on rmants;
SQL>grant connect, resource, dba, recovery_catalog_owner to RMAN16;
SQL>exit
You can connect either this way
$>rman target sys/nbsingh1@tgt catalog rman16/saurabh@cat
Or this way
$>export ORACLE_SID=db14
$>rman target sys/nbsingh1 catalog rman16/saurabh@cat
RMAN>create catalog;
RMAN>register database;
At this point the target database is registered in the DB table; you can check this in the following way.
Open a second window, export the catalog database SID (orcl), connect as the user RMAN16 and run the following command
SQL>select * from db;
It will show that our target database has been registered.
RMAN>run {
Allocate channel c1 type disk;
Backup database filesperset 2 format '/home/oracle/db14/dec15_%S.%t.%p';
}
RMAN>quit
Backup is completed.
We can recover datafiles through RMAN:-
RMAN>run {
Allocate channel c1 type disk;
Restore datafile 4, 10, 15;
Recover datafile 4, 10, 15;
SQL 'alter database open';
}
We can recover the whole database through RMAN:-
RMAN>run {
Allocate channel c1 type disk;
Restore database;
Recover database;
SQL 'alter database open';
}
We can take a backup of tablespaces through RMAN:-
RMAN>run {
Allocate channel c1 type disk;
Allocate channel c2 type disk;
Backup filesperset 3
Tablespace inventory, sales include current controlfile;
}
We can recover tablespaces through RMAN:-
RMAN>run {
Allocate channel c1 type disk;
Restore tablespace ts1, ts2, ts3;
Recover tablespace ts1, ts2, ts3;
}

We can also take a backup of the archive logs through RMAN:-
RMAN>run {
Allocate channel c1 type disk;
Backup database plus archivelog;
}
The following command takes a backup of the archives and then removes them:-
RMAN>run {
Allocate channel c1 type disk;
Backup archivelog all delete input;
}
We can take a backup of archive logs between dates:-
RMAN>run {
Allocate channel c1 type disk;
Backup archivelog from time 'sysdate-30' until time 'sysdate-7';
}
RMAN uses a command language interpreter that can execute commands in interactive or batch mode. When you do not use a recovery catalog the control file contains two types of records:
1) Circular reuse records
2) Noncircular reuse records
Circular reuse records contain non-critical information that is eligible to be overwritten if the need arises. These records contain information that is continuously generated by the database, and they are arranged in a logical ring.
Noncircular reuse records contain critical information that does not change often and cannot be overwritten. Some examples of noncircular reuse records include datafiles, online redo logs and redo threads.
ALLOCATION OF CHANNELS:-
1) A channel starts a server process in the target database.
2) Channels affect the degree of parallelism.
3) Every restore, recover and backup needs at least one channel.
4) Channels can write to different media types.
5) Channels can impose limits (for example, on backup piece size or read rate). A configuration sketch follows.
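A small configuration sketch using standard CONFIGURE syntax (the path and the sizes below are only examples):
RMAN>configure device type disk parallelism 2;
RMAN>configure channel device type disk format '/backups/%U' maxpiecesize 2g;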
IMAGE COPY:-
RMAN>run {
Allocate channel c1 type disk;
Copy level 0
Datafile 'data/df3.dbf' to 'backup/df3.dbf' tag=df1,
Archivelog 'arch_1060.rdo' to 'arch_1060.bak';
}
INCREMENTAL BACKUP:-
RMAN>run {
Allocate channel c1 type disk;
Backup incremental level 0 format 'df_%d_%S_%p.bus'
Database filesperset=2 include current controlfile;
}
RMAN>run {
Allocate channel t1 type disk;
Backup filesperset 10 format '/disk1/backup/ar_%t_%s_%p'
(Archivelog from logseq=1056 until logseq=1058 thread=1 delete input);
}
RMAN>report schema; (to verify whether the registration was successful or not)
RMAN>backup tablespace SYSTEM, UNDOTBS, USERS;
RMAN>backup datafile 2 format '/backups/PROD/df_t%t_s%s_p%p';
RMAN>backup datafile 1, 2, 3, 6, 7, 8;
RMAN>backup current controlfile;
RMAN>backup current controlfile format '/backups/PROD/df_t%t_s%s_p%p';
RMAN>backup spfile;
RMAN>backup archivelog all;
RMAN>backup archivelog from time 'sysdate-30' until time 'sysdate-7';
RMAN>backup archivelog from logseq=XXX until logseq=YYY delete input format '/backups/PROD/%d_archive_%T_%u_s%s_p%p';
RMAN>backup database plus archivelog delete input format '/backups/PROD/df_t%t_s%s_p%p';
FULL BACKUP- A full backup contains all datafile blocks; it means a backup of the datafiles.
INCREMENTAL BACKUP- Stores only blocks changed since a previous backup. Thus, incremental backups provide more compact backups and faster recovery, thereby reducing the need to apply redo during datafile media recovery. An incremental backup is either a level 0 backup, which includes every block in the file except blocks that have never been used (these are compressed out), or a level 1 backup, which includes only those blocks that have been changed since the parent backup was taken.
A level 0 incremental backup is physically identical to a full backup. The only difference is that the level 0 backup is recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup. A command sketch follows.
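A minimal sketch of the corresponding commands (differential level 1 is the default; CUMULATIVE is shown for contrast with the next paragraph):
RMAN>backup incremental level 0 database;
RMAN>backup incremental level 1 database;
RMAN>backup incremental level 1 cumulative database;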

CUMULATIVE BACKUP- Contains all blocks modified since the most recent backup at the same or a lower level.
A whole database backup backs up all datafiles; the control file may or may not be included. (DATAFILES + ARCHIVES + PASSWORD FILE)
A full backup taken while the database is closed is a consistent backup because the SCN (system change number) in the datafile headers matches the SCN in the control file.
A cold or offline backup backs up each block that contains data and that is within the files being backed up.
An online backup includes all datafiles and at least one control file.
A partial database backup is an inconsistent backup because there is no guarantee that the datafiles are synchronized with the control file.
CONSISTENT BACKUP:-
A backup taken when the database is mounted but not open, after a normal shutdown. After the checkpoint, the SCN of the datafiles matches the header information in the control file. A consistent backup can be restored without recovery. If you restore a consistent backup and open the database in read/write mode without recovery, transactions after the backup are lost; you still need to perform an OPEN RESETLOGS.
INCONSISTENT BACKUP:-
A backup of any portion of the database when it is open, or when a crash occurred or SHUTDOWN ABORT was run prior to mounting. An inconsistent backup requires recovery to become consistent.
RMAN>list backup;
RMAN>delete obsolete;
Note:- When we run the CREATE CATALOG command it creates
1) DBMS_RCVCAT to maintain information in the recovery catalog
2) DBMS_RCVMAN to query the recovery catalog or control file.
The DBMS_BACKUP_RESTORE package is created by the dbmsbkrs.sql and prvtbkrs.plb scripts; the catproc.sql script automatically runs them.
Every database has the DBMS_BACKUP_RESTORE package by default when it is created. If this package is not available, it means RMAN backup will not be supported for this database; then we should run the following two scripts:
dbmsbkrs.sql and prvtbkrs.plb
RECOVERY MANAGER FEATURES:-
o Back up the database, tablespaces, datafiles, control files, and archive logs
o Store backup and recovery scripts in a database
o Incremental block level backups
o Compress unused blocks
o Specify limits for backups
o Detect corrupted blocks during backup
o Support Oracle Parallel Server
o Test whether specified backups can be restored
o Increase performance through:
  o Automatic parallelization
  o Less redo generated
  o Restricted I/O for backups
  o Tape streaming
RESETTING THE RECOVERY CATALOG:-
RMAN>list incarnation;
RMAN>reset database to incarnation 2;
After resetting the database, issue restore and recover commands
RMAN>run {
Allocate channel ch1 type disk;
Restore database;
Recover database;
Alter database open resetlogs;
}
RECOVERY CATALOG MAINTENANCE:-
Register, resynch, and reset a database
Change, delete, and catalog commands
Generate reports and lists
Create, store, and run scripts
Commands Requiring Resynch
Resynch the target database when:
1) Adding or dropping a tablespace
2) Adding a new datafile to an existing tablespace
3) Adding or dropping a rollback segment
RMAN>resync catalog;
The Catalog command stores:
Archived logs, datafile copies, and control file copies
File copies not created by RMAN
Only files with same database incarnation number
Only Oracle Version 8 files and above
Only files that belong to the database

DELETE BACKUP:-
RMAN>allocate channel for delete type = disk;
RMAN>change datafilecopy 'system01.bak' delete;
RMAN>change backuppiece 101 delete;
RMAN>change controlfilecopy 63 delete;
RMAN>change archivelog until logseq = 300 delete;
REPORT COMMAND:-
RMAN>report need backup days 3 database;
LIST COMMAND:-
RMAN>list copy of tablespace system;
CREATE SCRIPT:-
RMAN>create script NightlyBackup {
allocate channel c1 type disk;
backup incremental level 0
format 'df_%d_%s_%p'
filesperset 5
(database include current controlfile);
sql 'alter database archive log current';
}
RMAN>run {
host 'ls -l';
}
RMAN>run {
sql 'alter system switch logfile';
}
RMAN>run {
execute script NightlyBackup;
}
RMAN>delete script NightlyBackup;
RESTORE AND RECOVER A DATAFILE:-
RMAN>run {
allocate channel c1 type disk;
set newname for datafile 2 to '/disk1/data/df2.dbf';
sql "alter database datafile '/disk2/data/df2.dbf' offline";
restore datafile 2;
switch datafile 2;
recover datafile 2;
sql "alter database datafile '/disk1/data/df2.dbf' online";
}
SOME RECOVER STATEMENTS:-
SQL>recover database until cancel;
SQL>recover database until time '1997-12-04:14:22:03';
SQL>recover database until time '1997-12-04:14:22:03' using backup controlfile;
RMAN>run {
Allocate channel c1 type DISK;
Allocate channel c2 type DISK;
Set until time = '1997-12-09:11:44:00';
Restore database;
Recover database;
SQL "alter database open resetlogs";
}
EXPORT AND IMPORT UTILITY:-
The export and import utility is a logical method of backup; this utility moves the
data from one database to another database. Only users with DBA role or
EXP_FULL_DATABASE role can export in full database mode. In following example
entire database is exported on the file dba.dmp with all grants and all the data.
SIMPLE EXAMPLE OF EXP AND IMP:
[oracle@saurabh Desktop]$ export ORACLE_SID=neelu
[oracle@saurabh Desktop]$ exp
Export: Release 11.2.0.1.0 - Production on Mon Aug 12 13:21:41 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Username: scott/tiger
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0
Production
With the Partitioning, OLAP, Data mining and Real Application testing options
Enter array fetch buffer size: 4096 > enter

Export file: expdat.dmp > diksha.dmp
(1)E(ntire database), (2)U(sers), or (3)T(ables): (2)U > 3
Export table data (yes/no): yes > yes
Compress extents (yes/no): yes > enter
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
Server uses AL32UTF8 character set (possible charset conversion)
Note: table data (rows) will not be exported
About to export specified tables via Conventional Path ...
Table (T) or Partition (T: P) to be exported: (RETURN to quit) > EMP
. . exporting table                    EMP         14 rows exported
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
Table (T) or Partition (T: P) to be exported: (RETURN to quit) > dept
. . exporting table                    DEPT         4 rows exported
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
Table (T) or Partition (T: P) to be exported: (RETURN to quit) > enter
Export terminated successfully with warnings.
[oracle@saurabh Desktop]$ export ORACLE_SID=diksha
[oracle@saurabh Desktop]$ imp
Import: Release 11.2.0.1.0 - Production on Mon Aug 12 13:23:45 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Username: saurabh/diksha
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0
Production
With the Partitioning, OLAP, Data mining and Real Application testing options
Import data only (yes/no): no > no
Import file: expdat.dmp > diksha.dmp
Enter insert buffer size (minimum is 8192) 30720> enter
Export file created by EXPORT: V11.02.00 via conventional path
Warning: the objects were exported by SCOTT, not by you

Import done in US7ASCII character set and AL16UTF16 NCHAR character set
Import server uses AL32UTF8 character set (possible charset conversion)
List contents of import file only (yes/no): no > enter
Ignore create error due to object existence (yes/no): no > yes
Import grants (yes/no): yes > enter
Import table data (yes/no): yes > no
Import entire export file (yes/no): no > yes
. Importing SCOTT's objects into SAURABH
. Importing SCOTT's objects into SAURABH
About to enable constraints
Import terminated successfully without warnings.
[oracle@saurabh Desktop]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon Aug 12 13:24:50 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data mining and Real Application testing options
SQL> conn saurabh/diksha
Connected
SQL> select * from tab;

TNAME                          TABTYPE    CLUSTERID
------------------------------ ---------- ----------
DEPT                           TABLE
EMP                            TABLE
STUDENT                        TABLE

SQL> exit
PARAMETER FILES METHOD:-
$>export ORACLE_SID=nbss1 (SID means system identifier)
Create a params.dat file which contains the following information

$>vi params.dat
FILE=dba.dmp
GRANTS=y
FULL=y
ROWS=y
$>exp system/nbsingh1 PARFILE=params.dat
COMMAND LINE METHOD:$>export ORACLE_SID=nbss1
$>exp system/nbsingh1 FULL=y FILE=dba.dmp GRANTS=y ROWS=y
EXAMPLE OF EXPORT IN USER MODE:-
$>exp scott/tiger PARFILE=params.dat
The params.dat file contains
FILE=scott.dmp
OWNER=scott
GRANTS=y
ROWS=y
OR
$>exp scott/tiger FILE=scott.dmp OWNER=scott GRANTS=y ROWS=y COMPRESS=y
You can also use export in following way
$> exp system/nbsingh1 TABLES= (a, scott.b, c, mary.d)
Here a and c tables belongs to system and b table belong to Scott user and d table
belong to Mary user.
THE DBA CAN EXPORT TABLES FOR TWO USERS:-
$> exp system/nbsingh1 PARFILE=params.dat
The params.dat file contain
FILE=expdat.dmp
//THIS IS A DEFAULT FILE
TABLES= (scott.EMP, blake.DEPT)

GRANTS=y
INDEXES=y
OR
$> exp system/nbsingh1 FILE=expdat.dmp TABLES= (scott.EMP, blake.DEPT) GRANTS=y
INDEXES=y
A USER CAN EXPORT TABLES THAT HE OWNS:-
$> exp scott/tiger PARFILE=params.dat
The params.dat file contain
FILE=scott.dmp
TABLES= (EMP, DEPT)
ROWS=y
COMPRESS=y
OR
$>exp scott/tiger FILE=scott.dmp TABLES= (EMP, DEPT) ROWS=y COMPRESS=y
USING PATTERN MATCHING TO EXPORT VARIOUS TABLES:-
$>exp system/nbsingh1 PARFILE=params.dat
The params.dat file contains
FILE=expdat.dmp
//THIS IS A DEFAULT FILE
TABLES= (scott.%P%, blake.%, scott.%S%)
IN A PARTITION-LEVEL EXPORT YOU CAN SPECIFY THE PARTITIONS AND SUBPARTITIONS OF THE
TABLE THAT YOU WANT TO EXPORT.
$>exp scott/tiger PARFILE=params.dat
The params.dat file contain
TABLES= (EMP: M)
ROWS=y
OR
$>exp scott/tiger TABLES= (EMP: M) ROWS=y
EXPORTING COMPOSITE PARTITIONS:- Assume that table EMP has two partitions, m and z; suppose that partition m has two subpartitions sp1 and sp2, and partition z has subpartitions sp3 and sp4.

$>exp scott/tiger PARFILE=params.dat
The params.dat file contain
TABLES= (EMP: M, EMP: SP4)
ROWS=y
OR
$>exp scott/tiger TABLES= (EMP: M, EMP: SP4) ROWS=y
Export provides two methods for exporting table data:-
CONVENTIONAL PATH EXPORT-
Uses a SQL SELECT statement to extract the data from the tables. Data is read from disk into the buffer cache and rows are transferred to the evaluation buffer. The data, after passing expression evaluation, is transferred to the export client, which then writes the data into the export file.
DIRECT PATH EXPORT-
Is much faster than a conventional path export because data is read from disk into the buffer cache and rows are transferred directly to the export client; the evaluation buffer is bypassed. The data is already in the format that export expects, thus avoiding unnecessary data conversion. The data is transferred to the export client, which then writes the data into the export file.
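For example, a direct path export only needs the DIRECT=y parameter described later in this section (the file name here is just an example):
$>exp scott/tiger FILE=scott_direct.dmp DIRECT=y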

CONSISTENT:- Default: y Specifies whether or not Export uses the SET TRANSACTION READ
ONLY statement to ensure that the data seen by Export is consistent to a single
point in time and does not change during the execution of the exp command. You
should specify CONSISTENT=y when you anticipate that other applications will be
updating the target data after an export has started. If you specify CONSISTENT=n,
each table is usually exported in a single transaction. However, if a table
contains nested tables, the outer table and each inner table are exported as
separate transactions. If a table is partitioned, each partition is exported as a
separate transaction. Therefore, if nested tables and partitioned tables are being
updated by other applications, the data that is exported could be inconsistent. To
minimize this possibility, export those tables at a time when updates are not
being done.
The following chart shows a sequence of events by two users: user1 exports
partitions in a table and user2 updates data in that table.

Time Sequence   user1                        user2
1               Begins export of TAB:P1
2                                            Updates TAB:P2
3                                            Updates TAB:P1
4                                            Commits transaction
5               Ends export of TAB:P1
6               Exports TAB:P2

If the export uses CONSISTENT=y, none of the updates by user2 are written to the
export file.
If the export uses CONSISTENT=n, the updates to TAB: P1 are not written to the
export file. However, the updates to TAB: P2 are written to the export file
because the update transaction is committed before the export of TAB: P2 begins.
As a result, the user2 transaction is only partially recorded in the export file,
making it inconsistent. If you use CONSISTENT=y and the volume of updates is
large, the rollback segment usage will be large. In addition, the export of each
table will be slower because the rollback segment must be scanned for uncommitted
transactions.
Keep in mind the following points about using CONSISTENT=y:
CONSISTENT=y is unsupported for exports that are performed when you are connected
as user SYS or you are using AS SYSDBA, or both.
Export of certain metadata may require the use of the SYS schema within recursive
SQL. In such situations, the use of CONSISTENT=y will be ignored. Oracle
Corporation recommends that you avoid making metadata changes during an export
process in which CONSISTENT=y is selected.
To minimize the time and space required for such exports, you should export tables
that need to remain consistent separately from those that do not. For example,
export the EMP and dept tables together in a consistent export, and then export
the remainder of the database in a second pass.
A "snapshot too old" error occurs when rollback space is used up, and space taken
up by committed transactions is reused for new transactions. Reusing space in the
rollback segment allows database integrity to be preserved with minimum space
requirements, but it imposes a limit on the amount of time that a read-consistent
image can be preserved. If a committed transaction has been overwritten and the
information is needed for a read-consistent view of the database, a "snapshot too
old" error results.
To avoid this error, you should minimize the time taken by a read-consistent
export. (Do this by restricting the number of objects exported and, if possible,
by reducing the database transaction rate.) Also, make the rollback segment as
large as possible.
CONSTRAINTS: - Default: y specifies whether or not the Export utility exports
table constraints.
DIRECT: - Default: n specifies whether you use direct path or conventional path
Export. Specifying DIRECT=y causes Export to extract data by reading the data
directly, bypassing the SQL command-processing layer (evaluating buffer). This
method can be much faster than a conventional path Export.
FILE:- Default: expdat.dmp specifies the names of the export files. The default
extension is dmp, but you can specify any extension.
FILESIZE: - Default: Data is written to one file until the maximum size.
FULL:-Default: n Indicates that the Export is a full database mode Export (that
is, it exports the entire database). Specify FULL=y to export in full database
mode. You need to have the EXP_FULL_DATABASE role to export in this mode.
GRANTS:-Default: y specifies whether or not the Export utility exports object
grants. The object grants that are exported depend on whether you use full
database mode or user mode. In full database mode, all grants on a table are
exported. In user mode, only those granted by the owner of the table are exported.
System privilege grants are always exported.
HELP:-Default: n displays a description of the Export parameters.
INDEXES: - Default: y specifies whether or not the Export utility exports indexes.
LOG: - Default: none specifies a filename to receive informational and error
messages. For example:
$>exp SYSTEM/password LOG=export.log
If you specify this parameter, messages are logged in the log file and displayed
to the terminal display.

OWNER:-Default: none Indicates that the Export is a user-mode Export and lists the
users whose objects will be exported. If the user initiating the export is the
DBA, multiple users may be listed.
PARFILE: - Default: none specifies a filename for a file that contains a list of
Export parameters.
QUERY:-Default: none
ROWS: - Default: y specifies whether or not the rows of table data are exported.
STATISTICS:-Default: ESTIMATE specifies the type of database optimizer statistics to generate when the exported data is imported. Options are ESTIMATE, COMPUTE, and NONE.
Transportable tablespaces
With imp and exp only commands are exported and imported; the physical files are not copied. Transporting a tablespace is a mix of logical and physical. Logical imp and exp is a very slow utility.
$>export ORACLE_SID=inf91
$>sqlplus /as sysdba
SQL>SELECT TABLESPACE_NAME||' '||FILE_NAME||' '||(BYTES/1024)/1024 FROM DBA_DATA_FILES;
TABLESPACE_NAME||''||FILE_NAME||''|| (BYTES/1024)/1024
---------------------------------------------------------------------------------USERS /oracle/app/oracle/oradata/inf91/users01.dbf 5
UNDOTBS1 /oracle/app/oracle/oradata/inf91/undotbs01.dbf 75
SYSAUX /oracle/app/oracle/oradata/inf91/sysaux01.dbf 480
SYSTEM /oracle/app/oracle/oradata/inf91/system01.dbf 680
SQL>create tablespace ts1 datafile '/oracle/app/oracle/oradata/inf91/ts1.ora' size 10m;
SQL>create user user23 identified by user23 default tablespace ts1 quota unlimited on ts1;
SQL>grant connect, resource to user23;
SQL>conn user23/user23
SQL>create table test (a number (10));
SQL>insert into test values (&no);
SQL>commit
SQL>exit
Now we transfer this table space to other database
$>sqlplus /as sysdba
SQL> sho parameter db_blo

SQL>EXECUTE dbms_tts.transport_set_check ('ts1', TRUE);
Check for self content after invoking this PL/SQL package, you can see all
violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of
tablespaces is self-contained, this view is empty.
SQL>SELECT * FROM TRANSPORT_SET_VIOLATIONS;
Now we will generate a transportable tablespace that you want to transport. For
this purpose we have to follow following steps
SQL>ALTER TABLESPACE ts1 READ ONLY;
SQL>exit
$>mkdir load
$>cd load
$>exp \'/ as sysdba\' TRANSPORT_TABLESPACE=y TABLESPACES=ts1 TRIGGERS=y CONSTRAINTS=n GRANTS=n FILE=ts1.dmp
Note: - You must always specify TABLESPACES. In this example, we also specify
that:
Triggers are to be exported.
If you set TRIGGERS=y, triggers are exported without a validity check. Invalid
triggers cause compilation errors during the subsequent import. If you
set TRIGGERS=n, triggers are not exported.
Referential integrity constraints are not to be exported
Grants are not to be exported.
The name of the structural information export file to be created is expdat.dmp.
If you are performing TSPITR or transport with a strict containment check, use:
EXP TRANSPORT_TABLESPACE=y TABLESPACES= (sales_1, sales_2)
TTS_FULL_CHECK=Y FILE=expdat.dmp
If the tablespace sets being transported are not self-contained, exports fails and
indicate that the transportable set is not self-contained. You must then return to
Step 1 to resolve all violations.
$>ls -l
It will show you the ts1.dmp file.
$>sqlplus /as sysdba
SQL>SELECT TABLESPACE_NAME||' '||FILE_NAME||' '||(BYTES/1024)/1024 FROM DBA_DATA_FILES;
TABLESPACE_NAME||''||FILE_NAME||''||(BYTES/1024)/1024
---------------------------------------------------------------------------------USERS /oracle/app/oracle/oradata/inf91/users01.dbf 5
UNDOTBS1 /oracle/app/oracle/oradata/inf91/undotbs01.dbf 75
SYSAUX /oracle/app/oracle/oradata/inf91/sysaux01.dbf 480
SYSTEM /oracle/app/oracle/oradata/inf91/system01.dbf 680
Ts1 /oracle/app/oracle/oradata/inf91/ts1.ora 10
The datafile for ts1 is at the location /oracle/app/oracle/oradata/inf91/ts1.ora
SQL>exit
$>cd /oracle/app/oracle/oradata/inf91
$>cp ts1.ora /oracle/load
$>cd /oracle/load
$>ls -l
Here we get the following two files
ts1.ora and ts1.dmp (a combination of physical and logical)
$>export ORACLE_SID=db14
$>sqlplus /as sysdba
SQL>create user user23 identified by user23 default tablespace ts1 quota unlimited on ts1;
SQL>grant connect, resource to user23;

SQL>alter user sys identified by sys;
To plug in a tablespace set, perform the following tasks:
Plug in the tablespaces and integrate the structural information using the Import
utility.
IMP TRANSPORT_TABLESPACE=y FILE=expdat.dmp
DATAFILES= ('/db/sales_jan','/db/sales_feb',...)
TABLESPACES= (sales_1,sales_2) TTS_OWNERS=(dcranney,jfee)
FROMUSER= (dcranney, jfee) TOUSER=(smith,williams)
$> imp TRANSPORT_TABLESPACE=y FILE=ts1.dmp
DATAFILES=/oracle/load/ts1.ora
TABLESPACES=ts1 TTS_OWNERS=user23 FROMUSER=user23
TOUSER=user23
$>export ORACLE_SID=db14
$>sqlplus /as sysdba
SQL>SELECT TABLESPACE_NAME||' '||FILE_NAME||' '||(BYTES/1024)/1024 FROM DBA_DATA_FILES;
TABLESPACE_NAME||''||FILE_NAME||''|| (BYTES/1024)/1024
---------------------------------------------------------------------------------USERS /oracle/app/oracle/oradata/db14/users01.dbf 5
UNDOTBS1 /oracle/app/oracle/oradata/db14/undotbs01.dbf 75
SYSAUX /oracle/app/oracle/oradata/db14/sysaux01.dbf 480
SYSTEM /oracle/app/oracle/oradata/db14/system01.dbf 680
Ts1 /oracle/load/ts1.ora 10
In this example we specify the following:
TRANSPORT_TABLESPACE=y tells the Export utility that we are transporting a
tablespace. The exported file containing the metadata for the tablespaces
is expdat.dmp.
DATAFILES specifies the datafiles of the transported tablespaces and must be
specified. The tablespace names are sales_1 and sales_2.
When you specify TABLESPACES, the supplied tablespace names are compared to those
in the export file. Import returns an error if there is any mismatch. Otherwise,
tablespace names are extracted from the export file.
TTS_OWNERS lists all users who own data in the tablespace set.
When you specify TTS_OWNERS, the user names are compared to those in the export
file. Import returns an error if there is any mismatch. Otherwise, owner names are
extracted from the export file. FROMUSER and TOUSER are specified to change the
ownership of database objects. If you do not specify FROMUSER and TOUSER, all
database objects (such as tables and indexes) are created under the same user as
in the source database. Those users must already exist in the target database. If
not, import returns an error indicating that some required users do not exist in
the target database.
You can use FROMUSER and TOUSER to change the owners of objects. In this example we specify FROMUSER=(dcranney, jfee) and TOUSER=(smith, williams). Objects in the tablespace set owned by dcranney in the source database will be owned by smith in the target database after the tablespace set is plugged in. Similarly, objects owned by jfee in the source database will be owned by williams in the target database. In this case, the target database is not required to have users dcranney and jfee, but must have users smith and williams.
After this statement successfully executes, all tablespaces in the set being
copied remain in read-only mode. Check the import logs to ensure no error has
occurred.
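Because the plugged-in tablespaces stay read-only, a minimal follow-up sketch (assuming the ts1 tablespace used above) is to make them writable again once the import has been verified:
SQL>ALTER TABLESPACE ts1 READ WRITE;
The same command can be run on the source database if ts1 should become writable there again.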
Is my Oracle 32-bit or 64-bit?
Run following query
SQL> select length (addr)*4 || '-bits' word_length from v$process where ROWNUM =1;
OR
SQL>Select PLATFORM_ID, PLATFORM_NAME from V$database;
OR
$>cd $ORACLE_HOME/bin
$>file oracle
This will show output of the following kind:
oracle: ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not stripped
SQL>desc product_component_version
It will show which oracle you are using.
SQL>desc V$version
SQL>select * from V$version;
SQL>select * from dba_registry;
SQL>desc V$database
$>cd /proc
$>cat meminfo | grep -i mem
This will show the size of RAM.
Is my operating system 32-bit or 64-bit?
SOLARIS: #/bin/isainfo -kv
AIX: #bootinfo -K
     #getconf HARDWARE_BITMODE
HP-UX: #/bin/getconf KERNEL_BITS
LINUX: #grep 'model name' /proc/cpuinfo
OR uname -m, -a, -r
$>cat /etc/*-release
$>cat /proc/version
How can we decrease the average write time for a physical write operation on a datafile?
Increase the number of DBWn processes with the DB_WRITER_PROCESSES parameter.
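A minimal sketch (assuming an spfile is in use; the value 4 is only an example, and DB_WRITER_PROCESSES is a static parameter so a restart is needed):
SQL>ALTER SYSTEM SET DB_WRITER_PROCESSES=4 SCOPE=SPFILE;
SQL>SHUTDOWN IMMEDIATE
SQL>STARTUP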
Which Oracle process is used to resolve deadlocks?
LMON: the lock monitor process monitors all instances in a cluster to detect the failure of an instance. It then facilitates the recovery of the global locks held by the failed instance. It is also responsible for reconfiguring locks and other resources when instances leave or are added to the cluster.
How to find the segment owner from an object number?
With the help of DBA_OBJECTS and DBA_SEGMENTS.
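For example (the object number 12345 is hypothetical):
SQL>SELECT o.owner, o.object_name, s.segment_type FROM dba_objects o, dba_segments s WHERE o.owner = s.owner AND o.object_name = s.segment_name AND o.object_id = 12345;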
How to find last deleted/truncated table time information?
SQL> desc DBA_TAB_MODIFICATIONS;
Name               Type
------------------ -------------
TABLE_OWNER        VARCHAR2(30)
TABLE_NAME         VARCHAR2(30)
PARTITION_NAME     VARCHAR2(30)
SUBPARTITION_NAME  VARCHAR2(30)
INSERTS            NUMBER
UPDATES            NUMBER
DELETES            NUMBER
TIMESTAMP          DATE
TRUNCATED          VARCHAR2(3)
NETWORKING:-
NFS stands for Network File System: files available on the main machine are accessed by another machine. For this purpose you need root-level privileges; without them NFS is not possible.
99.99.33.101>su - root
With the help of the above command we become root.
Suppose we are working on 99.99.33.101 and we want to share a directory of this machine with another machine.
99.99.33.101>showmount -e atpl33101
This command shows whether any file system of this machine is shared or not.
99.99.33.101>df -h
Suppose we are sharing the /orator directory.
99.99.33.101>cat /etc/exports
99.99.33.101>vi /etc/exports   (we will edit this file)
/orator *(rw,sync)
This means that the directory is accessible to any machine. If we write it the following way:
/orator 99.99.33.*(rw,sync)
it is accessible only to machines having an address of the form 99.99.33.n. * means all machines on the network, rw means read and write, and sync means changes made here are reflected on the remote machine immediately. After doing this:
99.99.33.101>service nfs start
99.99.33.101>service netfs start
99.99.33.101>service nfslock restart
99.99.33.101>service nfs restart
99.99.33.101>showmount -e atpl33101
It will show:
/orator *
(output of the above command)
This shows that the directory is now shared and can be accessed by other machines. Now, if we want to mount this directory on another machine, follow the steps below:
We want to mount the /orator directory on 99.99.23.13. After completing the above steps, the next step is:
99.99.33.101>cat /etc/hosts
99.99.33.101   atpl33101.server.com   atpl33101
(output of the above command)
Copy the above entry.
99.99.23.13>vi /etc/hosts
And paste it here:
99.99.33.101   atpl33101.server.com   atpl33101
This is called mapping the IP address to the machine name.
99.99.23.13>mkdir /rijvaan   (this is the directory where we will mount /orator)
99.99.23.13>mount atpl33101:/orator /rijvaan
Mounting is completed. For cross-checking:
99.99.23.13>df -h
(it will show output of the following type)
atpl33101:/orator   39G   34G   3G   92%   /rijvaan
Now whatever you do in /rijvaan will be reflected on the remote machine.
For automatic mounting:
99.99.23.13>vi /etc/fstab
atpl33101:/orator   /rijvaan   nfs   defaults   0 0
For unmounting:
99.99.23.13>umount /rijvaan
99.99.23.13>df -h
99.99.23.13>service netfs restart
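On most Linux distributions the export list can also be refreshed without restarting the nfs service after editing /etc/exports (a hedged alternative to the service restarts shown above):
99.99.33.101>exportfs -ra   (re-exports every entry in /etc/exports)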
Example:
99.99.33.101>mkdir /rinky   (we want to mount this on the other machine)
99.99.33.101>vi /etc/exports
/rinky *(rw,sync)
99.99.33.101>service nfs restart
99.99.33.101>showmount -e atpl33101
99.99.23.13>mkdir /appu
99.99.23.13>mount atpl33101:/rinky /appu
99.99.23.13>df -h
It is now mounted on 99.99.23.13, but the directory has only root-level permissions. If you want the oracle user to be able to use it, you have to do some extra work:
99.99.33.101>chown -R oracle:oinstall /rinky
99.99.33.101>chmod -R 755 /rinky
Here oracle is a user and oinstall is a group.
DATABASE CLONING WITH RMAN:-
Suppose we are working on the following two machines:
1. 99.99.23.15 - db30 (the source, i.e. the RMAN target database)
2. 99.99.23.14 - dup30 (the clone, i.e. the auxiliary database) and orcl (the recovery catalog database used for the RMAN backup)
In first phase we start the listener
$>lsnrctl start
In the second phase create connection strings:
1. One string for orcl
2. One string for db30
3. One string for dup30; also define a static listener entry for dup30, where the global database name is dup30.server.com and the SID is dup30
99.99.23.14>netmgr   (create connection strings for all databases)
99.99.23.14>lsnrctl stop
99.99.23.14>lsnrctl start
Second phase is completed.
In the third phase we set up networking:
99.99.23.15>cat /etc/hosts
The entry of this file will be copied to 99.99.23.14:
99.99.23.14>vi /etc/hosts
Paste it here.
99.99.23.15 root>mkdir /backup   (here we put the default backup of db30)
99.99.23.15 root>chown -R oracle:oinstall /backup/
99.99.23.15 root>chmod -R 755 /backup/
/backup is the default backup location of db30. Now we will share this directory, /backup, with the other machine, 99.99.23.14:
99.99.23.15 root>vi /etc/exports
/backup *(rw,sync)
99.99.23.15 root>service nfs restart
99.99.23.15 root>service nfslock restart
99.99.23.15 root>service netfs restart
99.99.23.15 root>showmount -e atpl2315   (machine name)
Now the /backup directory has been shared. After sharing this directory we will mount it on the 99.99.23.14 machine. For this purpose we do the following:
99.99.23.14 root>mkdir /backup
99.99.23.14 root>mkdir /backup_remote
99.99.23.14 root>chown -R oracle:oinstall /backup/
99.99.23.14 root>chmod -R 777 /backup
99.99.23.14 root>chown -R oracle:oinstall /backup_remote/
99.99.23.14 root>chmod -R 777 /backup_remote
99.99.23.14 root>mount atpl2315:/backup /backup_remote
For permanent mounting put the entry in the following file:
99.99.23.14 root>vi /etc/fstab
99.99.23.15>export ORACLE_SID=db30
(in x-manager)
99.99.23.15>sqlplus /as sysdba
SQL>create pfile='/backup/init1.ora' from spfile;
SQL>alter user sys identified by oracle;
SQL>alter user system identified by oracle;
SQL>shutdown immediate
SQL>exit
99.99.23.15>cd /backup
99.99.23.15>VI init1.ora
And change in following location
Db_recovery_file_dest=/backup/flash_recovery_area
: wq
99.99.23.15>mkdir flash_recovery_area
(inside /backup)
99.99.23.15>sqlplus /as sysdba
SQL>create spfile from pfile='/backup/init1.ora';
SQL>startup mount
SQL>alter database archivelog;
SQL>alter database open;
SQL>alter database force logging;
SQL>archive log list;
SQL>show parameter db_rec;
Here we are just checking the connection string
99.99.23.14>sqlplus system/oracle@db30
SQL>select * from V$instance;
SQL>exit
Here we will create a tablespace, or increase the size of an existing tablespace, for the RMAN backup.
99.99.23.14>export ORACLE_SID=orcl   (for the recovery catalog)
99.99.23.14>sqlplus /as sysdba


SQL>select tablespace_name||' '||file_name||' '||(bytes/1024)/1024 from dba_data_files;
SQL>alter database datafile '/oracle/orcl/user01.dbf' resize 500m;
SQL>alter database datafile '/oracle/orcl/user01.dbf' autoextend on next 100m;
SQL>alter database default tablespace users;
SQL>create user RMAN30 identified by nbsingh1;
SQL>grant connect, resource, recovery_catalog_owner, dba to RMAN30;
Now we have completed the primary step.
99.99.23.14>cd /backup_remote/
99.99.23.14>ls -l
Note that /backup_remote is the mount point of the /backup directory of 99.99.23.15, which is the default backup location of database db30.
Note that in the 99.99.23.14>/backup directory we keep all the clone files. The source files are visible in the 99.99.23.14>/backup_remote directory through the mount; after editing, we copy them from 99.99.23.14>/backup_remote into 99.99.23.14>/backup.
We have already created the init1.ora file in 99.99.23.15>/backup, so it is visible in the 99.99.23.14>/backup_remote directory.
99.99.23.14>cp init1.ora /backup
99.99.23.14>cd /backup
99.99.23.14>vi init1.ora
(we will modify this file for dup30)
(Remove all parameters before audit_file_dest and replace every reference to db30; everything should point inside the /backup directory.)
audit_file_dest=/backup/adump
control_files=/backup/dup30/control.ctl
db_name=dup30
db_recovery_file_dest=/backup/flash_recovery_area
diagnostic_dest=/backup/dup30
db_file_name_convert='/oracle/app/oracle/oradata/db30','/backup/dup30'
log_file_name_convert='/oracle/app/oracle/oradata/db30','/backup/dup30'
To find the source paths, log in to the 99.99.23.15 machine, export ORACLE_SID=db30 and run the following query:
SQL>select tablespace_name||' '||file_name||' '||(bytes/1024)/1024 from dba_data_files;
99.99.23.14>pwd
/backup
99.99.23.14>mkdir adump flash_recovery_area diag
99.99.23.14>mkdir dup30
(Now we will export dup30)
99.99.23.14>export ORACLE_SID=dup30
99.99.23.14>orapwd file=$ORACLE_HOME/dbs/orapwdup30 password=oracle
99.99.23.14>sqlplus sys/oracle@dup30 as sysdba
SQL>create spfile from pfile='/backup/init1.ora';
SQL>startup force nomount
Now open a second window on 99.99.23.14 and export the catalog database, orcl:
99.99.23.14>rman target sys/oracle@db30 catalog RMAN30/nbsingh1@orcl auxiliary sys/oracle@dup30
RMAN>create catalog;
RMAN>register database;
RMAN>run {
ALLOCATE CHANNEL disk1 DEVICE TYPE DISK FORMAT '/backup/dec30_%s.%t.%p';
BACKUP DATABASE PLUS ARCHIVELOG;
BACKUP CURRENT CONTROLFILE FORMAT '/backup/dec_%s.%t.%p';
}
(The format path /backup is the original path, so the backup goes to 99.99.23.15>/backup.)
This backup goes to the 99.99.23.15 /backup directory and is also visible in the 99.99.23.14 /backup_remote directory because the mount is in place. Now if you check:
99.99.23.15>cd /backup
99.99.23.15>ls -l (or ll)
It will show the backup.
99.99.23.14>cd /backup_remote
99.99.23.14>ls -l
99.99.23.14>cp -r * /backup
This means that we have copied all files of the 99.99.23.14 /backup_remote directory to the 99.99.23.14 /backup directory. After completing the copy we go back to the window where the catalog database is open and RMAN is already running.
RMAN>run {
Allocate channel t1 type disk;
Allocate auxiliary channel t2 type disk;
Duplicate target database to dup30 NOFILENAMECHECK;
}
NOFILENAMECHECK Prevents RMAN from checking whether target datafiles sharing the
same names as the duplicated files are in use. NOFILENAMECHECK option is required
when the standby and primary datafiles and logs have identical filenames.

If you want the duplicate filenames to be the same as the target filenames, and if the databases are on different hosts, then you must specify NOFILENAMECHECK.
If duplicating a database on the same host as the target database, do not specify the NOFILENAMECHECK option. Otherwise, RMAN may signal this error:
RMAN-10035: exception raised in RPC:
ORA-19504: failed to create file "/oracle/dbs/tbs_01.f"
ORA-27086: skgfglk: unable to lock file - already in use
SVR4 Error: 11: Resource temporarily unavailable
Additional information: 8
RMAN-10031: ORA-19624 occurred during call to DBMS_BACKUP_RESTORE.RESTOREBACKUPPIECE
We can also use the following command:
RMAN>run {
Allocate auxiliary channel c1 device type disk;
Allocate channel c2 device type disk;
Duplicate target database to CLONE;
}
Now cloning is completed.
SECURITY DOMAIN:-
A user's security domain covers the following:
1. Account locking
2. Default tablespace
3. Temporary tablespace
4. Tablespace quotas
5. Resource limits
6. Direct privileges
7. Role privileges
8. Authentication mechanism
CREATING USER-:
1. Choose a username and password.
2. Identify tablespaces to store objects.
3. Decide tablespace quota
4. Assign default and temporary tablespaces.
5. Create the user.
6. Grant roles and privileges to the user.
SQL>CREATE USER scott IDENTIFIED BY tiger DEFAULT TABLESPACE data01 TEMPORARY TABLESPACE temp QUOTA 15m ON data01 PASSWORD EXPIRE;
SQL>ALTER USER Scott IDENTIFIED BY xyz PASSWORD EXPIRE;
SQL>ALTER USER Scott QUOTA 0 ON data01;
NOTE: SYSTEM is the default permanent tablespace in the database when no other default tablespace has been set.
MONITORING USERS:
DBA_TS_QUOTAS: USERNAME, TABLESPACE_NAME, BYTES, MAX_BYTES, BLOCKS, MAX_BLOCKS
DBA_USERS: USERNAME, USER_ID, CREATED, ACCOUNT_STATUS, LOCK_DATE, EXPIRY_DATE, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE
SQL>SELECT USERNAME||' '||USER_ID||' '||ACCOUNT_STATUS FROM DBA_USERS WHERE USERNAME LIKE '%&U';
PROFILES:-
Are named sets of resource and password limits
Are assigned to users
Can be enabled or disabled
Can relate to the DEFAULT profile
Can limit system resources at the session or call level
SETTING RESOURCE LIMITS AT SESSION LEVEL:-
Resource - description
CPU_PER_SESSION - Total CPU time measured in hundredths of seconds
SESSIONS_PER_USER - Number of concurrent sessions allowed for each username
CONNECT_TIME - Elapsed connect time measured in minutes
IDLE_TIME - Periods of inactive time measured in minutes
LOGICAL_READS_PER_SESSION - Number of data blocks (physical and logical reads)
PRIVATE_SGA - Private space in the SGA measured in bytes (for MTS only)
SETTING RESOURCE LIMITS AT CALL LEVEL-
Resource - description
CPU_PER_CALL - CPU time per call in hundredths of seconds
LOGICAL_READS_PER_CALL - Number of data blocks
SQL>CREATE PROFILE system_manager LIMIT SESSIONS_PER_USER UNLIMITED CPU_PER_SESSION UNLIMITED CPU_PER_CALL 3000 CONNECT_TIME 45 LOGICAL_READS_PER_SESSION DEFAULT LOGICAL_READS_PER_CALL 1000 PRIVATE_SGA 15K COMPOSITE_LIMIT 5000000;
CREATING AND ASSIGNING A PROFILE TO A USER-
SQL>CREATE PROFILE my_prof LIMIT SESSIONS_PER_USER 2
CPU_PER_SESSION 10000 IDLE_TIME 5 CONNECT_TIME 10;
SQL>CREATE USER user3 IDENTIFIED BY user3 DEFAULT TABLESPACE data01 TEMPORARY
TABLESPACE temp QUOTA unlimited ON data01 PROFILE my_prof;
SQL>ALTER USER scott1 PROFILE my_prof;
SQL> sho parameter res
ENABLING THE RESOURCE LIMIT:-
Set RESOURCE_LIMIT=TRUE in the parameter file, or dynamically:
SQL>ALTER SYSTEM SET RESOURCE_LIMIT=TRUE SCOPE=BOTH;
ALTERING A PROFILE-
SQL>ALTER PROFILE default LIMIT SESSIONS_PER_USER 5 CPU_PER_CALL 3600 IDLE_TIME 30;
DROPPING A PROFILE-
SQL>DROP PROFILE my_prof CASCADE;
CREATING A PROFILE WITH PASSWORD SETTINGS-
SQL>CREATE PROFILE grace_5 LIMIT FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LIFE_TIME 30 PASSWORD_REUSE_TIME 30 PASSWORD_VERIFY_FUNCTION verify_function PASSWORD_GRACE_TIME 5;
PASSWORD SETTINGS-
Parameter - description
FAILED_LOGIN_ATTEMPTS: Number of failed login attempts before lockout of the account
PASSWORD_LOCK_TIME: Number of days for which the account remains locked after the specified number of failed login attempts
PASSWORD_LIFE_TIME: Lifetime of the password in days, after which the password expires
PASSWORD_GRACE_TIME: Grace period in days for changing the password after the
first successful login after the password has expired
PASSWORD_REUSE_TIME: Number of days before a password can be reused
PASSWORD_REUSE_MAX: Maximum number of times a password can be reused
PASSWORD_VERIFY_FUNCTION: PL/SQL function that makes a password complexity check before a password is assigned
USER-PROVIDED PASSWORD FUNCTION: Must be created in the SYS schema with the following specification:
function_name (
Userid_parameter IN VARCHAR2 (30),
Password_parameter IN VARCHAR2 (30),
Old_password_parameter IN VARCHAR2 (30))
RETURN BOOLEAN

PASSWORD VERIFICATION FUNCTION:-
Minimum length is four characters
Password should not be equal to the username
At least one alphabetic, one numeric, and one special character
At least three letters should be different from the previous password
VIEWING PASSWORD INFORMATION:-
DBA_USERS: PROFILE, USERNAME, ACCOUNT_STATUS, LOCK_DATE, EXPIRY_DATE
DBA_PROFILES: PROFILE, RESOURCE_NAME, RESOURCE_TYPE (PASSWORD), LIMIT
MANAGING PRIVILEGE
Types of privileges:
SYSTEM: users can perform particular actions in the database.
OBJECT: users can access and manipulate a specific object.
SYSDBA AND SYSOPER PRIVILEGES:
SYSOPER: STARTUP, SHUTDOWN, ALTER DATABASE OPEN | MOUNT, ALTER DATABASE BACKUP CONTROLFILE, ALTER TABLESPACE BEGIN/END BACKUP, RECOVER DATABASE, ALTER DATABASE ARCHIVELOG, RESTRICTED SESSION
SYSDBA: all SYSOPER privileges WITH ADMIN OPTION, CREATE DATABASE, DROP DATABASE, RECOVER DATABASE UNTIL
SQL>grant select, insert, update, delete on EMP to user1;
(Object privilege)
SQL> grant select, insert, update, delete on EMP to user1 with grant option;
Granting / Revoking System Privileges:-
SQL>GRANT CREATE SESSION, CREATE TABLE TO user1;
SQL>GRANT CREATE SESSION TO Scott WITH ADMIN OPTION;   (system privilege)
SQL>REVOKE CREATE TABLE FROM user1;
SQL>REVOKE SELECT ON EMP FROM user3;   (object privilege revokes cascade)
SQL>REVOKE CREATE SESSION FROM Scott;
SQL>GRANT EXECUTE ON dbms_pipe TO public;
SQL>GRANT UPDATE (ename, sal) ON EMP TO user1 WITH GRANT OPTION;
SQL>REVOKE execute ON dbms_pipe FROM Scott;
DISPLAY OBJECT PRIVILEGES:DBA_TAB_PRIVS, DBA_COL_PRIVS
DISPLAY SYSTEM PRIVILEGES:-
Database level: DBA_SYS_PRIVS (GRANTEE, PRIVILEGE, ADMIN_OPTION)
Session level: SESSION_PRIVS (PRIVILEGE)
Note: ADMIN OPTION is given with system privileges and GRANT OPTION with object privileges. Object privileges have a cascading effect: revoking from one user revokes them from the entire hierarchy of grants made by that user. ADMIN OPTION does not have a cascading effect.
Some other object privileges: ALTER, DELETE, EXECUTE, INDEX, INSERT, REFERENCES, SELECT, UPDATE
SQL>grant update (ename, sal) on EMP to user1 with grant option;
Related views: DBA_ROLES, USER_ROLE_PRIVS, DBA_ROLE_PRIVS, USER_SYS_PRIVS, DBA_SYS_PRIVS, COLUMN_PRIVILEGES, ROLE_ROLE_PRIVS, ROLE_SYS_PRIVS, ROLE_TAB_PRIVS, SESSION_PRIVS, SESSION_ROLES
ROLES: a role is a named group of privileges.
CONNECT and RESOURCE are two roles provided for backward compatibility.
DBA: all system privileges with admin option.
EXP_FULL_DATABASE: privileges to export the DB
IMP_FULL_DATABASE: privileges to import the DB
DELETE_CATALOG_ROLE: DELETE privileges on data dictionary tables
EXECUTE_CATALOG_ROLE: EXECUTE privilege on data dictionary packages
SELECT_CATALOG_ROLE: SELECT privilege on data dictionary tables

BENEFITS OF ROLES:-
Reduced granting of privileges
Dynamic privilege management
Selective availability of privileges
Can be granted through the OS
No cascading revokes
Improved performance
CREATING AND MODIFYING THE ROLESQL>CREATE ROLE sales;
SQL>CREATE ROLE hr IDENTIFIED BY bonus;
SQL>CREATE ROLE mgr IDENTIFIED EXTERNALLY;
SQL>ALTER ROLE sales IDENTIFIED BY commission;
SQL>ALTER ROLE hr IDENTIFIED EXTERNALLY;
SQL>ALTER ROLE hr NOT IDENTIFIED;
ASSIGNING ROLESSQL>GRANT sales TO Scott;
SQL>GRANT sales TO hr;
SQL>GRANT hr TO Scott WITH ADMIN OPTION;

ESTABLISHING DEFAULT ROLES-
ALTER USER Scott
  DEFAULT ROLE hr, sales;
  DEFAULT ROLE ALL;
  DEFAULT ROLE ALL EXCEPT hr;
  DEFAULT ROLE NONE;
SQL>ALTER USER Scott DEFAULT ROLE hr_clerk, sales_clerk;
SQL>ALTER USER Scott DEFAULT ROLE ALL;
SQL>ALTER USER Scott DEFAULT ROLE ALL EXCEPT hr_clerk;
SQL>ALTER USER Scott DEFAULT ROLE NONE;
ENABLING AND DISABLING ROLES:
Disable a role to temporarily revoke the role from a user.
Enable a role to temporarily grant it.
The SET ROLE command enables and disables roles.
Default roles are enabled for a user at login.
A password may be required to enable a role.
SQL>SET ROLE sales IDENTIFIED BY commission;
SQL>SET ROLE ALL EXCEPT sales;
SQL>SET ROLE NONE;
SQL>REVOKE sales FROM Scott;
SQL>REVOKE hr FROM PUBLIC;
SQL>DROP ROLE hr_manager;
DISPLAYING ROLE INFORMATION:-
Role View - Description
DBA_ROLES - All roles which exist in the database
DBA_ROLE_PRIVS - Roles granted to users and roles
ROLE_ROLE_PRIVS - Roles which are granted to roles
DBA_SYS_PRIVS - System privileges granted to users and roles
ROLE_SYS_PRIVS - System privileges granted to roles
ROLE_TAB_PRIVS - Table privileges granted to roles
SESSION_ROLES - Roles which the user currently has enabled
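For example, to see which roles a user holds (the user SCOTT is used only for illustration):
SQL>SELECT grantee, granted_role, admin_option, default_role FROM dba_role_privs WHERE grantee = 'SCOTT';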
PRODUCT_USER_PROFILE Table:-
PRODUCT    USERID   ATTRIBUTE   SCOPE   NUMERIC_VALUE   CHAR_VALUE   DATE_VALUE   LONG_VALUE
SQL*Plus   SCOTT    HOST                                DISABLED
SQL*Plus            INSERT                              DISABLED
SQL*Plus uses the PRODUCT_USER_PROFILE (PUP) table, a table in the SYSTEM account,
to provide product-level security that supplements the user-level security
provided by the SQL GRANT and REVOKE commands and user roles.
When SYSTEM, SYS, or a user authenticating with SYSDBA or SYSOPER privileges
connects or logs in, SQL*Plus does not read the PUP table. Therefore, no
restrictions apply to these users.
Example: Setting Restrictions in the PUP Table
Log in as SYSTEM with the command
$>SQLPLUS SYSTEM/your_password
Insert a row into the PUP table with the command:
SQL>INSERT INTO PRODUCT_USER_PROFILE VALUES ('SQL*Plus', 'SCOTT', 'SELECT', NULL, NULL, 'DISABLED', NULL, NULL);
Connect as SCOTT and try to SELECT something:
CONNECT SCOTT/TIGER;
SQL>SELECT * FROM EMP_DETAILS_VIEW;
SP2-0544: Command SELECT disabled in Product User Profile
To delete this row and remove the restriction from the user SCOTT, CONNECT again as SYSTEM and enter:
SQL>DELETE FROM PRODUCT_USER_PROFILE WHERE USERID = 'SCOTT';
Note: when the CREATE TABLE privilege is granted, the user can also drop and alter the table, and can select, insert, update and delete from it. If you want to disable such associated actions at the SQL*Plus level, you can do it with the help of PUPBLD; PUPBLD allows you to maintain these product-level restrictions.
Login with the system user:
$>sqlplus system/nbsingh1
SQL>@$ORACLE_HOME/sqlplus/admin/pupbld
SQL>desc PRODUCT_USER_PROFILE
SQL>INSERT INTO PRODUCT_USER_PROFILE VALUES ('SQL*Plus', 'SCOTT', 'SELECT', NULL, NULL, 'DISABLED', NULL, NULL);
SQL> conn scott/tiger
SQL>select * from tab;
Nothing will be selected because we have already disabled the SELECT privilege.
Note: this is not applicable for sys and system users.
To remove it
SQL>delete PRODUCT_USER_PROFILE;
SQL>commit;

SOME IMPORTANT TOPICS OF SQL


INDEX:-
An index is a database object which improves the performance of queries. Indexes are mainly created on columns that are frequently used in the WHERE clause, GROUP BY clause and DISTINCT clause of SQL queries. An index logically sorts the column values and physically stores the data away from the table, so the index occupies physical space in the database. An index can have a maximum of 32 columns. Creating an index on the columns of a table does not affect the physical status of the table. Every index internally maintains two columns:
The column on which we create the index.
The ROWID, which is the actual address of the physical row in the database.
In an index the data is stored in ascending or descending order, and the query reads from the index where the data is already sorted.
SQL> select * from EMP where sal>0;
It will show the data in natural order.
SQL> create index i1 on EMP (sal);
Here we create an index i1 on column sal of EMP. Now run the following query:
SQL> select * from EMP where sal>0;
Now it will show the data in ascending order of the sal column. This means that when a column which appears in the WHERE clause has an index, the data selection is done through the index. Now if we run the following query:
SQL> select * from EMP where sal+0>0;
Again it will display the data in natural order, neither ascending nor descending, which means the above query is not using the index. The column appearing in the WHERE clause must appear in its pure form; if a column appears together with an operator or a function, the index will not be used.
The above index is called a binary tree (B-tree) index.
A reverse key index is also a binary tree index; the only difference is that the key value is stored in reverse order.
SQL> create index i1 on EMP (sal);
SQL> alter index i1 rebuild reverse;
Now index i1 has become a reverse key index.
Another method:-
SQL>drop index i1;
SQL>create index i1 on EMP (sal) reverse;
If you want to convert this index back to a normal binary tree index:
SQL>alter index i1 rebuild noreverse;
Here reverse means the key value is reversed and then sorted. You can understand this with the help of the following example:
SQL>select ename, reverse (ename), sal, reverse (to_char (sal)) from EMP;
SQL>select ename, reverse (ename), sal, reverse (to_char (sal)) from EMP order by &column;
Bitmap index - a bitmap index is not suitable for all kinds of data. A bitmap index is suitable for low-cardinality columns having few distinct values; such columns are generally found in data warehousing. In a normal B-tree index Oracle stores the data in sorted order together with the corresponding ROWID of each value: for example, if ename is the key, it is kept in sorted order with its ROWID stored alongside. A bitmap index does not store ROWIDs against the sorted values; it stores bits, 0 or 1. That is why they are called bitmap indexes: 0 for non-availability and 1 for availability of the value.
Example of a bitmap index:
Customer   Marital status   Region    Gender   Income level
101        Single           East      M        1
102        Married          Central   F        4
103        Married          West      F        2
104        Divorced         West      M        4
105        Single           Central   F        2
106        Married          Central   F        3
Here customer is a high-cardinality column; marital status and region are low-cardinality columns.
Bitmap table for region:
Region=east:      1 0 0 0 0 0
Region=central:   0 1 0 0 1 1
Region=west:      0 0 1 1 0 0
Bitmap table for marital status:
Single:     1 0 0 0 1 0
Married:    0 1 1 0 0 1
Divorced:   0 0 0 1 0 0
SQL> create bitmap index bmi1 on customer (marital_status, region);
SQL>select count(*) from customer where marital_status='Married' and region in ('Central','West');
Status=Married:                  0 1 1 0 0 1
Region=Central:                  0 1 0 0 1 1
Region=West:                     0 0 1 1 0 0
Central OR West:                 0 1 1 1 1 1
Married AND (Central OR West):   0 1 1 0 0 1
The OR of the two region bitmaps and the AND with the marital status bitmap are computed directly on the bits, so the matching customers (102, 103 and 106) are identified before any table data is touched.
This means that in a B-tree index the values are selected first and the condition is checked afterwards, because condition checking is applied to the values. In a bitmap index the condition is validated first, because the validation is done on the bits rather than on the data.

BITMAP JOIN INDEX:SQL>Create bitmap index c1 on sales (customer cname) where sales.cid=customer.cid;
SQL>select * from sales, customer where sales.cid=customer.cid and customer.
cname=xyz
ONLINE INDEX:-
SQL>drop index i1;
SQL>create index i1 on EMP (sal) online;
This is also a type of B-tree index; the difference is only in the creation process. Normally, while an index is being created, the table on which it is built is locked: nobody can do any DML operation on the table, only queries. For a very large table an index build might take, say, 2 hours, and for those 2 hours the table would be locked for DML. The ONLINE keyword lets you build the index online: the table remains available to users while the index build is going on and is not locked for most of the build (only in the last phase is it locked for a short time). An online index build works in two phases. In the first phase, without locking the table, it picks up the data from the indexed column (for example sal) as it currently exists, puts it into a temporary segment in the temporary tablespace and sorts it there. In the second phase it briefly locks the table and compares the changes made to the column values during the build with the data already sorted in the temporary area. Because only a few changes have usually taken place, the re-sort is very fast, and the table is locked only for those few minutes.
FUNCTION BASED INDEX:-
This uses the result of a function as the key instead of using the column value as the key.
SQL>create index i1 on EMP (upper (ename));
SQL>select * from EMP where upper (ename) like 'SCOTT';
SQL>create index idx on fbi_tab (l*w);
SQL>select * from fbi_tab where l*w<100;
SKIP SCAN INDEX:-
SQL> create index i1 on EMP (job, sal);
In the above index the first column (job) is known as the leading column and the remaining columns are known as lagging columns.
Up to Oracle 8i:
The leading column must appear in the WHERE clause (with or without the lagging columns) for the index to be used.
If only lagging columns appear in the WHERE clause, the index will not be used.
For example:-
SQL>select * from EMP where job='CLERK';   (index will be used)
SQL>select * from EMP where job='CLERK' and sal>1000;   (index will be used)
SQL>select * from EMP where sal>1000;   (index will not be used)
Changing from Oracle 9i to 11g:
The index can be used (via an index skip scan) even if only lagging columns appear in the WHERE clause.
SQL>select * from EMP where job='CLERK';   (index will be used)
SQL>select * from EMP where sal>1000;   (index will be used)
INDEX ORGANIZED TABLE
SQL> desc user_indexes
SQL>alter table EMP add constraint eno_pk primary key (empno);
SQL>select constraint_name, constraint_type, status from user_constraints where table_name='EMP';
SQL>select index_name||' '||index_type||' '||table_owner||' '||table_name||' '||table_type||' '||uniqueness from user_indexes where table_name='EMP';
In an index organized table (IOT) the index is not a separate logical structure; the table itself is a physical segment in which the data is organized by the primary key. By default it is organized by the primary key and no separate index segment is created; rows are kept in key order, like a dictionary. That is called an IOT. An IOT is very suitable for primary-key-based queries: for tables where most queries are based on the primary key it is very good, because no additional space is occupied by a separate index and the whole table is index organized.
IOT CREATION:-
SQL>Create table sales (
Pno number constraint pno_pk primary key,
Pname varchar (10),
Qty number (10)
) organization index;
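A quick check that the table was really created as an IOT (assuming the sales table above):
SQL>select table_name, iot_type from user_tables where table_name='SALES';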
LIMITATIONS-
Limitations in Oracle 8.0.x:
IOT cannot be partitioned
IOT cannot have LOB columns
IOT cannot have a secondary index
Note: from Oracle 8i all the above restrictions are removed.

PERFORMANCE TUNING OF DATABASE:-
Under performance tuning we perform the following tasks:
Set the STATISTICS_LEVEL for the collection of database statistics
Performance Planning
Instance Tuning
Automatic Performance Tuning
V$ Performance Views
Oracle Performance Tuning:
Tuning methodology
Performance diagnostic tools
I/O, storage and database configuration problems
Latch and lock contention
Optimize sort operations
Multithreaded Server
Performance problems in different applications
Measurable Tuning Goals:
Response time
Database availability
Database hit percentages
Memory utilization
Tuning Goals:
Access the least number of blocks
Cache blocks in memory
Share application code
Read and write data as fast as possible
Ensure users do not wait for resources
Perform backups and housekeeping while minimizing impact
Tuning Steps:
Tune the design
Tune the application
Tune memory
Tune I/O
Tune contention
Tune the operating system

Diagnostic Information: - we can diagnose with the help of the following:
1. Trace files
   - Alert log file
   - Background process trace files
   - User trace files
2. UTLBSTAT.SQL/UTLESTAT.SQL
3. STATSPACK
4. AWR and ADDM (10g)
Events:
1. Oracle wait events
2. OEM events
Top Ten Mistakes Found in Oracle Systems
Bad Connection Management
Bad Use of Cursors and the Shared Pool
Bad SQL
Use of Nonstandard Initialization Parameters
Getting Database I/O Wrong
Redo Log Setup Problems
Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments
Long Full Table Scans
High Amounts of Recursive (SYS) SQL
Deployment and Migration Errors

STATISTICS_LEVEL:-
STATISTICS_LEVEL specifies the level of collection for database and operating system statistics. The parameter can be set at the ALTER SESSION, ALTER SYSTEM or init.ora file level.
PARAMETERS:TYPICAL: Default Value. Collection of all major statistics required for database
self management.
Adequate for most environments.
ALL: Additional statistics are added to the set of statistics collected with the
TYPICAL setting
BASIC: Disables the collection of many of the important statistics.
System statistics collected when
STATISTICS_LEVEL =TYPICAL
o Automatic Workload Repository (AWR) Snapshots
o Automatic Database Diagnostic Monitor (ADDM)
o All server-generated alerts
o Automatic SGA Memory Management
o Automatic optimizer statistics collection
o Object level statistics
o End to End Application Tracing (V$CLIENT_STATS)
o Database time distribution statistics (V$SESS_TIME_MODEL and V$SYS_TIME_MODEL)
o Service level statistics
o Buffer cache advisory
o MTTR advisory
o Shared pool sizing advisory
o Segment level statistics
o PGA Target advisory
o Timed statistics
o Monitoring of statistics
System statistics collected when STATISTICS_LEVEL=ALL:
Additional statistics are added to the set of statistics collected with the TYPICAL setting. The additional statistics are:
Timed OS statistics
Plan execution statistics

System statistics collected when STATISTICS_LEVEL=BASIC:
Disables the collection of many of the important statistics required by Oracle
Database features and functionality, including:
o Automatic Workload Repository (AWR) Snapshots
o Automatic Database Diagnostic Monitor (ADDM)
o All server-generated alerts
o Automatic SGA Memory Management
o Automatic optimizer statistics collection
o Object level statistics
o End to End Application Tracing (V$CLIENT_STATS)
o Database time distribution statistics (V$SESS_TIME_MODEL and V$SYS_TIME_MODEL)
o Service level statistics
o Buffer cache advisory
o MTTR advisory
o Shared pool sizing advisory
o Segment level statistics
o PGA Target advisory
o Timed statistics
o Monitoring of statistics
Statistics with ALTER SESSION/ ALTER SYSTEM
When the STATISTICS_LEVEL parameter is modified by ALTER SYSTEM, all advisories or
statistics are dynamically turned on or off, depending on the new value of
STATISTICS_LEVEL.
When modified by ALTER SESSION, the following advisories or statistics are turned
on or off in the local session only. Their system wide state is not changed:
1. Timed statistics
2. Timed OS statistics
3. Plan execution statistics
V$STATISTICS_LEVEL
The V$STATISTICS_LEVEL view displays information about the status of the statistics or advisories controlled by the STATISTICS_LEVEL parameter.
SQL>SELECT SESSION_STATUS||' '||SYSTEM_STATUS||' '||ACTIVATION_LEVEL||' '||STATISTICS_NAME FROM V$STATISTICS_LEVEL;
SQL>ALTER SYSTEM SET STATISTICS_LEVEL =all;
The SESSION_SETTABLE column of V$STATISTICS_LEVEL indicates whether the statistic/advisory can be set at the session level (YES) or not (NO).
SQL>SELECT STATISTICS_NAME, SESSION_SETTABLE FROM V$STATISTICS_LEVEL;
Scalability: - Scalability is a system's ability to process more workload with a proportional increase in system resource usage.
An application is said to be unscalable if it exhausts a system resource to the point where no more throughput is possible when its workload is increased.
Bad scalability may result, due to resource conflicts in the following:
Poor Application Design, Implementation, and Configuration, Incorrect Sizing of
Hardware Components, Limitations of Software Components, Limitations of Hardware
Components
Alert Log File:
It consists of chronological messages and errors. Check the alert log file regularly to:
1. Detect internal errors (ORA-600) and block corruption errors.
2. Monitor database operations.
3. View the non-default initialization parameters.
Remove or trim it regularly.
Controlling the Alert Log File:
BACKGROUND_DUMP_DEST=$ORACLE_HOME/rdbms/log
User Trace Files:
The SQL trace facility provides performance information on individual SQL statements.
A user trace file is useful for SQL tuning.
Server process tracing is enabled or disabled at the session or instance level by:
o ALTER SESSION command
o SET_SQL_TRACE_IN_SESSION procedure
o Initialization parameter SQL_TRACE
Controlled by the initialization parameter USER_DUMP_DEST
ROW SOURCE GENERATOR - The row source generator receives the optimal plan from the
optimizer. It outputs the execution plan for the SQL statement. The execution plan
is a collection of row sources structured in the form of a tree. Each row source
returns a set of rows for that step.
SQL EXECUTION ENGINE-SQL execution is the component that operates on the execution
plan associated with a SQL statement. It then produces the results of the query.
Each row source produced by the row source generator is executed by the SQL
execution engine
UTLBSTAT and UTLESTAT are utilities for gathering statistics on SQL statements. In UTLBSTAT the B means Begin and in UTLESTAT the E means End.
Execution of the UTLBSTAT.SQL script:
Suppose we execute the UTLBSTAT.SQL script around 10 o'clock in the morning. It will internally execute SQL statements like "create table ..." and will create a table, say T1, and will collect the statistics into T1 by executing statements like "select * from v$viewname". The table T1 is called the begin table.
Execution of the UTLESTAT.SQL script:
Suppose we execute this script around 11 o'clock in the morning. It will internally execute SQL statements like "create table ..." and will create a table, say T2, and will collect the statistics into T2 by executing statements like "select * from v$viewname". The table T2 is called the end table. It will then take the difference of T1 and T2 and generate a report with the file name report.txt.
The DBA uses this script for finding problems. This utility was first added in Oracle 7. Its modified version, called STATSPACK, appeared in Oracle 8i; it does the same job as UTLBSTAT/UTLESTAT but in a modified way.
Oracle 10g introduced AWR and ADDM.
AUTOMATIC WORKLOAD REPOSITORY (AWR) - AWR does the same kind of job as UTLBSTAT.SQL and UTLESTAT.SQL: it collects, processes and maintains statistics from the V$ views, but this is done automatically; you do not have to do anything, which is why it is called automatic. It automatically captures the workload and stores the statistics in a set of background tables; once the statistics are stored, a report can be generated over them using ADDM. Oracle Database periodically collects statistical summaries (snapshots) into the Automatic Workload Repository (AWR), residing in the SYSAUX tablespace. These snapshots are stored in this repository for a set time (a week by default) before they are purged.
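A snapshot can also be taken manually and a report generated from two snapshots (a sketch; the report script prompts for the begin and end snapshot IDs):
SQL>EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT
SQL>@$ORACLE_HOME/rdbms/admin/awrrpt.sql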
SQL TUNING ADVISOR- allows a quick and efficient technique for optimizing SQL
statement without modifying any statement.
SQL ACCESSES ADVISOR-provides advice on materialized views, indexes, and
materialized view logs
END TO END APPLICATION TRACING- identifies excessive workloads on the system by
specific user, service, or application component.
SERVER-GENERATED ALERTS - automatically provide notifications when impending problems are detected.
AUTOMATIC DATABASE DIAGNOSTIC MONITOR - ADDM is a tool. It runs queries against the AWR tables and provides you an intelligent report. In other words, ADDM examines data captured in AWR and performs analysis to determine the major issues on the system on a proactive basis. ADDM also recommends solutions and quantifies expected benefits.
PROBLEMS DETECTED BY ADDM: Problems detected by ADDM include the following:
CPU bottlenecks
Poor connection management
Excessive parsing
Lock contention
I/O capacity
Undersizing of Oracle memory structures such as the PGA, buffer cache, or log buffer
High-load SQL statements
High PL/SQL and Java time
RAC-specific issues
ADDM ANALYSIS-An ADDM analysis is performed every time an AWR snapshot is taken
and the results are saved in the database.
You can view the results of the analysis using Oracle Enterprise Manager or by
viewing a report in a SQL*Plus session.
ADDM provides the following benefits:
Automatic performance diagnostic report every hour by default
Problem diagnosis based on decades of tuning expertise
Time-based quantification of problem impacts and recommendation benefits
Identification of root cause, not symptoms
Recommendations for treating the root causes of problems
Identification of non-problem areas of the system
Minimal overhead to the system during the diagnostic process
Diagnosing Database Performance Issues with ADDM:
SQL>@$ORACLE_HOME/rdbms/admin/addmrpt.sql
Enter value for begin_snap: 137
Enter value for end_snap: 141
Enter value for report_name:
ADDM does a top-down system analysis with every AWR snapshot (hourly by default) and reports its findings on the Oracle Enterprise Manager Home page.
NOTE: V$ views are the performance information source. They are based on X$ tables and are listed in V$FIXED_TABLE. X$ tables are not usually queried directly; they are dynamic and constantly changing, their names are abbreviated and obscure, and they are populated at startup and cleared at shutdown.

HOW WE EXECUTE UTLBSTAT.SQL AND UTLESTAT.SQL AND GATHER STATISTICS:
$>export ORACLE_SID=diksha
$>cd $ORACLE_HOME/rdbms/admin
$>sqlplus / as sysdba
SQL>@utlbstat.sql
This creates the begin table and preserves the data. After some time we run the second script to get the report:
SQL>@utlestat.sql
SQL>exit
$>vi report.txt
STATSPACK UTILITY-
$>cd $ORACLE_HOME/rdbms/admin
$>ls -l sp*.sql
(it will show the following files; sp means StatsPack)
spcreate.sql   (runs spcusr, spctab and spcpkg)
spcusr.sql     (c - create, usr - creates the PERFSTAT user)
spctab.sql     (tab - creates the tables)
spcpkg.sql     (pkg - creates the package)
spdrop.sql, spdtab.sql, spdusr.sql   (drop scripts)
sppurge.sql, sptrunc.sql   (for removing the old statistics)
spreport.sql   (to generate the report, similar to report.txt)
$>sqlplus / as sysdba
SQL>@spcreate   (it creates the tables for the statistics)
If we type the following:
SQL>sho user   (it will display the perfstat user)
SQL>select * from tab;   (it will show all tables)
At this time:
SQL>desc STATS$SGA
SQL>SELECT SNAP_ID||' '||NAME||' '||VALUE FROM STATS$SGA;
It will show "no rows selected".
To get the first statistics we run the following procedure:
SQL>EXEC STATSPACK.SNAP
SQL>SELECT SNAP_ID||' '||NAME||' '||VALUE FROM STATS$SGA;
Now it will show the result. For collecting the statistics we take many snapshots.
Another method:
SQL>variable x number
SQL>exec :x := statspack.snap
SQL>print x
SQL>variable j number
SQL>exec dbms_job.submit (:j, 'begin statspack.snap; end;', sysdate, 'sysdate+1/1440')
SQL>print j
SQL>desc dba_jobs
SQL>select job, what from user_jobs;   (it gives the job schedule)
SQL>exit
$>sqlplus perfstat/x
SQL>SELECT SNAP_ID||' '||NAME||' '||VALUE FROM STATS$SGA;
It will display the results generated by the scheduler. For the report we execute:
SQL>@spreport
(After executing this script it will ask some questions and also ask for the name of the report. You can provide any name, such as r1.txt.)
SQL>exit
$>vi r1.txt
Note: Oracle says you need not rely only on the report; you can run queries over the STATSPACK tables according to your interest.
HOW WE START ENTERPRISE MANAGER:-
$>export ORACLE_SID=HERSH
$>cat /etc/hosts
It will show entries of the following type:
99.99.33.101   atpl33101.server.com   atpl33101
Copy this entry.
On the Windows client go to C:\windows\system32\drivers\etc; there you will find a hosts file. Open this file in WordPad and paste the above entry.
$> export ORACLE_SID=HERSH
$>emctl start dbconsole
After typing this command it will show a uniform resource locator (URL) like
http://atpl33101.server.com/em/console/aboutApplication. Copy this URL and paste it into Internet Explorer; Enterprise Manager will now open.
OPTIMIZER - The optimizer uses internal rules or costing methods to determine the most efficient way of producing the result of the query. The output from the optimizer is a plan that describes an optimum method of execution. The Oracle server provides two methods of optimization:
Cost-based optimizer (CBO)
Rule-based optimizer (RBO)
GOALS OF THE OPTIMIZER-
BEST THROUGHPUT => ALL_ROWS
BEST RESPONSE TIME => FIRST_ROWS_n, n = 10, 100, 1000
The optimizer is the part of Oracle which chooses the best possible execution plan for a given statement. For any statement Oracle has many ways to execute it; the same statement could be executed in 14 different ways in Oracle 6. Oracle 6 introduced an optimizer called the RULE BASED OPTIMIZER (RBO). It is based on heuristic rules (a common-sense rule, or set of rules, intended to increase the probability of solving some problem), i.e. on the operations involved. Later, Oracle 7 introduced a new optimizer called the COST BASED OPTIMIZER (CBO). The RBO was not actually removed: it is still available in Oracle 9i, 10g and 11g, only for backward compatibility reasons; you may activate it and run it, but by default the CBO is used. The difference between CBO and RBO is that RBO is based on 14 ranks, while CBO is based on the actual time taken to execute the statement and the actual CPU and memory resources used. Whenever you try to execute a statement, Oracle generates all the possible execution plans and finally chooses one of them which is best. The RBO chooses the plan which is lowest as per the ranks, and the ranks are stored in the dictionary. The CBO generates all the execution plans, evaluates the cost of each one and finally chooses the plan which is lowest in cost; lowest cost means minimum resource usage and elapsed time. There are two goals of the CBO:
BEST THROUGHPUT
BEST RESPONSE TIME
Best throughput means minimum cost while executing all the rows of the SQL statement, and best response time means minimum cost while executing the first few rows. The number of rows is specified by the parameter FIRST_ROWS_n, where n can be 10, 100 or 1000. If we take 10 it will evaluate the cost of the first 10 rows only; if we take 100 it will evaluate the cost of 100 rows. The advantage of this approach is that if a table has a huge amount of data, millions of records, then evaluating the execution cost of all the millions of records could be very slow. With BEST RESPONSE TIME it does not evaluate the cost of all rows, only of the first few rows.
Best throughput is the default option for the CBO; best response time is not preferred for various reasons: one reason is that best response time is used only for simple queries, it is not preferred for complex queries and is not used for other DML statements. Best throughput is used for all DML statements and also for complex queries.
Which optimizer is activated is determined by the parameter OPTIMIZER_MODE, which can be given in the init.ora file or the spfile.
INSTANCE LEVEL (INIT.ORA): (priority 3)
OPTIMIZER_MODE=CHOOSE / RULE / ALL_ROWS / FIRST_ROWS_n
SESSION LEVEL: (priority 2)
ALTER SESSION SET OPTIMIZER_MODE=CHOOSE / RULE / ALL_ROWS / FIRST_ROWS_n
STATEMENT LEVEL (OPTIMIZER HINT): (priority 1)
SELECT /*+ ALL_ROWS */ ename, sal FROM EMP;
RULE OF CHOOSE:-
When CHOOSE is active, the optimizer uses the statistics stored in the dictionary's background tables. If statistics are available then it chooses the CBO; if statistics are not available then it chooses the RBO.
DICTIONARY STATISTICS:-
ANALYZE (7, 8, 8i):
SQL> ANALYZE TABLE EMP COMPUTE/ESTIMATE STATISTICS;
SQL> ANALYZE INDEX I1 COMPUTE/ESTIMATE STATISTICS;
SQL> ANALYZE INDEX I1 VALIDATE STRUCTURE;
DBMS_STATS (a package to gather the statistics; 8i, 9i, 10g):
SQL>EXEC DBMS_STATS.GATHER_DATABASE_STATS ()
SQL>EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SCOTT')
SQL>EXEC DBMS_STATS.GATHER_TABLE_STATS ('SCOTT','EMP')
SQL>EXEC DBMS_STATS.GATHER_INDEX_STATS ('SCOTT','I1')
SQL>EXEC DBMS_STATS.DELETE_SCHEMA_STATS ('SCOTT')
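The gathering procedures also accept options; a minimal sketch (the sampling and cascade settings are only examples):
SQL>EXEC DBMS_STATS.GATHER_TABLE_STATS ('SCOTT','EMP', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, cascade=>TRUE)
Here cascade=>TRUE gathers statistics on the table's indexes in the same call.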

DIAGNOSTIC TOOLS:-
EXPLAIN PLAN
SQL TRACE
TKPROF (Transient Kernel Profiler)
AUTOTRACE
SQL>conn scott/tiger
SQL>desc user_tables
SQL>select table_name||' '||tablespace_name||' '||num_rows||' '||blocks||' '||chain_cnt||' '||avg_row_len from user_tables;
It gives some statistics.
SQL>EXEC DBMS_STATS.DELETE_SCHEMA_STATS ('SCOTT')
It will delete all the statistics related to SCOTT.
SQL>select table_name||' '||tablespace_name||' '||num_rows||' '||blocks||' '||chain_cnt||' '||avg_row_len from user_tables;
Now it will not show any statistics. At this moment CHOOSE will choose the RBO because dictionary statistics about the tables are not available. To gather the statistics we execute the following:
SQL>EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SCOTT')
SQL>select table_name||' '||tablespace_name||' '||num_rows||' '||blocks||' '||chain_cnt||' '||avg_row_len from user_tables;
Again it will show the statistics, and this time CHOOSE will choose the CBO.
To cross-check this there is a way available, but for it we need the EXPLAIN PLAN command, which works once you have created the plan table. To create the plan table proceed as follows:
SQL>sho user
It will show SCOTT.
SQL>@$ORACLE_HOME/rdbms/admin/utlxplan.sql   (this script creates the plan table)
SQL>desc plan_table   (nothing will be in plan_table right now)
SQL>select * from plan_table;   (it will show "no rows selected")
SQL>explain plan for select * from EMP, dept where emp.deptno=dept.deptno;
SQL>select * from plan_table;
Now it will display something, but the raw output is not user friendly; we can get this result in a better way as follows:
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'TEST' FOR SELECT ename, job, sal, dname FROM EMP, dept WHERE emp.deptno=dept.deptno AND NOT EXISTS (SELECT * FROM salgrade WHERE emp.sal BETWEEN losal AND hisal);
(We can give any name in place of 'TEST')
SQL>CREATE VIEW test AS
SELECT id, parent_id,
lpad(' ', 2*(level-1))||operation||' '||options||' '||object_name||' '||decode(id, 0, 'Cost = '||position)||' '||optimizer||' '||cpu_cost||' '||io_cost||' '||bytes QP
FROM plan_table
START WITH id = 0 AND statement_id = 'TEST'
CONNECT BY PRIOR id = parent_id AND statement_id = 'TEST';
NOTE: this query creates a view. If you describe this view:
SQL>desc test   (it will show 3 columns: ID, PARENT_ID and QP)
SQL>select ID||' '||PARENT_ID||' '||SUBSTR (QP, 1, 40) QP FROM test;
It will show the result.
NOW WE CAN RUN THIS QUERY WITH CHOOSE:
SQL> alter session set optimizer_mode=choose;
SQL>delete plan_table;
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'TEST' FOR SELECT ename, job, sal, dname FROM EMP, dept WHERE emp.deptno=dept.deptno AND NOT EXISTS (SELECT * FROM salgrade WHERE EMP.sal BETWEEN losal AND hisal);
SQL>select ID||' '||PARENT_ID||' '||SUBSTR (QP, 1, 40) QP FROM test;
This time CHOOSE will choose the CBO because dictionary statistics about the tables are available. Now suppose that we delete the statistics:
SQL> EXEC DBMS_STATS.DELETE_SCHEMA_STATS ('SCOTT')
It will delete the statistics; now we run the above query again:
SQL>delete plan_table;
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'TEST' FOR SELECT ename, job, sal, dname FROM EMP, dept WHERE emp.deptno=dept.deptno AND NOT EXISTS (SELECT * FROM salgrade WHERE EMP.sal BETWEEN losal AND hisal);
SQL>select ID||' '||PARENT_ID||' '||SUBSTR (QP,1,40) QP FROM test;
Now in this case CHOOSE will choose the RBO because statistics about the tables are not available. If we set:
SQL>ALTER SESSION SET OPTIMIZER_MODE=RULE;
then the optimizer will be the RBO, and if we set:
SQL>ALTER SESSION SET OPTIMIZER_MODE=ALL_ROWS;
then the optimizer will be the CBO.
After the statement has executed, you can display the plan by querying the
V$SQL_PLAN view.
V$SQL_PLAN contains the execution plan for every statement stored in the cursor
cache. Its definition is similar to the PLAN_TABLE.
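From 10g onwards the plan of the most recently executed statement in the session can also be shown directly from the cursor cache (a sketch):
SQL>SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR());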
We can also display the result as follows:
SQL>@$ORACLE_HOME/rdbms/admin/utlxpls.sql
It will show the explain plan, which looks like this:
SQL>EXPLAIN PLAN FOR DELETE FROM EMP20 WHERE ROWID NOT IN (SELECT MAX(ROWID) FROM
EMP20 GROUP BY empno);
------------------------------------------------------------------------------------
| Id | Operation              | Name     | Rows | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------------------
|  0 | DELETE STATEMENT       |          | 227K | 3104K |       | 3527   (2) |
|  1 |  DELETE                | EMP20    |      |       |       |            |
|  2 |   HASH JOIN ANTI       |          | 227K | 3104K | 4256K | 3527   (2) |
|  3 |    TABLE ACCESS FULL   | EMP20    | 229K | 1568K |       |  967   (2) |
|  4 |    VIEW                | VW_NSO_1 |   14 |    98 |       |            |
|  5 |     SORT GROUP BY      |          |   14 |   140 |       |  996   (4) |
|  6 |      TABLE ACCESS FULL | EMP20    | 229K | 2240K |       |  967   (2) |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
The above script displays the plan table output for serial processing, and the following script displays the plan table output including parallel execution:
SQL>@$ORACLE_HOME/rdbms/admin/utlxplp.sql
We can also use the following way:
SQL>SELECT PLAN_TABLE_OUTPUT FROM TABLE (DBMS_XPLAN.DISPLAY());
DBMS_XPLAN.DISPLAY - This table function accepts options for displaying the plan table output. You can specify:
A plan table name
A statement id
A format option that determines the level of detail: BASIC, SERIAL, TYPICAL, ALL
Example: SELECT PLAN_TABLE_OUTPUT FROM TABLE (DBMS_XPLAN.DISPLAY ('MY_PLAN_TABLE', 'st1', 'TYPICAL'));
This means that choose will be choosing CBO if the dictionary statistics about
tables are available and if dictionary statistics are not available then it will
choose RBO.
Explain plan gives the execution plan details about type of scanning, type of
optimizer etc.

SQL TRACE:-
There is one more utility available here called SQL trace. This utility gives you all the details you get in EXPLAIN PLAN and, in addition, more information. SQL trace generates the following statistics for each statement:
Parse, execute and fetch counts: PRSCNT, EXECNT, FCHCNT
CPU and elapsed times: PRSCPU, EXECPU, FCHCPU and PRSELA, EXEELA, FCHELA
Physical reads and logical reads: PRSDSK, EXEDSK, FCHDSK and PRSQRY, EXEQRY, FCHQRY
Number of rows processed
Misses on the library cache
Username under which each parse occurred, each commit and rollback
EXECUTION PLAN (EXPLAIN PLAN)
CPU waits times means how much time it has waited for CPU resources. Because it is
not free and elapsed times means how much time took finally to processing.
Physical and logical reads means is that disk read and data block read which
already done there. Whatever disk reads are done by the query by select statement
the record writes logical reads. Insert update and delete are called physical
reads.
Generally SQL tracing is not used because it generates too much of the trace files
for all the activities. Which become over head?
SWITCHING TRACE ON AND OFF
Instance level:
SQL_TRACE = [TRUE | FALSE]
Session level using one of:
ALTER SESSION SET sql_trace= [TRUE | FALSE];
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'my_trace_id';
EXECUTE sys.dbms_session.set_SQL_trace ([TRUE | FALSE]);
For a different user session:
SELECT sid, serial# FROM v$session;
EXECUTE sys.dbms_system.set_sql_trace_in_session (session_id, serial_id, [TRUE |
FALSE]);
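From Oracle 10g onwards, DBMS_MONITOR is the supported way to do the same thing; a minimal sketch, using the same example SID 76 and serial# 346 shown later in this section:
SQL> EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE (session_id => 76, serial_num => 346, waits => TRUE, binds => FALSE)
SQL> EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE (session_id => 76, serial_num => 346)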
HOW TO ENABLE OR DISABLE SQL TRACE:-
Suppose we have two windows, one for 99.99.33.101 and one for 99.99.33.102.
$1> export ORACLE_SID=neelu
$1> sqlplus / as sysdba
SQL> !tty
It will show your terminal number.
SQL> ALTER SESSION SET TRACEFILE_IDENTIFIER = 'saurabh';
Go to the second window and connect to the same database.
$2>export ORACLE_SID=neelu
$2>sqlplus /as sysdba
SQL>sho parameter diag
SQL>sho parameter dump
SQL>exit
Switch on the trace. First clean up the trace file directory:
$2> cd /app/oracle/.../trace
$2> rm *     (delete all trace files inside the user dump destination)
$2> sqlplus / as sysdba
SQL> select sid||' '||serial#||' '||terminal||' '||username from v$session;
SQL> EXECUTE sys.dbms_system.set_SQL_trace_in_session (76, 346, TRUE)
Now, whatever you do in the first session, a trace file will be generated for each
command, with 'saurabh' in its name.
Go to first window
$1>sqlplus /as sysdba
SQL> select * from EMP;          (for this a trace file will be generated)
SQL> create table EMP1 as select * from EMP;
SQL> delete EMP1;
SQL> exit
Go to the second window:
$2> sqlplus / as sysdba
Now, to disable the tracing, run the following:
SQL> EXECUTE sys.dbms_system.set_sql_trace_in_session (76, 346, FALSE)
SQL> EXIT
Now go to the trace file directory (via user_dump_dest). You can read these files,
but they are not user friendly. For this purpose we use the following utility,
called TKPROF (Transient Kernel Profiler).

TKPROF is used to:
Format the contents of the trace file and place the output into a readable output file
Determine the execution plans of SQL statements
Create a SQL script that stores the statistics in the database
$>tkprof neelu_ora_21159_saurabh.trc x1.txt
$>tkprof neelu_ora_21159_saurabh.trc x2.txt sys=no
In this case SYS (recursive) statements are not formatted; only user statements are
formatted.
$> tkprof neelu_ora_21159_saurabh.trc x3.txt sys=no insert=x1.sql
In this case it also generates a SQL script which you can run as the user to load
the statistics into a table.
$> vi x1.txt
$> vi x2.txt
$> sqlplus scott/tiger
SQL> @x1.sql
(It will create tkprof_table)
SQL> desc tkprof_table
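As a quick check you can then query the loaded statistics. This is only a sketch; the exact column names can vary by version, so confirm them with the DESC above:
SQL> select sql_statement, parse_cnt, exe_count, fetch_count from tkprof_table;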
AUTO TRACE:-
There is another utility available for tracing called autotrace. It is a very quick
utility and provides the explain plan immediately. Oracle autotrace supports the
following options:
set autotrace on                 - enables all options
set autotrace on explain         - displays the returned rows and the explain plan
set autotrace on statistics      - displays the returned rows and the statistics
set autotrace traceonly explain  - displays the execution plan for a SELECT statement
without actually executing it
set autotrace traceonly          - displays the execution plan and statistics without
displaying the returned rows; use this option when a large result set is expected
Oracle autotrace is so easy to use that it should be the first tracing utility used
for most SQL performance tuning issues. Tkprof can be used for more detailed
analysis.
Create the plan table:
SQL> @utlxplan.sql
Create the PLUSTRACE role:
SQL> @plustrce.sql
Set autotrace:
SQL> set autotrace traceonly explain
SQL> set autotrace [on | traceonly]
$> export ORACLE_SID=neelu
$> sqlplus / as sysdba
SQL> @$ORACLE_HOME/sqlplus/admin/plustrce.sql
SQL> grant plustrace to scott;
SQL> exit
$> sqlplus scott/tiger
SQL> set autotrace on
Now run any query; it provides the result along with the explain plan.
SQL> select ename from EMP where empno = 12;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP'
   2    1     INDEX (UNIQUE SCAN) OF 'PK_EMP' (UNIQUE)

Statistics
----------------------------------------------------------
         83  recursive calls
          0  db block gets
         21  consistent gets
          3  physical reads
          0  redo size
        221  bytes sent via SQL*Net to client
        368  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed
Autotrace gives the result very quickly, but since the formatting is done on-line it
becomes an overhead.
The SGA (shared pool, java pool, buffer cache, buffer pools) is used by every user.
Suppose I am executing a statement and you are also executing a statement, and both
statements go into the same memory area; this is a shared memory area. If your
statement came later and overwrote mine, my process would be terminated. So there
must be a mechanism that protects my process and lets many processes work without
interfering with the work of others. For this purpose internal locks are available;
we call these locks LATCHES. A latch is a protective mechanism in memory that
protects shared data structures against destructive concurrent changes. Every buffer
pool has its own latch:
1 for BUFFER_POOL_KEEP
1 for BUFFER_POOL_RECYCLE
1 for the DEFAULT pool
Each buffer pool works with its own latch.
A latch is a low-level internal lock used by Oracle to protect memory structures.
There are a number of latch-related dynamic performance views. These include:
V$LATCH - lists all latches available in the system
V$LATCHNAME - a straightforward translation from latch number to name
V$LATCH_PARENT / V$LATCH_CHILDREN - have nearly identical columns to V$LATCH, but
de-aggregated
V$LATCH_MISSES - details on where latch acquisition attempts have been missed
V$LATCHHOLDER - contains a SID and PID for each held latch
V$EVENT_HISTOGRAM - contains a histogram of waits, sorted by the event that caused
them (including latches) and how many milliseconds they lasted
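A quick way to see which latches are suffering, sketched against V$LATCH:
SQL> select name, gets, misses, sleeps
     from v$latch
     where misses > 0
     order by sleeps desc, misses desc;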
Response time means how much time it takes from submitting a request until the
result is returned to you.
Wait Events
A server process can wait for:
A resource to become available, such as a buffer or a latch
An action to complete, such as an I/O
More work to do, such as waiting for the client to provide the next SQL statement to
execute
Events that indicate that a server process is waiting for more work are known as
idle events.
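To see where the time is actually being spent, you can look at the system-wide wait events; a sketch (the idle-event filter shown here is illustrative, not exhaustive):
SQL> select event, total_waits, time_waited
     from v$system_event
     where event not like 'SQL*Net%'
     order by time_waited desc;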
Rules for interpreting the statistics and tracing results:

If the parse figures (CPU, elapsed, physical) are higher than execute and fetch,
then do dictionary cache (part of the shared pool) tuning.
Parsing is the process of handing the user process's statement over to the Oracle
server process. During the parsing phase Oracle does many activities:
SYNTAX CHECK
PRIVILEGE CHECK
DATA DICTIONARY LOOKUP
LOADS STATEMENT IN MEMORY
PROVIDES PARSE LOCK
LOOKS FOR OPTIMIZER
ROUTES TO REMOTE NODE, IF NEEDED
Then it will execute. Shared pool means that once a statement is loaded and parsed,
it is available for other sessions to execute. Execution is not part of parsing;
after parsing, execution takes place. If one session has already done the parsing,
another session will not do the parsing again, it will just execute. Statements get
shared because the same statement is parsed once and executed several times.

If the parse count is very high, then do OPEN_CURSORS tuning.
OPEN_CURSORS is the parameter which determines how many cursors can remain open
concurrently on a per-session basis.
SQL> sho parameter open
It will show open_cursors, open_links, open_links_per_instance, etc. Suppose the
open_cursors value here is 300; this means that up to 300 cursors can remain open in
memory for each session. If the open_cursors parameter is small and the session
opens more cursors than that, the previously opened cursors are closed to let the
new statements in. Parsing then increases, because statements whose cursors were
closed must be reparsed when they are executed again. There can be two reasons why
reparsing happens:
1. The shared pool is too small and not able to keep the statements in memory, so
older statements get invalidated to make room for the latest statements. The
statements which are invalidated have to be executed again, so Oracle has to reparse
them. This can be one cause of reparsing.
2. Even if the shared pool is large enough to keep the statements in memory, the
open_cursors parameter may be small, so no more cursors than that can be kept open
per session. Once the number of cursors exceeds the open_cursors limit, the least
recently used cursors are closed and invalidated, and when the same statements are
needed again Oracle has to reparse them.
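To see how close each session is to the open_cursors limit, a sketch joining V$SESSTAT and V$STATNAME:
SQL> select s.sid, s.username, st.value as open_cursors_now
     from v$sesstat st, v$statname n, v$session s
     where st.statistic# = n.statistic#
       and n.name = 'opened cursors current'
       and s.sid = st.sid
     order by st.value desc;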

Stages of SQL Processing
The solutions to meet the requirement are:
Increase the shared pool
Increase open_cursors
Pin the object in memory
SQL> desc v$db_object_cache          #sysdba
PINS means the object is currently being executed; EXECUTIONS means it has already
been executed; KEPT means whether it is permanently pinned in memory.
Shared pool checking:

SQL> select owner||' '||name||' '||sharable_mem||' '||loads||' '||executions||' '||locks||' '||pins
     from v$db_object_cache where owner='SCOTT';          #sysdba
SQL> select sum (sharable_mem) from v$db_object_cache;    #sysdba
It will show the shared pool size which is currently being used; you can change
these values directly in the init.ora file.
SQL> sho parameter share          #sysdba
It will show how much shared pool is allocated. We can understand it in an easy way:
SQL> create or replace procedure proc1 (eno number)          #scott
     is
       en emp.ename%type;
       s  emp.sal%type;
     begin
       select ename, sal into en, s from EMP where empno = eno;
       dbms_output.put_line (en||' '||s);
     end;
     /
For pinning the object we run the following script:
SQL> @$ORACLE_HOME/rdbms/admin/dbmspool.sql          #sysdba
This script creates a package called DBMS_SHARED_POOL.
SQL> exec dbms_shared_pool.keep ('SCOTT.PROC1')
It will pin the object proc1 which was created above; V$DB_OBJECT_CACHE will then
show KEPT=YES.
SQL> exec dbms_shared_pool.unkeep ('SCOTT.PROC1')
After the instance is shut down, the object is automatically removed from memory, so
pinning does not survive a restart.
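Because pinning does not survive a restart, one common approach (a sketch, not something this document prescribes) is a database startup trigger that re-pins the objects you care about:
SQL> create or replace trigger pin_objects_on_startup
     after startup on database
     begin
       dbms_shared_pool.keep ('SCOTT.PROC1', 'P');
     end;
     /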
SQL> exec proc1 (7900)          #scott
SQL> select owner||' '||name||' '||sharable_mem||' '||loads||' '||executions||' '||locks||' '||pins
     from v$db_object_cache where owner='SCOTT' and name='PROC1';          #sysdba
It will show LOADS = 1 and EXECUTIONS = 1.
SQL> exec proc1 (7900)          #scott
SQL> select owner||' '||name||' '||sharable_mem||' '||loads||' '||executions||' '||locks||' '||pins
     from v$db_object_cache where owner='SCOTT' and name='PROC1';          #sysdba
Now it will show LOADS = 1 and EXECUTIONS = 2.

If LOADS (you can think of it as parses) < EXECUTIONS, this means performance is
good.
If LOADS = EXECUTIONS, this means it is only breaking even, which is not good; at
this point you should pin the object.
The DBMS_SHARED_POOL.KEEP procedure takes the object name and a flag:
exec dbms_shared_pool.keep ('SCOTT.PROC1', 'P')
In place of P we can put:
T - the object is an object type
R - trigger
C - cursor
P - package, procedure, or function
SQL> create table test (a number);
SQL> create sequence seq;
SQL> create or replace procedure proc2 (i number)
     is
     begin
       for a in 1..i loop
         insert into test values (a);
       end loop;
       commit;
     end;
     /
SQL> exec proc2 (1000000)

If the sum of the physical reads (disk) across parse, execute and fetch is more than
about 10% of the sum of the logical reads (consistent + current), then the hit ratio
for finding the data in memory is too low.
Physical means an actual data block read from disk.
Consistent means a consistent-mode read (reading done by a query, i.e. data that is
already committed and stored).
Current means a current-mode read (reading done on behalf of DML operations, that is
insert, update, and delete).
This says that executions are many but the data actually found in memory is less;
memory here means the database buffer cache.
The solution is to increase DB_BLOCK_BUFFERS or DB_CACHE_SIZE; a sample query to
check this ratio is shown after this list.

If the fetch count is about twice the number of fetched rows, it means implicit
cursors are being used heavily; create explicit cursors and always use BULK COLLECT.
If execution (consistent) is high while execution (rows) and execution (current) are
low, your table probably needs indexing.
Consistent means data which is already committed in the database and which you are
selecting.
Current means data that is read into memory for modification. If a table has few
row-modification operations and is mostly queried, then you should create an index
on it.
An excellent response time is about 2.5 seconds, and execution (CPU) time should
always be less than 1 second.
If your database does not meet the above benchmarks, then stop all unwanted
processes on the server machine. This will reduce the load on the server so that
maximum resources and memory are available for your database. Set priorities using
Resource Manager; Resource Manager is a utility which lets you give priority to a
specific application. Otherwise, upgrade or move to a higher configuration.
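For instance, a minimal sketch of switching the active resource plan (DEFAULT_PLAN is only an example name; use whatever plan exists in your database):
SQL> show parameter resource_manager_plan
SQL> alter system set resource_manager_plan = 'DEFAULT_PLAN';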

Tune or rewrite statements for better performance:
Allow full table scans where appropriate
Use NOT EXISTS in place of NOT IN
Use FORALL and BULK COLLECT
Use the NOCOPY hint
Use native compilation
A histogram shows how the data is distributed in a column.
Types of performance bottleneck:

MEMORY CONTENTION - a situation in which two different programs, or two parts of a
program, try to access the same block of memory at the same time.
DISK I/O CONTENTION - a situation occurring when multiple processes try to access
the same disks and the requests outrun the response time of the drives. This happens
because most disks are restricted in the amount of data they can transfer and the
number of accesses (I/O operations) they can handle each second. Disk contention has
become a more prominent problem as CPU speed increased dramatically over the past 15
years, putting even greater demands on disk-based data storage and making it the
most significant operational constraint of the modern data center. Disk I/O
contention can come from memory management, poor distribution of tablespaces and
files across disks, or a combination of both.
CPU CONTENTION - although the UNIX kernel usually allocates CPU resources
effectively, many processes compete for CPU cycles and this can cause contention. If
you install Oracle in a multiprocessor environment, there might be a different
amount of contention on each CPU.
ORACLE RESOURCE CONTENTION - contention is also common for Oracle resources such as
locks and latches.
SOME IMPORTANT OPERATING SYSTEM TOOLS FOR TUNING
vmstat - use the vmstat command to view process, virtual memory, disk, trap, and CPU
activity, depending on the switches that you supply with the command. Run one of the
following commands to display a summary of CPU activity six times, at five-second
intervals:
$ vmstat -S 5 6          # HP-UX
$ vmstat 5 6             # Linux
The following is sample output of this command


procs      memory           page                      disk          faults       cpu
 r b w   swap  free  si so pi po fr de sr  f0 s0 s1 s3   in  sy  cs  us sy  id
 0 0 0   1892  5864   0  0  0  0  0  0  0   0  0  0  0   90  74  24   0  0  99
 0 0 0  85356  8372   0  0  0  0  0  0  0   0  0  0  0   46  25  21   0  0 100
 0 0 0  85356  8372   0  0  0  0  0  0  0   0  0  0  0   47  20  18   0  0 100
 0 0 0  85356  8372   0  0  0  0  0  0  0   0  0  0  2   53  22  20   0  0 100
 0 0 0  85356  8372   0  0  0  0  0  0  0   0  0  0  0   87  23  21   0  0 100
 0 0 0  85356  8372   0  0  0  0  0  0  0   0  0  0  0   48  41  23   0  0 100
The w sub column, under the procs column, shows the number of potential processes
that have been swapped out and written to disk. If the value is not zero, then
swapping occurs and the system is short of memory.
The si and so columns under the page column indicate the number of swap-ins and
swap-outs per second, respectively. Swap-ins and swap-outs should always be zero.
The sr column under the page column indicates the scan rate. High scan rates are
caused by a shortage of available memory.
The pi and po columns under the page column indicate the number of page-ins and
page-outs per second, respectively. It is normal for the number of page-ins and
page-outs to increase. Some paging always occurs even on systems with sufficient
available memory.
sar - use the sar (system activity reporter) command to display cumulative activity
counters in the operating system, depending on the switches that you supply with the
command.
http://docs.oracle.com/cd/B28359_01/server.111/b32009/tuning.htm#BABDECCF
On an HP-UX system, the following command displays a summary of I/O activity ten
times, at ten-second intervals:
$> sar -b 10 10
$> sar 5 3          # Linux
$> sar -u           # monitor CPU usage
$> sar -d           # monitor disk usage
$> sar -b           # display a summary of I/O activity
The following example shows the output of this command:
13:32:45 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
13:32:55       0      14     100       3      10      69       0       0
13:33:05       0      12     100       4       4       5       0       0
13:33:15       0       1     100       0       0       0       0       0
13:33:25       0       1     100       0       0       0       0       0
13:33:35       0      17     100       5       6       7       0       0
13:33:45       0       1     100       0       0       0       0       0
13:33:55       0       9     100       2       8      80       0       0
13:34:05       0      10     100       4       4       5       0       0
13:34:15       0       7     100       2       2       0       0       0
13:34:25       0       0     100       0       0     100       0       0
Average        0       7     100       2       4      41       0       0
The sar output provides a snapshot of system I/O activity at a given point in time.
If you specify the interval time with more than one option, then the output can
become difficult to read. If you specify an interval time of less than 5 seconds,
then the sar activity itself can affect the output.
IOSTAT- Use the iostat command to view terminal and disk activity, depending on
the switches that you supply with the command. The output from the iostat command
does not include disk request queues, but it shows which disks are busy. This
information can be used to balance Input-Output loads. The following command
displays terminal and disk activity five times, at five-second intervals
$iostat 5 5
The following is sample output of the command on Solaris
          tty         fd0            sd0            sd1            sd3        cpu
 tin tout  Kps tps serv  Kps tps serv  Kps tps serv  Kps tps serv  us sy wt id
   0    1    0   0    0    0   0   31    0   0   18    3   0   42   0  0  0 99
   0   16    0   0    0    0   0    0    0   0    0    1   0   14   0  0  0 100
   0   16    0   0    0    0   0    0    0   0    0    0   0    0   0  0  0 100
   0   16    0   0    0    0   0    0    0   0    0    0   0    0   0  0  0 100
   0   16    0   0    0    0   0    0    2   0   14   12   2   47   0  0  1 98
Use the iostat command to look for large disk request queues. A request queue
shows how long the Input-Output requests on a particular disk device must wait to
be serviced. Request queues are caused by a high volume of Input-Output requests
to that disk or by Input-Output with long average seek times. Ideally, disk
request queues should be at or near zero.
SWAP, SWAPINFO, SWAPON, or LSPS - use the swap, swapinfo, swapon, or lsps command to
report information about swap space usage. A shortage of swap space can stop
processes responding, leading to process failures with "Out of Memory" errors. The
following table lists the appropriate command for each platform:

PLATFORM        COMMAND
AIX             lsps -a
HP-UX           swapinfo -m
LINUX           swapon -s, -V (as root)
SOLARIS         swap -l and swap -s
$> swap -l          # Solaris
swapfile               dev   swaplo   blocks     free
/dev/dsk/c0t3d0s1      32,       25   197592   162136
On Linux systems, use the top, free, and cat /proc/meminfo commands to view
information about swap space, memory, and buffer usage.
Parameter values:-
SQL> select name||' '||value from v$parameter where name like '%P';
SEMAPHORES:- A semaphore guarantees that one thing happens before another thing. A
semaphore is an integer variable with two atomic operations:
wait (also called down, P, or lock)
signal (also called up, V, unlock, or post)
An atomic operation is an operation that, once started, completes logically in an
indivisible way. In this context, being atomic means that if a process calls wait,
no other process can change the semaphore until the operation completes; one task is
allowed to complete before another task starts.
If processes = 150 then the number of semaphores = 300.
You can see the semaphore settings with:
$ ipcs -ls
OR, as root, look at the kernel parameter (for example in /etc/sysctl.conf):
# grep sem /etc/sysctl.conf
It will show output of the following type:
kernel.sem = 250 32000 100 128
In total, four values are set for the semaphore parameter.
Configuring the semaphore kernel parameters on Linux:-
Linux offers two kinds of semaphores:
KERNEL SEMAPHORES - used by kernel control paths
SYSTEM V IPC SEMAPHORES - used by user-mode processes
Linux file to set semaphores: /proc/sys/kernel/sem
SEMMSL 250
SEMMNS 32000
SEMOPM 100
SEMMNI 128
SEMMSL - this parameter defines the maximum number of semaphores per semaphore set.
Default 25, range 1 to MAXINT.
SEMMNS - this parameter defines the total number of semaphores (not semaphore sets)
for the entire Linux system. A semaphore set can have more than one semaphore, and,
as the semget(2) man page explains, values greater than SEMMSL * SEMMNI are
irrelevant. The maximum number of semaphores that can be allocated on a Linux system
is therefore the lesser of SEMMNS and (SEMMSL * SEMMNI). Default 60, range 1 to
MAXINT.
SEMOPM - this parameter defines the maximum number of semaphore operations that can
be performed per semop(2) system call. The semop(2) function provides the ability to
perform operations on multiple semaphores with one semop(2) call. Since a semaphore
set can have at most SEMMSL semaphores, it is often recommended to set SEMOPM equal
to SEMMSL. Default 10, range 1 to MAXINT.
SEMMNI - this parameter defines the maximum number of semaphore sets for the entire
Linux system. Default 10, range 1 to 65535.
With the defaults this means that 10 * 25 * 10 = 2500 concurrent semaphore
operations can be performed.
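For example, a sketch of how these values might be set on Linux (the numbers shown are the commonly quoted Oracle recommendations, not values this document mandates):
# vi /etc/sysctl.conf
kernel.sem = 250 32000 100 128
# /sbin/sysctl -p              (reload the kernel settings)
# cat /proc/sys/kernel/sem     (verify the new values)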
SWAP SPACE:- for finding swap information:
$ free
$ cat /proc/swaps
Swap space in Linux is used when the amount of physical memory (RAM) is full. If
the system needs more memory resources and the RAM is full, inactive pages in
memory are moved to the swap space. While swap space can help machines with a
small amount of RAM, it should not be considered a replacement for more RAM. Swap
space is located on hard drives, which have a slower access time than physical
memory. Swap space can be a dedicated swap partition (recommended), a swap file,
or a combination of swap partitions and swap files. Swap should equal 2x physical
RAM for up to 2 GB of physical RAM, and then an additional 1x physical RAM for any
amount above 2 GB, but never less than 32 MB.
If M = amount of RAM in GB and S = amount of swap in GB, then:
If M < 2 then S = M * 2, else S = M + 2.
Using this formula, a system with 2 GB of physical RAM would have 4 GB of swap,
while one with 3 GB of physical RAM would have 5 GB of swap. Creating a large swap
space partition can be especially helpful if you plan to upgrade your RAM at a
later time.
GROUP MANAGEMENT - /etc/group contains the group names.
PASSWORD MANAGEMENT - /etc/passwd
Journal file system:- a file system in which the hard disk maintains data integrity
in the event of a system crash or if the system is otherwise halted abnormally. The
journal file system (JFS) maintains a log, or journal, of what activity has taken
place in the main data areas of the disk; if a crash occurs, any lost data can be
recreated because updates to the metadata in directories and bit maps have been
written to a serial log. The JFS not only returns the data to the pre-crash
configuration but also recovers unsaved data and stores it in the location it would
have been stored in if the system had not been unexpectedly interrupted.
JFS buffer: Linux uses a RAM buffer, called the JFS buffer, to minimize disk I/O.
With a raw partition we can write input/output directly, from the top of the
cylinder to the bottom of the cylinder. If we use a file system instead, it is
managed by the journal file system (JFS). A raw partition does not have any JFS to
maintain, and that is the reason a raw partition works faster. File system
management is slower because it is maintained by a program loaded into the JFS
buffer; it is not visible to normal users, but it is there, and because this program
manages everything it is an overhead. That is why a file system is slower. Oracle
recommends not using raw partitions and using ASM instead: ASM is easier to manage
than a raw partition and its performance is almost equal to a raw partition, whereas
raw partition management is quite critical.
Some important Linux commands:-
uname -a - shows all information about the machine. You can use this command with
the following options:
-s print kernel name
-n print node name (network node host name)
-r print kernel release
-v print kernel version
-m print the machine name
-p print machine hardware name
-i print hardware platform
-o print operating system name
who - lists the users currently logged in to the machine
whoami - displays the login of the user on the current terminal (answers the
question "Who am I?")
arch - shows the architecture of the machine
cal 2007 - shows the calendar for 2007
cat /proc/cpuinfo - shows information about the CPU
cat /proc/meminfo - verifies memory usage
cat /proc/swaps - shows the swap files
cat /proc/version - shows the kernel version
cat /proc/interrupts - shows interrupts
cat /proc/net/dev - shows network adapters and addresses
clock -w - saves date changes to the BIOS
date - shows the date
date 041217002007.00 - sets date and time: MonthDayHourMinuteYear.Seconds
dmidecode -q - shows hardware system components (SMBIOS / DMI)
hdparm -i /dev/hda - displays the characteristics of a hard disk
hdparm -tT /dev/sda - performs a read test on a hard disk
lspci -tv - displays PCI devices
lsusb -tv - shows USB devices
cat file1 - views the contents of a file starting from the first row
head -2 file1 - views the first two lines of a file
ps - lists your current processes by their PID (process identification number)
cat file1 file2 - displays the contents of file1 followed by file2 on the screen (or
window) without any screen breaks
cat file1 file2 > file3 - creates file3 containing file1 followed by file2
diff - shows the differences between two files
diff abc def - displays any lines in abc or def that differ from each other
pwd - displays the full pathname of the current working directory
rm abc def - deletes both abc and def
rm -i abc def - first asks if you really want to delete these files, then deletes
the ones for which you answer yes (y) (-i for interactive)
rmdir - removes a subdirectory; it cannot contain any files, you need to rm (delete)
them first
rmdir mno - deletes the empty subdirectory named mno
ls - lists all files in your current directory
ls -a - lists all files in your current directory, including any dot (.) files
(e.g. .login)
ls *.java - lists all files in your current directory that end with the characters
'.java' (e.g. example1.java)
ls -F - lists files in your current directory, putting a slash (/) after directories
and an asterisk (*) after executables
ls -l - lists all files in your current directory, showing protection codes, date of
creation (or most recent modification), and size
mv -i abc def - renames abc to def; can also be thought of as moving the file abc on
top of the file def, asking permission if the file def already exists (-i for
interactive)
cp abc def - copies the file abc to (or on top of) a file named def
cp -i abc adir/def - copies into directory adir, requesting approval if the file
adir/def already exists (-i means interactive)
cp -r adir bdir - copies the entire contents of the directory adir to a new (or on
top of the old) directory bdir (-r means recursive)
cp dir/* . - copies all files of a directory into the current working directory
cd abc - moves to a subdirectory named abc located below your current directory
cd .. - moves to the parent directory of your current directory
cd ../adir - moves to a directory named adir located in the parent directory of your
current directory
printenv - shows the current environment settings
passwd - changes your current password
less file1 - similar to the more command but allows backward as well as forward
movement in the file
more file1 - views the content of a file page by page
tail -2 file1 - views the last two lines of a file
tail -f /var/log/messages - views in real time what is added to a file
mkdir -p /tmp/dir1/dir2 - creates a directory tree
mkdir dir1 dir2 - creates two directories simultaneously
bunzip2 file1.bz2 - decompresses a file called 'file1.bz2'
bzip2 file1 - compresses a file called 'file1'
gunzip file1.gz - decompresses a file called 'file1.gz'
gzip -9 file1 - compresses with maximum compression
rar a file1.rar test_file - creates a rar archive called 'file1.rar'
rar a file1.rar file1 file2 dir1 - compresses 'file1', 'file2' and 'dir1'
simultaneously
rar x file1.rar - decompresses a rar archive
unrar x file1.rar - decompresses a rar archive
unzip file1.zip - decompresses a zip archive
zip file1.zip file1 - creates a zip-compressed archive
zip -r file1.zip file1 file2 dir1 - compresses several files and directories into a
zip archive simultaneously
du -sh dir1 - estimates the space used by directory 'dir1'
du -sk * | sort -rn - shows the size of files and directories sorted by size
ls -lSr | more - shows the size of files and directories
find / -name file1 - searches for a file or directory starting from '/'
find / -user user1 - searches for files and directories belonging to 'user1'
find /home/user1 -name \*.bin - searches for files with the '.bin' extension within
the directory '/home/user1'
find /usr/bin -type f -atime +100 - searches for binary files not used in the last
100 days
find /usr/bin -type f -mtime -10 - searches for files created or changed within the
last 10 days
find / -name \*.rpm -exec chmod 755 '{}' \; - searches for files with the '.rpm'
extension and modifies their permissions
find / -xdev -name \*.rpm - searches for files with the '.rpm' extension, ignoring
removable partitions such as cdrom, pen-drive, etc.
locate \*.ps - finds files with the '.ps' extension (run the 'updatedb' command
first)
whereis halt - shows the location of a binary file, its source, or its man page
which halt - shows the full path to a binary / executable
lstree - shows files and directories in a tree starting from root (2)
tree - shows files and directories in a tree starting from root (1)
badblocks -v /dev/hda1 - checks for bad blocks on disk hda1
dosfsck /dev/hda1 - repairs / checks the integrity of a DOS file system on disk hda1
e2fsck /dev/hda1 - repairs / checks the integrity of an ext2 file system on disk
hda1
e2fsck -j /dev/hda1 - repairs / checks the integrity of an ext3 file system on disk
hda1
mkswap /dev/hda3 - creates a swap file system
swapon /dev/hda3 - activates a new swap partition
swapon /dev/hda2 /dev/hdb3 - activates two swap partitions
hostname - shows the hostname of the system
route -n - shows the routing table
free -m - displays the status of RAM in megabytes
kill -9 process_id - forces closure of the process and terminates it
kill -1 process_id - forces a process to reload its configuration
last reboot - shows the reboot history
lsmod - displays the loaded kernel modules
pstree - shows the system processes as a tree
top - displays the Linux tasks using the most CPU