
Whenever a statement is executed, Oracle follows a methodology to evaluate the statement
in terms of syntax, validity of the objects being referred to and, of course, the privileges of the user.
Apart from this, Oracle also checks for identical statements that may already have been fired, with
the intention of reducing processing overhead. All this takes place in a fraction of a second,
or even less, without the user knowing what is happening to the statement that was fired. This
process is known as Parsing.
Types of Parsing
All statements, DDL or DML, are parsed whenever they are executed. The key distinction is
whether the parse is a Soft parse (the statement is already parsed and available in memory) or
a Hard parse (all parsing steps have to be carried out). Soft parsing considerably improves
system performance, whereas frequent hard parsing degrades it. Reducing hard
parsing improves resource utilization and optimizes the SQL code.
Parsing process
Oracle internally does the following to arrive at the output of an SQL statement.
1. Syntactical check. The query fired is checked for its syntax.
2. Semantic check. Checks the validity of the objects referred to in the statement and
the privileges available to the user firing the statement. This is a data dictionary check.
3. Allocation of private SQL area in the memory for the statement.
4. Generating a parsed representation of the statement and allocating Shared SQL area.
This involves finding an optimal execution path for the statement.
In step four, Oracle first checks whether the same statement is already parsed and present in
memory. If found, the parsed representation is picked up and the statement is executed
immediately (soft parse). If not found, the parsed representation is generated, stored
in a shared SQL area (part of the shared pool memory in the SGA), and the statement is then
executed (hard parse). This step involves the optimization of the statement, which is what
decides its performance.
Identical statements
Oracle does the following to find identical statements to decide on a soft or a hard parse.
a. When a new statement is fired, a hash value is generated for the text string. Oracle
checks if this new hash value matches with any existing hash value in the shared pool.
b. Next, the text string of the new statement is compared with the statements whose hash
values match. This comparison includes case, blanks and comments present in the
statements.
c. If a match is found, the objects referred to in the new statement are compared with those of the
matching statement. Tables with the same name belonging to different schemas will
not count as a match.
d. The bind variable types of the new statement must be of the same type as those of the identified
matching statement.
e. If all of the above is satisfied, Oracle re-uses the existing parse (soft). If a match is not
found, Oracle goes through the process of parsing the statement and putting it in the
shared pool (hard).
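As an illustration (the EMP table name is assumed here), statements that differ only in case, spacing or literal values show up as separate entries in V$SQLAREA, each parsed on its own. A query along the following lines can expose such near-duplicates:

select sql_text, hash_value, parse_calls, executions
from v$sqlarea
where upper(sql_text) like 'SELECT ENAME FROM EMP%';

Several rows for what is logically the same query indicate that the sharing criteria above were not met.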
Reduce hard parsing
The shared pool memory can be increased when contention occurs, but it is more important
to address such issues at the coding level. The following are some initiatives that
can be taken to reduce hard parsing.
1. Make use of bind variables rather than hard-coding values in your statements.
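For example (table and column names are illustrative), the two statements below differ only in the literal value, so each one is hard parsed, whereas the bind variable version re-uses a single shared cursor:

-- Hard-coded literals: a new hard parse for every distinct value
select ename from emp where empno = 7369;
select ename from emp where empno = 7499;

-- Bind variable: one parse, many executions (SQL*Plus syntax)
variable v_empno number
exec :v_empno := 7369
select ename from emp where empno = :v_empno;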
2. Write generic routines that can be called from different places. This will also eliminate
code repetition.
3. Even with stringent checks, it may so happen that the same statements are written in
different formats. Search the SQL area periodically to check on similar queries that are
being parsed separately. Rewrite these statements identically or put them in a
common routine so that a single parse can take care of all calls to the statement.
Identifying unnecessary parse calls at system level
select parse_calls, executions,
substr(sql_text, 1, 300)
from v$sqlarea
where command_type in (2, 3, 6, 7);
Check for statements with a high number of executions. It is bad for the PARSE_CALLS value
to be close to the EXECUTIONS value. The above query returns only DML
statements (to check other types of statements, use the appropriate command type
numbers). Also ignore recursive calls (dictionary access), as these are internal to Oracle.
Identifying unnecessary parse calls at session level
select b.sid, a.name, b.value
from v$sesstat b, v$statname a
where a.name in ('parse count (hard)', 'execute count')
and b.statistic# = a.statistic#
order by sid;
Identify the sessions involved with a lot of re-parsing (VALUE column). Query these sessions
from V$SESSION and then locate the program that is being executed, resulting in so much
parsing.
select a.parse_calls, a.executions, substr(a.sql_text, 1, 300)
from v$sqlarea a, v$session b
where b.schema# = a.parsing_schema_id
and b.sid = <:sid>
order by 1 desc;
The above query will also show recursive SQL being fired internally by Oracle.
4. Provide enough private SQL area to accommodate all of the SQL statements for a
session. Depending on the requirement, the parameter OPEN_CURSORS may need to be
reset to a higher value. Set the SESSION_CACHED_CURSORS to a higher value to allow
more cursors to be cached at session level and to avoid re-parsing.
Identify how many cursors are being opened by sessions
select a.username, a.sid, b.value
from v$session a, v$sesstat b, v$statname c
where b.sid = a.sid
and c.statistic# = b.statistic#
and c.name = 'opened cursors current'
order by 3 desc;
The VALUE column will identify how many cursors are open for a session and how near the
count is to the OPEN_CURSORS parameter value. If the margin is very small, consider
increasing the OPEN_CURSORS parameter.
Evaluate cached cursors for sessions as compared to parsing
select a.sid, a.value parse_cnt,
(select x.value
from v$sesstat x, v$statname y
where x.sid = a.sid
and y.statistic# = x.statistic#
and y.name = 'session cursor cache hits') cache_cnt
from v$sesstat a, v$statname b
where b.statistic# = a.statistic#
and b.name = 'parse count (total)'
and value > 0;
The CACHE_CNT ('session cursor cache hits') of a session should be compared to the
PARSE_CNT ('parse count (total)'). If the difference is high, consider increasing the
SESSION_CACHED_CURSORS parameter.
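As an illustrative sketch (the values are arbitrary and should be sized for your application), OPEN_CURSORS is set in the initialization parameter file, while SESSION_CACHED_CURSORS can also be raised for an individual session:

-- In the initialization parameter file:
--   open_cursors = 500
--   session_cached_cursors = 100

-- For an individual session:
alter session set session_cached_cursors = 100;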
The following parse-related statistics are available in the V$SYSSTAT and V$SESSTAT views;
join with V$STATNAME on the STATISTIC# column.
SQL> select * from v$statname where name like '%parse%';

STATISTIC# NAME CLASS
---------- ------------------------- ----------
217 parse time cpu 64
218 parse time elapsed 64
219 parse count (total) 64
220 parse count (hard) 64
221 parse count (failures) 64
5. The shared SQL area may be further utilized not only for identical but also for somewhat
similar queries by setting the initialization parameter CURSOR_SHARING to FORCE. The
default value is EXACT. Do not use this parameter in Oracle 8i, as there is a bug
that hangs sessions running similar queries because of some internal processing. If you are on
9i, try out this parameter for your application in test mode before making changes in
production.
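A hedged way of trying this out on a 9i test system is at session level first, before touching the instance-wide setting:

alter session set cursor_sharing = FORCE;
-- run and verify the critical application queries, then revert
alter session set cursor_sharing = EXACT;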
6. Prevent large SQL or PL/SQL areas from ageing out of the shared pool memory. Ageing
out takes place based on a least recently used (LRU) mechanism. Set the
parameter SHARED_POOL_RESERVED_SIZE to a larger value to prevent large packages
from being aged out because of new entries. A large overhead is involved in reloading a
large package that was aged out.
7. Pin frequently used objects in memory using the DBMS_SHARED_POOL package. This package
is created by default. It can also be created explicitly by running the DBMSPOOL.SQL script; this
internally calls the PRVTPOOL.PLB script. Use it to pin the most frequently used objects that should
stay in memory while the instance is up; these include procedures (p), functions (p),
packages (p) and triggers (r). Pin objects when the instance starts to avoid memory
fragmentation (even frequently used data can be pinned, but this is a separate topic).
To view a list of frequently used and re-loaded objects
select loads, executions, substr(owner, 1, 15) "Owner",
substr(namespace, 1, 20) "Type", substr(name, 1, 100) "Text"
from v$db_object_cache
order by executions desc;
To pin a package in memory
SQL>exec dbms_shared_pool.keep('standard', 'p');
To view a list of pinned objects
select substr(owner, 1, 15) "Owner",
substr(namespace, 1, 20) "Type",
substr(name, 1, 100) "Text"
from v$db_object_cache
where kept = 'YES';
8. Increasing the shared pool size is an immediate solution, but the above steps need to be
carried out to optimize the database in the long run. The size of the shared pool can be
increased by setting the parameter SHARED_POOL_SIZE in the initialization file.
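For illustration only (the sizes are arbitrary), the initialization file entries referred to in points 6 and 8 look like this:

# init.ora - illustrative values, size according to your workload
shared_pool_size          = 200M
shared_pool_reserved_size = 20M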
Conclusion
Reduce hard parsing as much as possible! This can be done by writing generic routines that
can be called from different parts of the application; hence the importance of writing uniform
and generic code.

Locally Managed Tablespace (LMT) is one of the key features in Oracle database. These
have been made available since Oracle 8i. It is worth using LMTs considering the benefits in
doing so. I have put forward some scenarios that may be worth noting, for systems that are
already using LMTs or planning to shift to LMTs.
Benefits of LMTs
Below are the key benefits offered by LMTs. Not all of them are achieved simply by migrating existing DMTs to LMTs.
1. Dictionary contention is reduced.
Extent management in DMTs is carried out at the data dictionary
level. This requires exclusive locks on dictionary tables. Heavy data processing that
results in extent allocation/deallocation may sometimes cause contention in the
dictionary.
Extents are managed at the datafile level in LMTs. Dictionary tables are no longer
used for storing extent allocation/deallocation information. The only information still
maintained in the dictionary for LMTs is the tablespace quota for users.
2. Space wastage removed.
In DMTs, there is no built-in mechanism to enforce uniform extent sizes. The extent
sizes may vary depending on the storage clause provided at the object level or the
tablespace level, resulting in space wastage and fragmentation.
Oracle enforces uniform extent allocation in LMTs created with the
UNIFORM SIZE clause. Space wastage is removed, as all extents in the
tablespace are of the same size.
3. No Rollback generated.
In DMTs, all extent allocations and deallocations are recorded in the data dictionary.
This generates undo information thus using vital resources and may compete with
other processes.
In LMTs, no rollback is generated for space allocation and deallocation activities.
4. ST enqueue contention reduced.
In DMTs, the Space Transaction (ST) enqueue is acquired when there is a need for
extent allocation. It is also exclusively acquired by the SMON process for
coalescing free space. Only one such enqueue exists per instance, which may
sometimes result in contention and performance issues if heavy extent processing is
being carried out. The following error is common in such a scenario.
ORA-01575: timeout waiting for space management resource
As the ST enqueue is not used by LMTs, they reduce the overall ST enqueue contention.
5. Recursive space management operations removed.
In DMTs, the SMON process wakes up every 5 minutes to coalesce free
space. Optionally, the ALTER TABLESPACE <tablespace name> COALESCE command
can also be used to coalesce DMTs and reduce fragmentation.
On the other hand, LMTs avoid recursive space management operations and
automatically track adjacent free space, thus eliminating the need to coalesce free
extents. This further reduces fragmentation.
6. Fragmentation reduced.
Fragmentation is reduced in LMTs but not completely eliminated. Since adjacent free
spaces are automatically tracked, there is no need to do coalescing, as is required in
the case of DMTs.
Management of Extents in LMTs
Oracle maintains a bitmap in each datafile to track used and free space availability in an
LMT. The initial blocks in the datafiles are allocated as File Space Bitmap blocks to maintain
the extent allocation information present in the datafile. Each bit stored in the bitmap
corresponds to a block or a group of blocks. Whenever extents are allocated or freed,
Oracle changes the bitmap values to reflect the new status. Such updates to the bitmap
do not generate any rollback information.
The number of blocks that a bit represents in a bitmap depends on the database block size
and the uniform extent size allocated to the tablespace. For example, if the
DB_BLOCK_SIZE parameter is set to 8K, and the tablespace is created with uniform extent
sizing of 64K, then 1 bit will map to one 64K extent, i.e., 64K (extent size)/8K (block size)
= 8 database blocks.
Allocation Types in LMTs
Allocation type plays a very important role in how an LMT behaves. It specifies how
extents are allocated by the system. There are three ways of allocating extents in LMTs:
USER, SYSTEM and UNIFORM.
USER- The LMT behaves like a DMT, allocating extents as per the storage clause
provided with the object or defaulted at the tablespace level. The advantage is that
allocation of extents is managed at the datafile level and such tablespaces will not
compete for ST enqueue. The disadvantage is that such tablespaces are not subject
to uniform extent allocation policy. DMTs that are converted to LMTs fall under this
type.
SYSTEM- Oracle manages the space. The extents are auto allocated by the system
based on an internal algorithm. Allocation of extents is managed at the datafile level
and such tablespaces will not compete for ST enqueue. Such tablespaces would have
extents of varying sizes and would result in fragmentation and some space being
wasted. This is a good alternative if the extent sizes of the various objects to be
placed in the tablespace cannot be determined.
UNIFORM- All extents in the tablespace are of a fixed size, provided when
creating the LMT. This type gives all the benefits offered by LMTs and is the one
to aim for.
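The allocation type of existing tablespaces can be checked from the data dictionary, for example:

select tablespace_name, extent_management, allocation_type
from dba_tablespaces;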

Storage parameters usage in LMT
Storage parameters are used in DMTs to specify object sizing. These parameters are not
of much importance in UNIFORM type LMTs, but they do play a role in deciding the initial
allocation of space. Oracle considers the storage clause when deciding the initial number of
extents to be allocated. For example, consider an LMT created with a 32K uniform extent size
and a database block size of 8K.
SQL> create table am05 (col1 number)
2 storage (initial 100k next 100k minextents 1 maxextents unlimited pctincrease 0);

SQL> select segment_name, segment_type, extent_id, bytes, blocks
2 from user_extents where segment_name = 'AM05';

SEGMENT_NAME SEGMENT_TYPE EXTENT_ID BYTES BLOCKS
-------------------- ------------------ ---------- ---------- ----------
AM05 TABLE 0 32768 4
AM05 TABLE 1 32768 4
AM05 TABLE 2 32768 4
AM05 TABLE 3 32768 4
Oracle allocates four extents totaling 128K, the smallest multiple of 32K that covers the 100K
requested as the initial extent. Please note that all the extents allocated have the uniform extent
size of 32K; only the number of extents to be allocated is decided by the storage clause.
See the example below to clarify this.
SQL> create table am06 (col1 number)
2 storage(initial 200k next 100k minextents 2 maxextents unlimited pctincrease 0);

SQL> select segment_name, segment_type, extent_id, bytes, blocks
2 from user_extents where segment_name = 'AM06';

SEGMENT_NAME SEGMENT_TYPE EXTENT_ID BYTES BLOCKS
-------------------- ------------------ ---------- ---------- ----------
AM06 TABLE 0 32768 4
AM06 TABLE 1 32768 4
AM06 TABLE 2 32768 4
AM06 TABLE 3 32768 4
AM06 TABLE 4 32768 4
AM06 TABLE 5 32768 4
AM06 TABLE 6 32768 4
AM06 TABLE 7 32768 4
AM06 TABLE 8 32768 4
AM06 TABLE 9 32768 4

10 rows selected.

SQL> select sum(bytes)/1024 from user_extents where segment_name = 'AM06';

SUM(BYTES)/1024
---------------
320
As per the storage clause, the table should be allocated 200K + 100K = 300K of space (since
minextents is 2). Oracle rounds up and allocates 10 extents of 32K,
totaling 320K.
Even pctincrease plays a role in uniform LMTs, as the example below shows.
SQL> create table am07 (col1 varchar2(200))
2 storage(initial 16K next 16K minextents 5 maxextents unlimited pctincrease 50);

Table created.

SQL> select segment_name, segment_type, extent_id, bytes, blocks
2 from user_extents where segment_name = 'AM07';

SEGMENT_NAME SEGMENT_TYPE EXTENT_ID BYTES BLOCKS
-------------------- ------------------ ---------- ---------- ----------
AM07 TABLE 0 32768 4
AM07 TABLE 1 32768 4
AM07 TABLE 2 32768 4
AM07 TABLE 3 32768 4
AM07 TABLE 4 32768 4

SQL> select sum(bytes)/1024 from user_extents where segment_name = 'AM07';

SUM(BYTES)/1024
---------------
160
As per the storage clause, the required initial size of the table should be 146K (16 + 16 + 24
+ 36 + 54); Oracle rounds up to 160K (5 x 32K extents).
Hence, the storage clause can be used to control the initial size allocated for an object. The
DEFAULT STORAGE clause, however, cannot be specified for LMTs at the tablespace level.
SQL> create tablespace users4
2 datafile 'D:\oracle\oradata3\users4.dfb' size 5M
3 autoextend off
4 extent management local uniform size 32K
5 default storage(initial 100k next 100k minextents 2 maxextents unlimited pctincrease
50);
create tablespace users4
*
ERROR at line 1:
ORA-25143: default storage clause is not compatible with allocation policy
Please refer to the examples section below for LMT creation and migration examples.
DBMS_SPACE_ADMIN Package
This Oracle-supplied package is used for managing LMTs. The following key procedures are
available.
TABLESPACE_VERIFY
The first parameter is the tablespace name and the next is the verify option (this defaults to
the constant TABLESPACE_VERIFY_BITMAP). This routine verifies the bitmap at tablespace
level with the extent maps of the segments present in the tablespace. This ensures the
consistency of the bitmap.
exec dbms_space_admin.tablespace_verify('GLD');
TABLESPACE_REBUILD_BITMAPS
This procedure rebuilds the appropriate bitmap(s). If no bitmap block DBA is specified, then
it rebuilds all bitmaps for the given tablespace.
exec dbms_space_admin.tablespace_rebuild_bitmaps('ECXX');
TABLESPACE_REBUILD_QUOTAS
This procedure rebuilds quota allocations for the given tablespace.
exec dbms_space_admin.tablespace_rebuild_quotas('USERS');

TABLESPACE_MIGRATE_FROM_LOCAL
Migrates a tablespace from LMT to DMT. The tablespace should be online and read-write during
the migration.
exec dbms_space_admin.tablespace_migrate_from_local('USERS');
TABLESPACE_MIGRATE_TO_LOCAL
Migrates a tablespace from DMT to LMT. The tablespace should be online and read-write during
the migration. SYSTEM tablespace migration is not supported in 8i releases; this is available in
9i. Migration of temporary tablespaces (contents temporary) is not supported; these can be
dropped and rebuilt as LMTs.
Tablespaces migrated to the locally managed format have the USER allocation type, so uniform
extent sizing has to be achieved manually. The tables and indexes in such tablespaces will
grow according to the storage clause specified.
This procedure takes three parameters: tablespace name, the allocation unit size in bytes
(optional) and the relative file number (optional) where the bitmap block should be placed
for the tablespace.
The relative file number is not required when only one datafile exists in a tablespace. For
multiple datafiles, if it is not specified, the system will automatically choose one to place the
bitmap into. Only one bitmap header is created for all existing files.
The allocation unit size specified should be a factor of the unit size calculated by the system.
By default, the system calculates the allocation unit size based on the highest common
divisor of all extents for the concerned tablespace. This number is further trimmed based on
the Minimum Extent of the tablespace. If the specified unit size allocation is not a factor of
the unit size calculated by the system, an error message is returned. Preferably, allow the
system to compute this value for you.
exec dbms_space_admin.tablespace_migrate_to_local('ECXX');
Please refer to the examples below for using the DBMS_SPACE_ADMIN package.
Checking space availability in LMTs
The existing DBA_FREE_SPACE view is still available for checking free space in both LMT and
DMT tablespaces. In addition, two more views were introduced by Oracle:
DBA_LMT_FREE_SPACE and DBA_DMT_FREE_SPACE. These views show the available blocks,
which should be multiplied by the block size to get the total bytes.
select name, (sum(a.blocks * 8192))/1024/1024 "size MB"
from dba_lmt_free_space a, v$tablespace b
where a.tablespace_id = b.ts#
group by name;

select name, (sum(a.blocks * 8192))/1024/1024 "size MB"
from dba_dmt_free_space a, v$tablespace b
where a.tablespace_id = b.ts#
group by name;
Beware of an ORA-600 error that may be encountered when using DBA_LMT_FREE_SPACE. For
example, the following statement gave me trouble until I found the cause to be an internal
problem that is resolved in higher releases.
SQL> select * from dba_lmt_free_space where tablespace_id = 1000;
select * from dba_lmt_free_space where tablespace_id = 1000
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [ktsitbs_info1], [1000], [], [], [], [], [], []

The examples below were tried on database version 8.1.7.0.0 with a block size of 8K.

(1) To create a new LMT with uniform extents of 32K

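A minimal creation statement for this (the datafile name, path and sizes are illustrative):

SQL> create tablespace data_lmt
  2  datafile 'D:\oracle\oradata3\data_lmt.dbf' size 5M
  3  autoextend off
  4  extent management local uniform size 32K;

Tablespace created.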
(2) To create a new LMT that is SYSTEM managed.

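A minimal creation statement for this (names and sizes again illustrative); the AUTOALLOCATE clause lets Oracle decide the extent sizes:

SQL> create tablespace data_auto
  2  datafile 'D:\oracle\oradata3\data_auto.dbf' size 5M
  3  autoextend off
  4  extent management local autoallocate;

Tablespace created.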
(3) To find the list of DMTs in the database.
SQL> select tablespace_name, status, contents
2 from dba_tablespaces
3 where extent_management= 'DICTIONARY';

TABLESPACE_NAME STATUS CONTENTS
-------------------- --------- ---------
SYSTEM ONLINE PERMANENT
RBS ONLINE PERMANENT
USERS ONLINE PERMANENT
TEMP ONLINE TEMPORARY
TOOLS ONLINE PERMANENT
INDX ONLINE PERMANENT
DRSYS ONLINE PERMANENT

(4) To find the list of LMTs in the database.
SQL> select tablespace_name, status, contents
2 from dba_tablespaces
3 where extent_management= 'LOCAL';

TABLESPACE_NAME STATUS CONTENTS
-------------------- --------- ---------
OEM_REPOSITORY ONLINE PERMANENT
USERS2 ONLINE PERMANENT
USERS3 ONLINE PERMANENT
(5) Migrating DMT to LMT. Please note the error returned when a wrong allocation unit size is
provided.
SQL> select tablespace_name, status, contents, extent_management, allocation_type
2 from dba_tablespaces
3 where tablespace_name = 'USERS';

TABLESPACE_NAME STATUS CONTENTS EXTENT_MAN ALLOCATIO
------------------------------ --------- --------- ---------- ---------
USERS ONLINE PERMANENT DICTIONARY USER


SQL> select tablespace_name, status, contents, extent_management, allocation_type
2 from dba_tablespaces
3 where tablespace_name = 'ECXX';

TABLESPACE_NAME |STATUS |CONTENTS |EXTENT_MAN|ALLOCATIO
______________________________|_________|_________|__________|_________
ECXX |ONLINE |PERMANENT|DICTIONARY|USER

SQL> exec dbms_space_admin.tablespace_migrate_to_local('ECXX', 512);
BEGIN dbms_space_admin.tablespace_migrate_to_local('ECXX', 512); END;

*
ERROR at line 1:
ORA-03241: Invalid unit size
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 0
ORA-06512: at line 1

SQL> exec dbms_space_admin.tablespace_migrate_to_local('ECXX');

PL/SQL procedure successfully completed.
(6) Migrating tablespace from LMT to DMT
To migrate from LMT to DMT. The tablespace should be online and read write during
migration.
SQL> select tablespace_name, status, contents, extent_management, allocation_type
2 from dba_tablespaces
3 where tablespace_name = 'ECXX';

TABLESPACE_NAME |STATUS |CONTENTS |EXTENT_MAN|ALLOCATIO
______________________________|_________|_________|__________|_________
ECXX |ONLINE |PERMANENT|LOCAL |USER

SQL> exec dbms_space_admin.tablespace_migrate_from_local('ECXX');

PL/SQL procedure successfully completed.

SQL> select tablespace_name, status, contents, extent_management, allocation_type
2 from dba_tablespaces
3 where tablespace_name = 'ECXX';

TABLESPACE_NAME |STATUS |CONTENTS |EXTENT_MAN|ALLOCATIO
______________________________|_________|_________|__________|_________
ECXX |ONLINE |PERMANENT|DICTIONARY|USER
(7) Creating an LMT with a DEFAULT STORAGE clause results in an error.
SQL> create tablespace users3
2 datafile 'D:\oracle\oradata3\users3.dbf' size 5M
3 autoextend off
4 extent management local uniform size 32K
5 default storage (initial 32K next 32k minextents 1 maxextents unlimited pctincrease
10);
create tablespace users3
*
ERROR at line 1:
ORA-25143: default storage clause is not compatible with allocation policy
(8) Converting a dictionary managed temporary tablespace is not supported as of Oracle
8.1.7.
SQL> exec dbms_space_admin.tablespace_migrate_to_local('TEMPTM');
BEGIN dbms_space_admin.tablespace_migrate_to_local('TEMPTM'); END;

*
ERROR at line 1:
ORA-03245: Tablespace has to be dictionary managed, online and permanent to be able to
migrate
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 0
ORA-06512: at line 1
(9) Extent storage parameters cannot be altered for objects in UNIFORM and SYSTEM LMTs, as
extents are handled at the tablespace level.
SQL> alter table am1 storage(next 100k);
alter table am1 storage(next 100k)
*
ERROR at line 1:
ORA-25150: ALTERING of extent parameters not permitted

(10) COMPATIBLE parameter should be set to 8.1.6.0.0 or greater when migrating
tablespaces.
SQL> select name, value from v$parameter where name = 'compatible';

NAME VALUE
---------------------------------------------------------------- ---------
compatible 8.1.0

SQL> exec dbms_space_admin.tablespace_migrate_to_local('users', 512);
BEGIN dbms_space_admin.tablespace_migrate_to_local('users', 512); END;

*
ERROR at line 1:
ORA-00406: COMPATIBLE parameter needs to be 8.1.6.0.0 or greater
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 0
ORA-06512: at line 1
Notes
1. To move an existing DMT to LMT without losing any of the LMT features, you may
consider creating a new LMT and then moving the objects from the existing DMT to it. This
way both uniform extent allocation and local management of extents features are available.
2. As of Oracle 8.1.7, the SYSTEM tablespace cannot be locally managed. This is supported in
higher releases.
3. SMON Process coalesces only DMT tablespaces every 5 minutes, where pctincrease is not
set to 0.
4. As of Oracle 8.1.5, it is possible to create LMTs but not possible to migrate an existing
DMT to LMT.
5. As of Oracle 8.1.6, it is possible to create and migrate to LMT.
6. Tablespaces are by default created as LMTs in Oracle 9i.
7. SYSTEM tablespace restrictions as an LMT.
Creating or migrating the SYSTEM tablespace to LMT is a one-way process. Make sure that all
existing DMTs are first converted to LMTs before converting the SYSTEM tablespace. If any DMT
is present in the database after conversion of SYSTEM to LMT, it will be marked
READ-ONLY and cannot be changed back to READ-WRITE. Once created or converted to LMT,
the SYSTEM tablespace cannot be converted back to DMT. Once SYSTEM is an LMT, no more
DMTs can be created in the database.
8. Once all the tablespaces are converted to LMTs, the table FET$ will no longer contain any
records.
Conclusion
LMT is a highly beneficial and powerful feature, and it makes the management of object extents
much easier. With the implementation of LMTs, one should re-evaluate and revise the
extent management and object sizing policies that were followed for DMTs.
If your application was developed in an earlier release of Oracle (v7 and earlier), chances are
that your database is running with the rule-based optimizer. This article will help you understand
the Oracle optimizer and efficient ways of moving to the cost-based optimizer. This is
Part 1 of a five-part series.
Part 1
1. What is Optimizer?
2. Why Optimize?
3. Available Optimizers
4. Why is RBO being removed?
5. Why move to CBO?
Part 2
6. Initialization parameters that affect CBO
7. Internal Oracle parameters that affect CBO
Part 3
8. Setup changes for migrating to CBO
9. Generating Statistics
10. DML Monitoring
Part 4
11. Hints
12. Stored outlines
13. Statistics for SYS schema
Part 5
14. New Privileges
15. How to analyze execution plans in CBO?
16. Oracle Applications 11i specific information for CBO
17. Conclusion
1. What is Optimizer?
In Oracle, a query may be executed in more than one way. The execution plan with the
best ranking or the lowest cost is the one that will return output fastest with
optimal utilization of resources. The execution plan is generated by the optimizer. The optimizer
is an 'engine' running in the database that is dedicated to deriving a list of execution paths
based on various conditions and then choosing the most efficient one for running the query. Once
an execution plan is chosen, it is carried out to arrive at the output.
In Oracle, the optimizer applies to DML statements.
2. Why Optimize?
You know it! Optimizing a query aims at executing it in the shortest time and with optimal
use of resources, thus making it fast and efficient. By resources, here I mean CPU
utilization, hard disk I/O, memory consumption and to some extent, network operations.
Irrespective of how big or rich your server is in terms of these resources, improper or sub-
optimal queries will always be expensive and may drag your session or impact other processes
on the server.
The extent to which a query is expensive will depend on a lot of factors, including the size of
the result set to be fetched, the size of the data being scanned to retrieve the result set and
the load on the system at that point in time. Proper optimization of statements will save
your users a lot of runtime wastage and unwanted resource utilization.
3. Available Optimizers
Oracle has two modes for Optimizer to decide on the best execution plan, Rule based and
Cost based. This article concentrates on Cost Based Optimizer and Rule based is described
in brief.
3.1 Rule Based Optimizer (RBO)
RBO follows a simple ranking methodology. Fifteen ranking points are defined in this
optimizer. When a query is received, the optimizer evaluates which of these access paths are
available. The execution path with the best rank (lowest number) is then chosen for
executing the query. The fifteen-point ranking is given below.
1. Single row by ROWID
2. Single row by cluster join
3. Single row by hash cluster with unique or primary key
4. Single row by unique or primary key
5. Cluster join
6. Hash cluster key
7. Indexed cluster key
8. Composite index
9. Single column indexes
10. Bounded range on index columns
11. Unbounded range on indexed columns
12. Sort merge join
13. MAX or MIN on indexed column
14. ORDER BY on indexed columns
15. Full table scan
For example, if I fire a query on a table where two columns are searched for an exact
match (equal-to) in the WHERE clause, one being the primary key and the other having a
non-unique index, RBO will prefer the primary key (rank 4) to the non-unique
index (rank 9).
When more than one table is accessed in a query, the optimizer needs to decide which
should be the driving table. The RBO generates a set of join orders, each with a different
table as the first table. Then the most optimal plan is chosen from the resulting set of
execution plans.
The optimizer evaluates the execution plans for various conditions (fewest nested-
loop joins, fewest sort-merge joins, table with the best-ranking access path, etc.). If there is still
a tie, the optimizer chooses the execution plan for which the first table appears later in the
query's FROM clause. Hence, it is a conventional coding practice to put the driving table at
the extreme right, followed by other tables in order of access in the FROM clause, i.e., the
ordering of tables based on their access is from right to left.
Please note that the operators being used for searching the columns also play a role in
deciding the ranking. Sometimes even the age of an index is considered for ranking!
For example, the table below shows which index is used if column1 and column2 have
indexes on them and both are referred to in the WHERE clause with the "=" operator.
Example:
select * from am79 where col1 = 1 and col2 = 'amar';
-- here both col1 and col2 are indexed.

-------------------------------------------------------------------------------------
Normal index types | Index used in RBO
column1(a) column2(b) column1+column2(c) |
-------------------------------------------------------------------------------------
non-unique non-unique c
non-unique non-unique a + b
non-unique non-unique non-unique c
unique non-unique a
unique non-unique a
unique unique b (the most recent index created)
unique unique unique c
-------------------------------------------------------------------------------------
-The above is tested on Oracle 8.1.7.1.
-In the case of non-unique single-column indexes, both indexes are used.
-In the case of unique indexes, they are not combined in the execution plan; any one is taken.
-Preference is given to the index on the column compared with the "=" operator over
indexes on columns compared with other operators.
-Don't create bitmap or function-based indexes, as these will not be used by RBO.
-------------------------------------------------------------------------------------
RBO was the preferred choice for most setups in earlier releases of Oracle, as the execution
paths were consistent and uniform. Queries would behave the same way when run on different
databases of the same application.

3.2 Cost based optimizer (CBO)
CBO follows an expense-calculation methodology. All execution plans are tagged with a cost,
and the one with the lowest cost is chosen. The higher the cost, the more resources the
execution plan will use; the lower the cost, the more efficient the query.
CBO uses all available information (statistics and histograms stored in the dictionary, user-
provided hints and supplied parameter settings) to arrive at the cost. CBO generates all
possible permutations of access methods and then chooses the one that fits best. The number of
permutations depends on the number of tables present in the query and can sometimes be
around 80,000 or even more! Please refer to the parameter section in part 2
of this series for the related parameter settings.
CBO may also perform operations such as query transformation, view merging, OR
transformation, pushing of join predicates, etc. that change the original statement by
altering existing predicates or adding new ones, all with the aim of deriving new access plans
that could be better than the existing ones. Note that transformation does not affect the data
that is returned, only the execution path. Please refer to the parameter section in part 2 of this
series for information related to this.
3.2.1 Statistics
Statistics provide critical input in order for CBO to work properly; these are generated for
data storing objects and include information such as the number of rows in a table, distinct
values in a column, number of leaf blocks in an index, etc. The more accurate the statistics,
the more efficient the results provided by Optimizer. Please refer to the Generating statistics
section in part 3 of this series for how this information is generated and how best we can
maintain it.
Statistics may be exact or estimated. Statistics generated with a COMPUTE clause analyze
all of the data in the object. This gives the optimizer accurate information to work with and
arrive at a good execution plan.
Statistics generated with an ESTIMATE clause analyze data in the object only to the extent of
the sample size mentioned. The sample size may be specified as a number of rows or a
percentage of rows that should be randomly analyzed to generate the statistics. Optionally,
block sampling may also be specified. This saves time if there are many huge tables in the
system. The guarantee of good execution plans will depend on how close the estimated values
are to the exact values. You can try out your setup at different sample sizes to arrive at an
appropriate figure, or have different estimation levels for different types of tables, but the idea
is to get as close to accuracy as feasible.
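A simple illustration of the two approaches using the ANALYZE command (the table name is assumed; statistics generation is covered in detail in part 3):

-- Exact statistics: every row is read
analyze table emp compute statistics;

-- Estimated statistics on a 20 percent random sample
analyze table emp estimate statistics sample 20 percent;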
Statistics are stored in the data dictionary, in tables owned by the SYS user. The following views
display the statistics collected for tables, columns and indexes.
For Tables
DBA_TABLES
NUM_ROWS - Number of rows.
BLOCKS - Number of used blocks.
EMPTY_BLOCKS - Number of empty blocks that have never been used.
AVG_SPACE - Average free space (in bytes) in blocks allocated to the table. All
empty and free blocks are considered for this.
CHAIN_CNT - Number of chained or migrated rows.
AVG_ROW_LEN - Average row length in bytes.
LAST_ANALYZED - Date when the table was last analyzed.
SAMPLE_SIZE - Sample size provided for ESTIMATE statistics. Equal to NUM_ROWS
if COMPUTE.
GLOBAL_STATS - For partitioned tables: YES - statistics collected for the table as a whole, NO -
statistics estimated from the partition-level statistics.
USER_STATS - Set to YES if user has explicitly set the statistics for the table.
Statistics for individual partitions of a table can be seen from DBA_TAB_PARTITIONS.
Cluster statistics is available from DBA_CLUSTERS. Object table statistics are present in
DBA_OBJECT_TABLES.
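For example, the table statistics gathered for a schema (the owner name is assumed) can be checked as follows:

select table_name, num_rows, blocks, avg_row_len, last_analyzed
from dba_tables
where owner = 'SCOTT'
order by table_name;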
For Columns
DBA_TAB_COLUMNS
NUM_DISTINCT - Number of distinct values.
LOW_VALUE - Lowest value
HIGH_VALUE - Highest value
DENSITY - Density of the column
NUM_NULLS - Number of records with null value for the concerned column.
NUM_BUCKETS - Number of buckets in histograms. Refer Histograms section.
SAMPLE_SIZE - Sample size provided for ESTIMATE statistics. Equal to total rows if
COMPUTE.
LAST_ANALYZED - Date when the table was last analyzed.
DBA_TAB_COL_STATISTICS shows similar data. Partitioned Table column statistics can be
viewed from DBA_PART_COL_STATISTICS and DBA_SUBPART_COL_STATISTICS.
For Indexes
DBA_INDEXES
BLEVEL - Depth of the index, from root to leaf.
LEAF_BLOCKS - Number of leaf blocks.
DISTINCT KEYS - Number of distinct keys.
AVG_LEAF_BLOCKS_PER_KEY - Average number of leaf blocks in which each
distinct key appears, should be 1 for unique indexes.
AVG_DATA_BLOCKS_PER_KEY - Average number of blocks in the table that are
pointed to by a distinct key.
CLUSTERING_FACTOR - A count that determines the ordering of the index. Index
is ordered if count is closer to the number of blocks, i.e., entries in single leaf tend to
point to rows in same blocks in the table. Index is randomly ordered if closer to the
number of rows, i.e., entries in single leaf are pointing to rows spread across
multiple blocks.
NUM_ROWS - Number of rows indexed.
SAMPLE_SIZE - Sample size provided for ESTIMATE statistics. Equal to NUM_ROWS
if COMPUTE.
LAST_ANALYZED - Date when the table was last analyzed.
GLOBAL_STATS - For partitioned indexes: YES - statistics collected for the index as a whole, NO
- statistics estimated from the partition-level statistics.
USER_STATS - Set to YES if user has explicitly set the statistics for the index.
PCT_DIRECT_ACCESS - For secondary indexes on IOTs, percentage of rows with
valid guess.
Statistics for individual partitions of indexes can be seen from DBA_IND_PARTITIONS and
DBA_IND_SUBPARTITIONS.
Dictionary tables related to Histogram information are discussed later.
3.2.2 Available CBO Modes
CBO has two available modes in which to run, ALL_ROWS and FIRST_ROWS.
FIRST_ROWS aims at returning the first row(s) of the statement as soon as possible. This
mode tells the optimizer to give response time prime importance. It prefers nested-loop joins.
FIRST_ROWS uses cost as well as some rules of thumb to process the first set of rows.
Examples of such rules: plans using indexes are preferred over plans using full table
scans as the access path, an ORDER BY clause can induce index access, etc.
As of release 9i, the number of rows to be returned in the first fetch can also be specified
through the optimizer mode FIRST_ROWS_n (n can be 1, 10, 100 or 1000). This can be set as per
the application requirements.
ALL_ROWS processes all rows for a given query before returning the output. It forces the
optimizer to consider minimal use of resources and best throughput. ALL_ROWS prefers
sort-merge joins.
For an OLTP system, FIRST_ROWS would be the ideal option for fast response time.
ALL_ROWS is meant for batch-processing applications. Note that a plan producing the first n
rows with the fastest response time might not be optimal if the requirement is to obtain
the entire result, so decide as per the needs of the application.
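As a hedged example, the mode can be set instance-wide through the OPTIMIZER_MODE initialization parameter, or tried out for a single session (the FIRST_ROWS_n values require release 9i):

alter session set optimizer_mode = first_rows_10;
-- or, for batch-style workloads
alter session set optimizer_mode = all_rows;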
CBO is dynamic and tunes its execution plans as the database grows in size. So do not be
taken aback if the same query that works perfectly in one database setup is behaving badly
in some other database of the same application. This would happen if the setup and
statistics differ between the two databases. To prevent such behavior, you may consider
using optimizer plan stability, which is covered later in this series.
3.2.3 Basic CBO Terms
The following terms will be used quite often when analyzing statements in CBO.
Cost
The COST computed in CBO is a unit of expense involved with each operation. The logic as
to how the cost is actually derived is not documented or made external. Moreover, this may
change across releases.
Cardinality
The number of rows in the table, or the number of distinct entries in the index. The cardinality
of a query is the number of rows it is expected to return.
Selectivity
The number of distinct values. The proportion of distinct values in an indexed column is known as
its selectivity. For example, if a table has 10000 rows and an index is created on a column
having 4000 distinct values, then the selectivity of the index is (4000/10000) * 100 = 40%.
Unique indexes on NOT NULL columns have a selectivity of 100%.
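Assuming statistics have been gathered, a rough selectivity figure for existing indexes can be derived from DBA_INDEXES (the table name is the one used in the earlier example):

select index_name, distinct_keys, num_rows,
       round((distinct_keys / num_rows) * 100, 2) "SELECTIVITY %"
from dba_indexes
where table_name = 'AM79'
and num_rows > 0;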
Transitivity
This is the process by which CBO generates additional predicates for a query. It enables the
optimizer to consider additional execution paths. For example, if predicates of the type
A=B and B=C are provided in a query, the optimizer may add an additional predicate A=C.
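A small sketch of this (the table names are illustrative):

select a.col1, b.col2
from am79 a, am80 b
where a.col1 = b.col1
and b.col1 = 10;
-- The optimizer may internally add the transitive predicate a.col1 = 10,
-- opening up an index on AM79.COL1 as an additional access path.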
Statistics
Much required information gathered for various data holding objects. This information is
vital for the CBO to decide on execution plans.
Join Methods
Oracle uses join methods such as hash join, sort-merge join and nested loops. A query may run
faster using one type of join than another. This should be evaluated for individual queries.
FTS
FTS or Full Table Scan relates to a query sequentially scanning a table from the first block to
the last allocated block. This could be very expensive for big tables and should be avoided.
Index scan
Relates to random access of a table by use of one or more indexes on the table.
3.2.4 Minimum requirement
To start using CBO the minimum requirement is to set the optimizer mode to FIRST_ROWS
or ALL_ROWS (or CHOOSE) and generate statistics for the objects. However, this will not
ensure that your system is working at its best. Please refer to part 2 (Initialization
parameters) for information regarding related initialization parameters.
Irrespective of the Optimizer mode settings, CBO is automatically invoked if one of the
following is satisfied:
If hints are used.
If a table in the query is partitioned.
If tables are set for parallel execution.
