How can you find out how many users are currently logged into
the database? How can you find their operating system id?
Answer: There are several ways. One is to look at the v$session or
v$process views. Another way is to check the current_logins
parameter in the v$sysstat view. Another, if you are on UNIX, is to run a
"ps -ef|grep oracle|wc -l" command, but this only works against a
single instance installation.
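As a sketch, counting logged-in users and finding their operating system ids might look like this, joining v$session to v$process:

```sql
-- Count sessions belonging to real users (background processes have a NULL username)
SELECT COUNT(*) FROM v$session WHERE username IS NOT NULL;

-- Show each user's OS account (OSUSER) and the server process's OS process id (SPID)
SELECT s.username, s.osuser, p.spid
FROM   v$session s, v$process p
WHERE  s.paddr = p.addr
AND    s.username IS NOT NULL;
```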
How can you tell if a tablespace has excessive fragmentation?
If a select against the dba_free_space view shows that the count of a tablespace's free extents
is greater than the count of its data files, then it is fragmented.
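That check can be sketched as a single query (a rough heuristic, not a definitive measure of fragmentation):

```sql
-- Compare free-extent count with data-file count per tablespace
SELECT fs.tablespace_name, fs.free_extents, df.file_count
FROM  (SELECT tablespace_name, COUNT(*) AS free_extents
       FROM dba_free_space GROUP BY tablespace_name) fs,
      (SELECT tablespace_name, COUNT(*) AS file_count
       FROM dba_data_files GROUP BY tablespace_name) df
WHERE fs.tablespace_name = df.tablespace_name
AND   fs.free_extents > df.file_count;
```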
To rename the database, change REUSE to SET in the create controlfile script, as shown
below:
CREATE CONTROLFILE SET DATABASE "ORCL" RESETLOGS ARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 14
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'E:\ORACLE\ORADATA\ORCL\REDO01.LOG' SIZE 100M,
GROUP 2 'E:\ORACLE\ORADATA\ORCL\REDO02.LOG' SIZE 100M,
GROUP 3 'E:\ORACLE\ORADATA\ORCL\REDO03.LOG' SIZE 100M
DATAFILE
'E:\ORACLE\ORADATA\ORCL\SYSTEM01.DBF',
'E:\ORACLE\ORADATA\ORCL\UNDOTBS01.DBF',
'E:\ORACLE\ORADATA\ORCL\EXAMPLE01.DBF',
'E:\ORACLE\ORADATA\ORCL\INDX01.DBF',
'E:\ORACLE\ORADATA\ORCL\TOOLS01.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS01.DBF',
'E:\ORACLE\ORADATA\ORCL\OEM_REPOSITORY.DBF',
'E:\ORACLE\ORADATA\ORCL\CWMLITE01.DBF',
'E:\ORACLE\ORADATA\ORCL\DRSYS01.DBF',
'E:\ORACLE\ORADATA\ORCL\ODM01.DBF',
'E:\ORACLE\ORADATA\ORCL\XDB01.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS02.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS03.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS04.DBF'
CHARACTER SET WE8MSWIN1252
;
ALTER DATABASE OPEN RESETLOGS;
Sizing extents based on reasonable expectations or history, setting pctincrease for all objects to zero from
the default of 50%, helps reduce fragmentation, as does using the same next extent size
for all objects in a tablespace. With Oracle 8.1 the RDBMS introduced new tablespace creation
parameters that enforce uniform extent sizing within a tablespace, overriding any object storage
clause specified at creation, which pretty much enforces the ideas just described.
Ver. 8.1
create tablespace x
datafile '/dev/vx/rdsk/filename' size 1024M
extent management local uniform size 512K;
There are several variations of the options available to manage a tablespace. This FAQ is not going
to attempt to cover any of these features in depth, but basically you can now create a tablespace to
be dictionary managed, which is the pre-8.1 method or to be locally managed. Locally managed
tablespaces contain a bitmap used to control extent allocation within the tablespace and eliminate
the need to acquire the single database wide ST lock to allocate or deallocate extents within the
locally managed tablespace. Locally managed tablespaces should be declared to use uniform extent
management; otherwise, they default to type autoallocate, which forbids users from specifying
extent size information. With uniform extents if a user submits an object create with an initial size
of 750K in the tablespace above the Oracle rdbms would allocate two 512k extents to hold the
object overriding the storage request but would otherwise accept the statement.
One of the nice effects of using uniform extents is that it makes predicting growth and determining
when another file needs to be added to a tablespace fairly straightforward. Just divide the total
free space in the tablespace by the extent size and round down to the integer value as there are no
unusable free extents cluttering up the tablespace. There will usually be some wasted space at the
end of the file as the extent size will rarely divide perfectly into the file size minus the header block
minus the bitmap remaining useable size, but barring using a very small uniform extent size with
humongous objects requiring the creation of multiple bitmaps this will be less than one extent.
When measured over time you can get a pretty good idea of extents per time period worth of usage.
If you are working with a pre-8.1 version of Oracle or have inherited a system upgraded from earlier
versions that use permanent tablespaces created using traditional dictionary managed space then
you can still manage by extents, but you will have to work to get the tablespace objects sized
appropriately.
Pre 8.1
create tablespace x
datafile '/dev/vx/rdsk/filename' size 1024M
default storage (initial 512k next 512k pctincrease 0);
The second create statement appears to pretty much define the tablespace the same as the uniform
extent example, but if a user submits an object create with a storage clause initial extent request
of 750K then Oracle will give them the 750K request overriding the tablespace defaults. Starting
with version 7.3 Oracle provided the minimum extent clause, which required that every extent in
the tablespace be at least the specified integer value in size or a multiple of that size. In a way this was the
forerunner of the new uniform extent option. This parameter should not be confused with the
minextents object storage clause parameter.
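A minimal sketch of the 7.3-style minimum extent clause (the file name is hypothetical):

```sql
CREATE TABLESPACE y
  DATAFILE '/u01/oradata/y01.dbf' SIZE 1024M
  MINIMUM EXTENT 512K
  DEFAULT STORAGE (INITIAL 512K NEXT 512K PCTINCREASE 0);
```

Every extent allocated in y is then rounded up to a multiple of 512K, regardless of the object storage clause.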
Here is SQL that works for version 7.3+ (due to inline view) to determine the number of next extent
allocations that can be taken within a tablespace based on the largest next extent allocation
request size for any object allocated to the tablespace:
COLUMN tablespace_name FORMAT a20 HEADING 'Tablespace Name'
COLUMN extents FORMAT 9,999,999 HEADING 'Available|Extents'
select f.tablespace_name
,sum(floor(nvl(f.bytes,0)/(s.MEXT))) extents
from
sys.dba_free_space f
,( select tablespace_name, max(next_extent) as MEXT
from
sys.dba_segments
group by tablespace_name
)s
where
f.tablespace_name = s.tablespace_name(+)
group by f.tablespace_name
/
                     Available
Tablespace Name        Extents
==================== =========
LGDATA01
LGDATA02
LGIDX01                     16
LGIDX02                     32
..........
SMON coalesces free space (extents) into larger, contiguous extents every 2 hours and
even then, only for a short period of time. SMON will not coalesce free space if a
tablespace's default storage parameter "pctincrease" is set to 0. From Oracle 7.3 one can
manually coalesce a tablespace using:
ALTER TABLESPACE <tablespace_name> COALESCE;
Before 7.3, use:
SQL> alter session set events 'immediate trace name coalesce level n';
Where 'n' is the tablespace number you get from SELECT TS#, NAME FROM SYS.TS$;
You can get status information about this process by selecting from the
SYS.DBA_FREE_SPACE_COALESCED dictionary view.
There is no single system table which contains the high water mark (HWM) for a table.
A table's HWM can be calculated using the results of the following SQL statements:
SELECT BLOCKS
FROM DBA_SEGMENTS
WHERE OWNER = UPPER('owner') AND SEGMENT_NAME = UPPER('table');
ANALYZE TABLE owner.table ESTIMATE STATISTICS;
SELECT EMPTY_BLOCKS
FROM DBA_TABLES
WHERE OWNER = UPPER('owner') AND TABLE_NAME = UPPER('table');
Thus, the table's HWM = (query result 1) - (query result 2) - 1
NOTE: You can also use the DBMS_SPACE package and calculate the
HWM = TOTAL_BLOCKS - UNUSED_BLOCKS - 1.
E.g.:
SELECT BLOCKS FROM DBA_SEGMENTS
WHERE OWNER = 'APPLSYS' AND SEGMENT_NAME = UPPER('fnd_concurrent_requests');
ANALYZE TABLE APPLSYS.FND_CONCURRENT_REQUESTS ESTIMATE STATISTICS;
SELECT EMPTY_BLOCKS FROM DBA_TABLES
WHERE OWNER = UPPER('APPLSYS') AND TABLE_NAME = UPPER('fnd_concurrent_requests');
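The DBMS_SPACE approach mentioned in the NOTE above can be sketched as follows (the owner and table names are placeholders):

```sql
SET SERVEROUTPUT ON
DECLARE
  total_blocks  NUMBER;  total_bytes  NUMBER;
  unused_blocks NUMBER;  unused_bytes NUMBER;
  luefi NUMBER; luebi NUMBER; lub NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => 'SCOTT',
    segment_name              => 'EMP',
    segment_type              => 'TABLE',
    total_blocks              => total_blocks,
    total_bytes               => total_bytes,
    unused_blocks             => unused_blocks,
    unused_bytes              => unused_bytes,
    last_used_extent_file_id  => luefi,
    last_used_extent_block_id => luebi,
    last_used_block           => lub);
  -- HWM = TOTAL_BLOCKS - UNUSED_BLOCKS - 1, as in the NOTE above
  DBMS_OUTPUT.PUT_LINE('HWM (blocks) = ' || (total_blocks - unused_blocks - 1));
END;
/
```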
Starting with Oracle 8i, DBAs can create locally managed tablespaces.
A Locally Managed TBS manages its own list of free extents in a bitmap block placed inside the
header of the first data file of the tablespace. Inside the bitmap block, each bit maps to a free
block in the tablespace. When creating a locally managed tablespace, you can specify the extent
allocation method to be used.
AUTOALLOCATE - means that the extent sizes are managed by Oracle.
Oracle will choose the optimal next size for the extents starting with 64KB. As the segments grow
and more extents are needed, Oracle will start allocating larger and larger sizes ranging from 1Mb
to eventually 64Mb extents. This might help conserve space but can lead to fragmentation. This is
usually recommended for small tables or in loosely managed systems.
UNIFORM - specifies that the extent allocation in the tablespace is in a fixed uniform size. The
extent size can be specified in M or K. The default size for UNIFORM extent allocation is 1M. Using
uniform extents usually minimizes fragmentation and leads to better overall performance.
SQL> CREATE TABLESPACE test_tablespace DATAFILE '/emc/oradata/test_tablespace1.dbf'
SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
SQL> CREATE TABLESPACE test_tablespace DATAFILE '/emc/oradata/test_tablespace1.dbf'
SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;
I usually prefer to keep large production-grade tables in UNIFORM sized tablespaces and smaller
tables or tables in unmanaged environments in AUTOALLOCATE tablespaces.
Also note: if you specify, LOCAL, you cannot specify DEFAULT STORAGE, MINIMUM EXTENT or
TEMPORARY.
Advantages of Locally Managed Tablespaces:
o Eliminates the need for recursive SQL operations against the data dictionary (UET$ and FET$ tables)
o Reduces contention on data dictionary tables (single ST enqueue)
o Eliminates the need to periodically coalesce free space (adjacent free space is tracked automatically)
Import/Export
exp and imp are the executables that allow one to make exports and imports of data objects (such as
tables). Therefore, logical backups can be made with exp. exp/imp allow one to transfer data
across databases that reside on different hardware platforms and/or on different Oracle versions.
If the data is exported on a system with a different Oracle version than the one on which it is
imported, exp must be run as the lower of the two versions. That means, if something needs to be exported from 10g
into 9i, it must be exported with 9i's exp. imp doesn't re-create an already existing table. It either
errors out or ignores the errors. In order to use exp and imp, the catexp.sql script must be run.
catexp.sql basically creates the exp_full_database and imp_full_database roles. It is found under
$ORACLE_HOME/rdbms/admin:
SQL> @?/rdbms/admin/catexp
catexp is called by catalog.sql.
Import export modes
exp/imp can be used in four modes:
Full Export
The EXP_FULL_DATABASE and IMP_FULL_DATABASE roles, respectively, are needed to perform a full
export and import. Use the full export parameter for a full export.
Tablespace
Use the tablespaces export parameter for a tablespace export.
User
This mode can be used to export and import all objects that belong to a user. Use the owner export
parameter and the fromuser import parameter for a user (owner) export-import.
Table
Specific tables (and partitions) can be exported/imported with table export mode. Use the tables
export parameter for a table export.
exp
Objects owned by SYS cannot be exported.
Prerequisites
One must have the create session privilege to be able to use exp. If objects of another user's
schema need to be exported, the EXP_FULL_DATABASE role is required.
direct
Used for a direct path export.
feedback=n
Prints a dot after each nth exported row.
flashback_scn
The exported data is consistent with the specified SCN.
flashback_time
The exported data is consistent with a SCN that approximately matches that of the specified time.
consistent
The export as a whole is made read-consistent to a single point in time.
object_consistent
Each exported object is individually read-consistent, rather than the export as a whole.
query
Restricts the exported rows by means of a where clause. The query parameter can only be used in
table export mode. For obvious reasons, it must be applicable to all exported tables.
parfile
Specifies a parfile.
NLS_LANG settings
As exp and imp are client utilities they use the NLS_LANG settings. See also nls_language.
imp
If the touser parameter is used and the export was made with FULL=YES, the users must already
exist in the target database.
Parameters
show
This parameter only shows the contents of an export file; it does not perform an import.
fromuser
This parameter is used when an import in 'user' export/import mode is made.
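For instance, a user-mode dump of SCOTT remapped into another schema might look like this (file and user names hypothetical):

```
exp system/manager owner=scott file=scott.dmp
imp system/manager fromuser=scott touser=mike file=scott.dmp
```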
Using imp/exp across different Oracle versions
If exp and imp are used to export data from an Oracle database with a different version than the
database into which it is imported, then the following rules apply:
exp must be of the lower version
imp must match the target version.
Transportable tablespaces
The parfile
A parfile (=parameter file) contains a list of export parameters
Oracle's export (exp) and import (imp) utilities are used to perform logical database backup and
recovery. When exporting, database objects are dumped to a binary file which can then be imported
into another Oracle database.
These utilities can be used to move data between different machines, databases or schema.
However, as they use a proprietary binary file format, they can only be used between Oracle
databases. One cannot export data and expect to import it into a non-Oracle database.
Various parameters are available to control what objects are exported or imported. To get a list of
available parameters, run the exp or imp utilities with the help=yes parameter.
The export/import utilities are commonly used to perform logical backups of database objects,
to move data between databases, schemas and platforms, and to upgrade databases from one
Oracle release to the next.
From Oracle 10g, users can choose between using the old imp/exp utilities, or the newly introduced
Data Pump utilities, called expdp and impdp. These new utilities introduce much needed
performance improvements, network based exports and imports, etc.
NOTE: It is generally advised not to use exports as the only means of backing up a database. Physical
backup methods (for example RMAN) are normally much quicker and support point-in-time
recovery (applying archive logs after restoring a database). Also, exp/imp is not practical
for large database environments.
Look for the "imp" and "exp" executables in your $ORACLE_HOME/bin directory. One can run them
interactively, using command line parameters, or using parameter files. Look at the imp/exp
parameters before starting. These parameters can be listed by executing the following commands:
"exp help=yes" or "imp help=yes".
The following example shows a typical export parameter file (parfile) for exporting the SCOTT schema:
BUFFER=100000
FILE=account.dmp
FULL=n
OWNER=scott
GRANTS=y
COMPRESS=y
NOTE: If you do not like command line utilities, you can import and export data with the "Schema
Manager" GUI that ships with Oracle Enterprise Manager (OEM).
From Oracle8i one can use the QUERY= export parameter to selectively unload a subset of the data
from a table.
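The original example did not survive in this copy; a hedged reconstruction of such a command (table and predicate hypothetical) is:

```
exp scott/tiger tables=emp query=\"where deptno=10\" file=emp10.dmp
```

Note the shell-escaped quotes; on Windows, or inside a parfile, the escaping rules differ.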
If you need to monitor how fast rows are imported from a running import job, try one of the
following methods:
Method 1:
select rows_processed
from   sys.v_$sqlarea
where  sql_text like 'INSERT %INTO "%'
and    command_type = 2
and    open_versions > 0;
For this to work one needs to be on Oracle 7.3 or higher (7.2 might also be OK). If the import has
more than one table, this statement will only show information about the current table being
imported.
Method 2:
Use the FEEDBACK=n import parameter. This command will tell IMP to display a dot for every N rows
imported
Oracle offers no parameter to specify a different tablespace to import data into. Objects will be
re-created in the tablespace they were originally exported from. One can alter this behaviour by
following one of these procedures:
o Revoke the user's quota on the tablespace from which the object was exported. This forces the import utility to create tables in the user's default tablespace.
o Make the tablespace to which you want to import the default tablespace for the user.
o Import the table.
Before one imports rows into already populated tables, one needs to truncate or drop these tables to
get rid of the old data. If not, the new data will be appended to the existing tables. One must
always DROP existing Sequences before re-importing. If the sequences are not dropped, they will
generate numbers inconsistent with the rest of the database.
Note: It is also advisable to drop indexes before importing to speed up the import process. Indexes
can easily be recreated after the data was successfully imported
Different versions of the import utility are upward compatible. This means that one can take an
export file created from an old export version, and import it using a later version of the import
utility. This is quite an effective way of upgrading a database from one release of Oracle to the
next.
Oracle also ships some previous catexpX.sql scripts that can be executed as user SYS enabling older
imp/exp versions to work (for backwards compatibility). For example, one can run
$ORACLE_HOME/rdbms/admin/catexp7.sql on an Oracle 8 database to allow the Oracle 7.3 exp/imp
utilities to run against an Oracle 8 database.
From Oracle8i, the export utility supports multiple output files. This feature enables large exports
to be divided into files whose sizes will not exceed any operating system limits (FILESIZE=
parameter). When importing from multi-file export you must provide the same filenames in the
same sequence in the FILE= parameter. Look at this example:
exp SCOTT/TIGER FILE=D1.dmp,D2.dmp,D3.dmp FILESIZE=500K LOG=scott.log
imp SCOTT/TIGER FILE=D1.dmp,D2.dmp,D3.dmp FILESIZE=500K LOG=scott.log
How can one improve Import/Export performance?
EXPORT:
o Set the BUFFER parameter to a high value (e.g. 2Mb)
o Set the RECORDLENGTH parameter to a high value (e.g. 64Kb)
o Use DIRECT=yes (direct mode export)
o Stop unnecessary applications to free up resources
o If you run multiple export sessions, ensure they write to different physical disks
IMPORT:
o Create an indexfile so that you can create indexes AFTER you have imported data. Do this by setting INDEXFILE to a filename and then import. No data will be imported but a file containing index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements.
o Place the file to be imported on a separate physical disk from the oracle data files
o Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init$SID.ora file
o Set the LOG_BUFFER to a big value and restart oracle.
o Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
o Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments offline (except the SYSTEM rollback segment of course). The rollback segment must be as big as your biggest table (I think?)
o Use COMMIT=N in the import parameter file if you can afford it
o Use ANALYZE=N in the import parameter file to avoid time consuming ANALYZE statements
o Remember to run the indexfile previously created
o ORA-00001: Unique constraint (...) violated - You are importing duplicate rows. Use IGNORE=NO to skip tables that already exist (imp will give an error if the object is re-created).
o ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or use parameter CONSISTENT=YES
o ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set parameter COMMIT=Y while importing
o IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.
SQL*Loader
How does one use the SQL*Loader utility?
Submitted by admin on Sat, 2004-08-07 06:10.
One can load data into an Oracle database by using the sqlldr (sqlload on some platforms) utility.
Invoke the utility without arguments to get a list of available parameters. Look at the following
example:
load data
infile 'c:\data\mydata.csv'
into table emp
fields terminated by "," optionally enclosed by '"'
( empno, empname, sal, deptno )
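Assuming the control file above is saved as loader.ctl (name hypothetical), it would be run as:

```
sqlldr scott/tiger control=loader.ctl log=loader.log
```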
Another Sample control file with in-line data formatted as fix length records. The trick is to specify
"*" as the name of the data file, and use BEGINDATA to start the data section in the control file.
load data
infile *
replace
into table departments
( dept     position (02:05) char(4),
  deptname position (08:27) char(20)
)
begindata
COSC  COMPUTER SCIENCE
ENGL  ENGLISH LITERATURE
MATH  MATHEMATICS
POLY  POLITICAL SCIENCE
SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database.
Its syntax is similar to that of the DB2 Load utility, but comes with more options. SQL*Loader
supports various load formats, selective loading, and multi-table loads.
Open the MS-Excel spreadsheet and save it as a CSV (Comma Separated Values) file. This file can
now be copied to the Oracle machine and loaded using the SQL*Loader utility.
Possible problems and workarounds:
The spreadsheet may contain cells with newline characters (ALT+ENTER). SQL*Loader expects the
entire record to be on a single line. Run the following macro to remove newline characters (Tools ->
Macro -> Visual Basic or):
Sub CleanUp()
  Dim TheCell As Range
  On Error Resume Next
  ' Strip embedded control characters (e.g. newlines) from every used cell
  For Each TheCell In ActiveSheet.UsedRange
    With TheCell
      If .HasFormula = False Then
        .Value = Application.WorksheetFunction.Clean(.Value)
      End If
    End With
  Next TheCell
End Sub
Oracle does not supply any data unload utilities. Here are some workarounds:
Using SQL*Plus
You can use SQL*Plus to select and format your data and then spool it to a file. This example spools
out a CSV (comma separated values) file that can be imported into MS-Excel:
set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1 || ',' || col2 || ',' || col3
from   tab1
where  col2 = 'XYZ';
spool off
You can also use the "set colsep ," command if you don't want to put the commas in by hand. This
saves a lot of typing:
set colsep ,
set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1, col2, col3
from   tab1
where  col2 = 'XYZ';
spool off
Using PL/SQL
PL/SQL's UTL_FILE package can also be used to unload data:
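A minimal sketch of such an unload (assumes a directory object and the classic emp table; all names are placeholders):

```sql
CREATE OR REPLACE DIRECTORY data_dir AS '/tmp';

DECLARE
  fh UTL_FILE.FILE_TYPE;
BEGIN
  -- Open the output file for writing in the DATA_DIR directory
  fh := UTL_FILE.FOPEN('DATA_DIR', 'emp.csv', 'w');
  FOR r IN (SELECT empno, ename, sal FROM emp) LOOP
    UTL_FILE.PUT_LINE(fh, r.empno || ',' || r.ename || ',' || r.sal);
  END LOOP;
  UTL_FILE.FCLOSE(fh);
END;
/
```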
Third-party programs
You might also want to investigate third party tools to help you unload data from Oracle.
LOAD DATA
INFILE *
INTO TABLE load_delimited_data
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
data1,
data2
)
BEGINDATA
11111,AAAAAAAAAA
22222,"A,B,C,D,"
NOTE: The default data type in SQL*Loader is CHAR(255). To load character fields longer than 255
characters, code the type and length in your control file. By doing this, Oracle will allocate a big
enough buffer to hold the entire column, thus eliminating potential "Field in data file exceeds
maximum length" errors. Example:
...
resume char(4000),
...
LOAD DATA
INFILE *
INTO TABLE load_positional_data
( data1 POSITION(1:5),
  data2 POSITION(6:15)
)
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB
One can skip header records or continue an interrupted load (for example if you run out of space) by
specifying the "SKIP n" keyword. "n" specifies the number of logical rows to skip. Look at this
example:
OPTIONS (SKIP 5)
LOAD DATA
INFILE *
INTO TABLE load_positional_data
(
data1 POSITION(1:5),
data2 POSITION(6:15)
)
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB
...
If you are continuing a multiple table direct path load, you may need to use the CONTINUE_LOAD
clause instead of the SKIP parameter. CONTINUE_LOAD allows you to specify a different number of
rows to skip for each of the tables you are loading.
Data can be modified as it loads into the Oracle Database. One can also populate columns with
static or derived values. However, this only applies for the conventional load path (and not for
direct path loads). Here are some examples:
LOAD DATA
INFILE *
INTO TABLE modified_data
( rec_no      "my_db_sequence.nextval",
  region      CONSTANT '31',
  time_loaded "to_char(SYSDATE, 'HH24:MI')",
  data1       POSITION(1:5)   ":data1/100",
  data2       POSITION(6:15)  "upper(:data2)",
  data3       POSITION(16:22) "to_date(:data3, 'YYMMDD')"
)
BEGINDATA
11111AAAAAAAAAA991201
22222BBBBBBBBBB990112
LOAD DATA
INFILE 'mail_orders.txt'
BADFILE 'bad_orders.txt'
APPEND
INTO TABLE mailing_list
FIELDS TERMINATED BY ","
( addr,
  city,
  state,
  zipcode,
  mailing_addr  "decode(:mailing_addr, null, :addr, :mailing_addr)",
  mailing_city  "decode(:mailing_city, null, :city, :mailing_city)",
  mailing_state
)
LOAD DATA
INFILE file1.dat
INFILE file2.dat
INFILE file3.dat
APPEND
INTO TABLE emp
( empno POSITION(1:4)  INTEGER EXTERNAL,
  ename POSITION(6:15) CHAR
)
LOAD DATA
INFILE *
INTO TABLE tab1 WHEN tab = 'tab1'
( tab
FILLER CHAR(4),
col1 INTEGER
)
INTO TABLE tab2 WHEN tab = 'tab2'
( tab
FILLER POSITION(1:4),
col1 INTEGER
)
BEGINDATA
tab1|1
tab1|2
tab2|2
tab3|3
LOAD DATA
INFILE 'mydata.dat'
REPLACE
INTO TABLE emp
WHEN empno != ' '
( empno POSITION(1:4)  INTEGER EXTERNAL,
  ename POSITION(6:15) CHAR
)
INTO TABLE proj
WHEN projno != ' '
( projno POSITION(25:27) INTEGER EXTERNAL
)
Look at this example, (01) is the first character, (30:37) are characters 30 to 37:
LOAD DATA
INFILE 'mydata.dat' BADFILE 'mydata.bad'
APPEND
INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T' and (30:37) = '20031217'
( region      CONSTANT '31',
  service_key POSITION(01:11) INTEGER EXTERNAL,
  call_b_no   POSITION(12:29) CHAR
)
NOTE: SQL*Loader does not allow the use of OR in the WHEN clause. You can only use AND as in the
example above! To workaround this problem, code multiple "INTO TABLE ... WHEN" clauses. Here is
an example:
LOAD DATA
INFILE 'mydata.dat' BADFILE 'mydata.bad'
APPEND
INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T'
( region      CONSTANT '31',
  service_key POSITION(01:11) INTEGER EXTERNAL,
  call_b_no   POSITION(12:29) CHAR
)
INTO TABLE my_selective_table
WHEN (30:37) = '20031217'
( region      CONSTANT '31',
  service_key POSITION(01:11) INTEGER EXTERNAL,
  call_b_no   POSITION(12:29) CHAR
)
One cannot use POSITION(x:y) with delimited data. Luckily, from Oracle 8i one can specify FILLER
columns. FILLER columns are used to skip columns/fields in the load file, ignoring fields that one
does not want. Look at this example:
LOAD DATA
TRUNCATE INTO TABLE T1
FIELDS TERMINATED BY ','
( field1,
field2 FILLER,
field3
)
One can create one logical record from multiple physical records using one of the following two
clauses:
o CONCATENATE - use when SQL*Loader should combine the same number of physical records together to form one logical record.
o CONTINUEIF - use if a condition indicates that multiple records should be treated as one, e.g. by having a '#' character in column 1.
One cannot, but by setting the ROWS= parameter to a large value, committing can be reduced. Make
sure you have big rollback segments ready when you use a high value for ROWS=.
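For example (control file name hypothetical), raising the commit interval from the default might look like:

```
sqlldr scott/tiger control=large_load.ctl rows=50000
```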
o A very simple but easily overlooked hint: do not have any indexes and/or constraints (primary key) on your load tables during the load process. Indexes and constraints will significantly slow down load times even with ROWS= set to a high value.
o Add the following option in the command line: DIRECT=TRUE. This will effectively bypass most of the RDBMS processing. However, there are cases when you can't use direct load. Refer to chapter 8 of the Oracle Server Utilities manual.
o Turn off database logging by specifying the UNRECOVERABLE option. This option can only be used with direct data loads.
o Run multiple load jobs concurrently.
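Combining the direct-load and UNRECOVERABLE hints, a sketch of an unlogged direct load (table and file names hypothetical), invoked as sqlldr scott/tiger control=big.ctl direct=true:

```
UNRECOVERABLE LOAD DATA
INFILE 'big.dat'
APPEND
INTO TABLE big_table
FIELDS TERMINATED BY ","
( col1, col2 )
```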
The conventional path loader essentially loads the data by using standard INSERT statements. The
direct path loader (DIRECT=TRUE) bypasses much of the logic involved with that, and loads directly
into the Oracle data files. More information about the restrictions of direct path loading can be
obtained from the Oracle Server Utilities guide.
SQL*Loader can load data from a "primary data file", SDF (Secondary Data file - for loading nested
tables and VARRAYs) or LOBFILE. The LOBFILE method provides an easy way to load documents,
photos, images and audio clips into BLOB and CLOB columns. Look at this example:
Given the following table:
CREATE TABLE image_table (
  image_id   NUMBER(5),
  file_name  VARCHAR2(30),
  image_data BLOB);
Control File:
LOAD DATA
INFILE *
INTO TABLE image_table
REPLACE
FIELDS TERMINATED BY ','
( image_id   INTEGER(5),
  file_name  CHAR(30),
  image_data LOBFILE (file_name) TERMINATED BY EOF
)
Specify the Characterset WE8EBCDIC500 for the EBCDIC data. The following example shows the
SQL*Loader Controlfile to load a fixed length EBCDIC record into the Oracle Database:
LOAD DATA
CHARACTERSET WE8EBCDIC500
INFILE data.ebc "fix 86 buffers 1024"
BADFILE 'data.bad'
DISCARDFILE 'data.dsc'
REPLACE
INTO TABLE temp_data
( field1 POSITION (1:4)   INTEGER EXTERNAL,
  field2 POSITION (5:6)   INTEGER EXTERNAL,
  field3 POSITION (7:12)  INTEGER EXTERNAL,
  field4 POSITION (13:42) CHAR,
  field5 POSITION (43:72) CHAR,
  field6 POSITION (73:73) INTEGER EXTERNAL,
  field7 POSITION (74:74) INTEGER EXTERNAL,
  field8 POSITION (75:75) INTEGER EXTERNAL,
  field9 POSITION (76:86) INTEGER EXTERNAL
)
SQL> SELECT TABLESPACE_NAME, STATUS, CONTENTS
  2  FROM USER_TABLESPACES;
TABLESPACE_NAME  STATUS   CONTENTS
---------------- -------- ---------
SYSTEM           ONLINE   PERMANENT
UNDO             ONLINE   UNDO
SYSAUX           ONLINE   PERMANENT
TEMP             ONLINE   TEMPORARY
USERS            ONLINE   PERMANENT

SQL> SELECT TABLESPACE_NAME, FILE_NAME, BYTES
  2  FROM DBA_DATA_FILES;
TABLESPACE_NAME  FILE_NAME                            BYTES
---------------- -------------------------------- ---------
USERS            \ORACLEXE\ORADATA\XE\USERS.DBF   104857600
SYSAUX           \ORACLEXE\ORADATA\XE\SYSAUX.DBF  461373440
UNDO             \ORACLEXE\ORADATA\XE\UNDO.DBF     94371840
SYSTEM           \ORACLEXE\ORADATA\XE\SYSTEM.DBF  356515840
TABLESPACE_NAME  STATUS   CONTENTS
---------------- -------- ---------
SYSTEM           ONLINE   PERMANENT
UNDO             ONLINE   UNDO
SYSAUX           ONLINE   PERMANENT
TEMP             ONLINE   TEMPORARY
USERS            ONLINE   PERMANENT
MY_SPACE         ONLINE   PERMANENT

TABLESPACE_NAME  FILE_NAME                            BYTES
---------------- -------------------------------- ---------
USERS            \ORACLEXE\ORADATA\XE\USERS.DBF   104857600
SYSAUX           \ORACLEXE\ORADATA\XE\SYSAUX.DBF  461373440
UNDO             \ORACLEXE\ORADATA\XE\UNDO.DBF     94371840
SYSTEM           \ORACLEXE\ORADATA\XE\SYSTEM.DBF  356515840
MY_SPACE         \TEMP\MY_SPACE.DBF                10485760
So one statement created two structures: a tablespace and a data file. If you check your file system
with Windows file explorer, you will see the data file located in the \temp directory. The data
file size is about 10MB. Its contents should be blank and full of \x00 at this time.
How To Rename a Tablespace?
You can easily rename a tablespace by using the ALTER TABLESPACE ... RENAME TO statement as
shown in the example below:
SQL> CREATE TABLESPACE my_space
2 DATAFILE '/temp/my_space.dbf' SIZE 10M;
Tablespace created.
SQL> ALTER TABLESPACE my_space RENAME TO your_space;
Tablespace altered.
SQL> SELECT TABLESPACE_NAME, STATUS, CONTENTS
2 FROM USER_TABLESPACES;
TABLESPACE_NAME  STATUS   CONTENTS
---------------- -------- ---------
SYSTEM           ONLINE   PERMANENT
UNDO             ONLINE   UNDO
SYSAUX           ONLINE   PERMANENT
TEMP             ONLINE   TEMPORARY
USERS            ONLINE   PERMANENT
YOUR_SPACE       ONLINE   PERMANENT
After you have created a new tablespace, you can give it to your users for them to create tables in
the new tablespace. To create a table in a specific tablespace, you need to use the TABLESPACE
clause in the CREATE TABLE statement. Here is a sample script:
SQL> connect SYSTEM/fyicenter
Connected.
TABLE_NAME       TABLESPACE_NAME
---------------- ----------------
...              MY_SPACE
EMPLOYEES        USERS
...
How To See Free Space of Each Tablespace?
One of the important DBA tasks is to watch the storage usage of all the tablespaces to make sure
there is enough free space in each tablespace for database applications to function properly. Free
space information can be monitored through the USER_FREE_SPACE view. Each record in
USER_FREE_SPACE represents an extent, a contiguous area of space, of free space in a data file of a
tablespace.
Here is SQL script example on how to see free space of a tablespace:
SQL> connect HR/fyicenter
Connected.
SQL> SELECT TABLESPACE_NAME, FILE_ID, BYTES
  2  FROM USER_FREE_SPACE;
TABLESPACE_NAME   FILE_ID      BYTES
---------------- -------- ----------
MY_SPACE                5   10354688
USERS                   4  101974016
USERS                          65536
USERS                          65536
USERS                          65536
USERS                          65536
USERS                          65536
USERS                          65536
USERS                          65536
USERS                          65536
USERS                          65536
USERS                          65536
USERS                          65536
USERS                          65536
TABLESPACE_NAME  FILE_NAME                              BYTES
---------------- ---------------------------------- ---------
USERS            C:\ORACLEXE\ORADATA\XE\USERS.DBF   104857600
SYSAUX           C:\ORACLEXE\ORADATA\XE\SYSAUX.DBF  461373440
UNDO             C:\ORACLEXE\ORADATA\XE\UNDO.DBF     94371840
SYSTEM           C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF  356515840
MY_SPACE         C:\TEMP\MY_SPACE.DBF                10485760
MY_SPACE         C:\TEMP\MY_SPACE_2.DBF               5242880

TABLESPACE_NAME   FILE_ID      BYTES
---------------- -------- ----------
MY_SPACE                     5177344
MY_SPACE                5   10354688
SQL> STARTUP;
ORACLE instance started.
Total System Global Area 100663296 bytes
Fixed Size                 1285956 bytes
Variable Size             58720444 bytes
Database Buffers          37748736 bytes
Redo Buffers               2908160 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
ORA-01110: data file 5: 'C:\TEMP\MY_SPACE.DBF'
SQL> SHUTDOWN;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
How To Remove Data Files Before Opening a Database?
Let's say you have a corrupted data file or lost a data file. Oracle can mount the database, but it
will not open the database. What you can do is set the bad data file offline before opening the
database. The tutorial exercise shows you how to set two data files offline and open the database
without them:
>sqlplus /nolog
SQL> connect SYSTEM/fyicenter AS SYSDBA
SQL> STARTUP MOUNT;
ORACLE instance started.
Total System Global Area 100663296 bytes
Fixed Size                 1285956 bytes
Variable Size             58720444 bytes
Database Buffers          37748736 bytes
Redo Buffers               2908160 bytes
Database mounted.
SQL> ALTER DATABASE DATAFILE '\temp\my_space.dbf'
2 OFFLINE DROP;
Database altered.
SQL> ALTER DATABASE DATAFILE '\temp\my_space_2.dbf'
2 OFFLINE DROP;
Database altered.
SQL> ALTER DATABASE OPEN;
Database altered.
SQL> col file_name format a36;
SQL> col tablespace_name format a16;
SQL> SELECT TABLESPACE_NAME, FILE_NAME, BYTES
2 FROM DBA_DATA_FILES;
TABLESPACE_NAME  FILE_NAME                              BYTES
---------------- ---------------------------------- ---------
USERS            C:\ORACLEXE\ORADATA\XE\USERS.DBF   104857600
SYSAUX           C:\ORACLEXE\ORADATA\XE\SYSAUX.DBF  503316480
UNDO             C:\ORACLEXE\ORADATA\XE\UNDO.DBF     94371840
SYSTEM           C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF  367001600
MY_SPACE         C:\TEMP\MY_SPACE.DBF
MY_SPACE         C:\TEMP\MY_SPACE_2.DBF
At this point, if you don't care about the data in MY_SPACE, you can drop it now with the database
opened.
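If the data is indeed expendable, a minimal sketch of the cleanup (using the MY_SPACE name from the exercise above) would be:

```sql
-- Drop the tablespace together with its (offline) data files
DROP TABLESPACE my_space INCLUDING CONTENTS AND DATAFILES;
```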
SPACE MANAGEMENT
Oracle locally managed tablespace benefits
There are a number of benefits that locally-managed tablespaces offer:
Less contention - OLTP systems benefit from fewer dictionary concurrency problems because
Oracle manages space in the tablespace itself rather than in the data dictionary. Recursive space
management calls become a thing of the past. Sites running Oracle Parallel Server (or RAC)
will also appreciate this, as "pinging" between nodes may be substantially reduced.
No high extent penalty - Objects can have nearly unlimited numbers of space extents with
apparently no performance degradation. Such a feature eliminates the problem of object extent
fragmentation outright.
Better free space management - Free space found in datafiles does not have to be coalesced
because bitmaps track free space and allocate it much more effectively than dictionary-managed
tablespaces. This benefit eliminates the problem of honeycomb fragmentation completely.
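As a sketch, a locally managed tablespace with uniform extents can be created like this (the tablespace name, file name, and sizes are illustrative):

```sql
-- Bitmap-managed free space, every extent exactly 1M
CREATE TABLESPACE app_data
  DATAFILE 'C:\ORACLEXE\ORADATA\XE\APP_DATA.DBF' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```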
(2) It also depends on the undo retention policy set by the DBA. If the undo retention time is set
very low, the undo data will be swept out too early and may cause a SNAPSHOT TOO OLD error.
When a DML operation takes a long time, its undo blocks may be reused by another DML statement,
in which case Oracle will throw the snapshot too old error. For any DML, Oracle stores the
undo image in a rollback segment. Suppose transaction A is in progress and is holding data blocks
in the undo segment. Now another transaction B starts and also needs undo blocks to keep its
undo copy; if the rollback segment is not large enough to provide them, transaction B will reuse
blocks that are still needed to retain the undo image for transaction A, and this leads to snapshot too old.
A simple mitigation is to change the commit point by changing the checkpoint frequency, and, if
possible, to increase the undo capacity by adding a datafile to the undo tablespace.
In any case, the ORA-01555 snapshot too old error is thrown by a query when it fails to get a
read-consistent image of the data it is querying.
What is read consistency?
When a long-running query (running into hours) is issued, it is possible that before the query
finishes and gives you a result, some other DML transaction changes the data that the query was
reading. This would cause the query to return inconsistent results, which is not allowed under the
ACID properties. To avoid this, Oracle has undo segments, which store the old values of changed
data blocks. So if a query hits a data block that was changed after the query started, it goes to
the undo and picks up the old value - the one that was current when the query began execution.
This is the READ CONSISTENT copy.
OK, so when does ORA-1555 occur? If the query is unable to get the old (read-consistent) copy of
the data from the undo - because it may have been overwritten - then, yes my friend, you are right:
that is when the query throws the ORA-1555 error.
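The usual remedies can be sketched as follows (the retention value and the datafile name are assumptions; size the retention against your longest-running query):

```sql
-- Keep committed undo for at least one hour (automatic undo management)
ALTER SYSTEM SET UNDO_RETENTION = 3600;

-- Give the undo tablespace more room (file name is illustrative)
ALTER TABLESPACE undo ADD DATAFILE 'C:\ORACLEXE\ORADATA\XE\UNDO02.DBF' SIZE 500M;
```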
1. Explain the difference between a hot backup and a cold backup and the benefits
associated with each.
A hot backup is basically taking a backup of the database while it is still up and running and it
must be in archive log mode. A cold backup is taking a backup of the database while it is shut
down and does not require being in archive log mode. The benefit of taking a hot backup is
that the database is still available for use while the backup is occurring and you can recover
the database to any point in time. The benefit of taking a cold backup is that it is typically
easier to administer the backup and recovery process. In addition, since you are taking cold
backups the database does not require being in archive log mode and thus there will be a slight
performance gain as the database is not cutting archive logs to disk.
2. You have just had to restore from backup and do not have any control files. How would
you go about bringing up this database?
I would create a text based backup control file, stipulating where on disk all the data files are
and then issue the recover command with the using backup control file clause.
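A hedged sketch of that flow (the trace script must first be edited into a CREATE CONTROLFILE statement listing every data file, as described in the rename procedure elsewhere in this FAQ):

```sql
-- 1. Run the edited CREATE CONTROLFILE ... RESETLOGS script.
-- 2. Then recover and open:
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
```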
3. How do you switch from an init.ora file to a spfile?
Issue the create spfile from pfile command.
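A minimal sketch (the pfile path is an assumption; restart the instance afterwards so it reads the new spfile):

```sql
CREATE SPFILE FROM PFILE='C:\oracle\admin\orcl\pfile\init.ora';
SHUTDOWN IMMEDIATE;
STARTUP;
```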
Disable the foreign key constraint to the parent, drop the table, re-create the table, enable
the foreign key constraint.
13. Explain the difference between ARCHIVELOG mode and NOARCHIVELOG mode and the
benefits and disadvantages to each.
ARCHIVELOG mode is a mode that you can put the database in for creating a backup of all
transactions that have occurred in the database so that you can recover to any point in time.
NOARCHIVELOG mode is basically the absence of ARCHIVELOG mode and has the disadvantage
of not being able to recover to any point in time. NOARCHIVELOG mode does have the
advantage of not having to write transactions to an archive log and thus increases the
performance of the database slightly.
14. What command would you use to create a backup control file?
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
15. Give the stages of instance startup to a usable state where normal users may access it.
STARTUP NOMOUNT - Instance startup
STARTUP MOUNT - The database is mounted
STARTUP OPEN - The database is opened
16. What column differentiates the V$ views to the GV$ views and how?
The INST_ID column which indicates the instance in a RAC environment the information came
from.
17. How would you go about generating an EXPLAIN plan?
Create a plan table with utlxplan.sql.
Use EXPLAIN PLAN SET STATEMENT_ID = 'tst1' INTO plan_table FOR <SQL statement>.
Look at the explain plan with utlxplp.sql or utlxpls.sql.
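The three steps can be sketched as follows (the query and the emp table are hypothetical):

```sql
-- 1. Create the plan table (once per schema)
@?/rdbms/admin/utlxplan.sql

-- 2. Explain a statement
EXPLAIN PLAN SET STATEMENT_ID = 'tst1' INTO plan_table FOR
  SELECT * FROM emp WHERE deptno = 10;

-- 3. Display the plan
@?/rdbms/admin/utlxpls.sql
```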
18. How would you go about increasing the buffer cache hit ratio?
Use the buffer cache advisory over a given workload and then query the v$db_cache_advice
view. If a change is warranted, use the ALTER SYSTEM SET DB_CACHE_SIZE
command.
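For example, a hedged look at the advisory (assumes DB_CACHE_ADVICE is ON, the workload has been running for a while, and an 8K block size):

```sql
-- Estimated physical reads at various candidate cache sizes
SELECT size_for_estimate, size_factor, estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = 8192
   AND advice_status = 'ON';
```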
19. Explain an ORA-01555 - SNAPSHOT TOO OLD.
You get this error when a query cannot build a read-consistent image because the undo it
needs has been overwritten. It can usually be solved by increasing the undo retention or
increasing the size of the rollback/undo segments. You should also look at the logic of the
application getting the error message.
20. Explain the difference between $ORACLE_HOME and $ORACLE_BASE.
ORACLE_BASE is the root directory for Oracle. ORACLE_HOME, located beneath ORACLE_BASE, is
where the Oracle products reside.
A cold backup is when the database is not running - i.e. users are not logged on - hence no
activity going on and easier to backup. This is also known as an offline backup.
In contrast, a hot backup would have to be taken if your database is mission critical, i.e. it has
to run 24 hours a day, 7 days a week. In this case you will have to perform an online or a hot
backup.
Import/export is a logical backup (it backs up only data and table structures).
A physical copy of the database files (datafiles + control files + redo log files) can itself be
divided into cold and hot backups.
Copying the physical files while the database is up is called a hot backup. This is achieved by
putting the tablespace in backup mode (it remains available for users), copying all its datafiles
using the ocopy command, and then bringing it back to its normal state:
Alter tablespace begin backup;
host ocopy
Alter tablespace end backup;
Whereas if you copy the same files when the db is down, it is called cold backup.
21. How would you determine the time zone under which a database was operating?
select DBTIMEZONE from dual;
22. Explain the use of setting GLOBAL_NAMES equal to TRUE.
Setting GLOBAL_NAMES dictates how you might connect to a database. This variable is either
TRUE or FALSE and if it is set to TRUE it enforces database links to have the same name as the
remote database to which they are linking.
23. What command would you use to encrypt a PL/SQL application?
WRAP
24. Name three advisory statistics you can collect.
Buffer Cache Advice, Segment Level Statistics, & Timed Statistics.
25. Where in the Oracle directory tree structure are audit traces placed?
On UNIX they are placed in $ORACLE_HOME/rdbms/audit; on Windows they go to the Event Viewer.
28. Explain materialized views and how they are used.
Materialized views are objects that are reduced sets of information that have been
summarized, grouped, or aggregated from base tables. They are typically used in data
warehouse or decision support systems.
29. When a user process fails, what background process cleans up after it?
PMON
30. What background process refreshes materialized views?
The Job Queue Processes.
31. How would you determine what sessions are connected and what resources they are
waiting for?
Use of V$SESSION and V$SESSION_WAIT
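A hedged sketch joining the two views (column names as in Oracle 9i and later):

```sql
-- What each real user session is currently waiting on
SELECT s.sid, s.username, w.event, w.state, w.seconds_in_wait
  FROM v$session s,
       v$session_wait w
 WHERE s.sid = w.sid
   AND s.username IS NOT NULL;
```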
32. Describe what redo logs are.
Redo logs are logical and physical structures that are designed to hold all the changes made to
a database and are intended to aid in the recovery of a database.
33. How would you force a log switch?
ALTER SYSTEM SWITCH LOGFILE;
34. Give two methods you could use to determine what DDL changes have been made.
You could use LogMiner or Streams.
35. What does coalescing a tablespace do?
Coalescing is only valid for dictionary-managed tablespaces and de-fragments space by
combining neighboring free extents into large single extents.
36. What is the difference between a TEMPORARY tablespace and a PERMANENT
tablespace?
A temporary tablespace is used for temporary objects such as sort structures while permanent
tablespaces are used to store those objects meant to be used as the true objects of the
database.
37. Name a tablespace automatically created when you create a database.
The SYSTEM tablespace.
38. When creating a user, what permissions must you grant to allow them to connect to the
database?
Grant the CREATE SESSION system privilege (or the CONNECT role, which includes it) to the user.
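For example (the user name is hypothetical):

```sql
GRANT CREATE SESSION TO scott;
```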
39. How do you add a data file to a tablespace?
ALTER TABLESPACE tablespace_name ADD DATAFILE 'file_name' SIZE size;
40. How do you resize a data file?
ALTER DATABASE DATAFILE 'filename' RESIZE 100M;
2. Locate the latest dump file in your USER_DUMP_DEST directory (show parameter
USER_DUMP_DEST) - rename it to something like dbrename.sql.
3. Edit dbrename.sql: remove all headers and comments, and change the database's name. Also
change "CREATE CONTROLFILE REUSE ..." to "CREATE CONTROLFILE SET ...".
4. Shutdown the database (use SHUTDOWN NORMAL or IMMEDIATE, don't ABORT!) and run
dbrename.sql.
5. Rename the database's global name:
ALTER DATABASE RENAME GLOBAL_NAME TO new_db_name;
performance issue anymore, unless they run into thousands and thousands where additional I/O
may be required to fetch the additional blocks where extent maps of the segment are stored.
Where can one find the high water mark for a table?
There is no single system table which contains the high water mark (HWM) for a table. A table's
HWM can be calculated using the results from the following SQL statements:
SELECT blocks
  FROM dba_segments
 WHERE owner = UPPER('&owner') AND segment_name = UPPER('&table_name');

SELECT empty_blocks
  FROM dba_tables
 WHERE owner = UPPER('&owner') AND table_name = UPPER('&table_name');

Thus, HWM = (BLOCKS from DBA_SEGMENTS) - (EMPTY_BLOCKS from DBA_TABLES) - 1. Note that
EMPTY_BLOCKS is only populated after the table has been analyzed with the ANALYZE TABLE command.
On systems prior to Oracle 8i, write a job to copy archived redo log files from the primary
database to the standby system, and apply the redo log files to the standby database (pipe it).
Remember the database is recovering and will prompt you for the next log file to apply.
Oracle 8i onwards provides an "Automated Standby Database" feature which will ship archived
log files to the remote site via NET8 and apply them to the standby database.
When one needs to activate the standby database, stop the recovery process and activate it:
ALTER DATABASE ACTIVATE STANDBY DATABASE;
Include this in your INIT.ORA file and bounce your database for it to take effect.
Thanks to Erlie Flynn
SELECT TO_CHAR(startup_time, 'DD-MON-YYYY HH24:MI:SS') started_at
  FROM sys.v_$instance;

SELECT TO_CHAR(logon_time, 'DD-MON-YYYY HH24:MI:SS') started_at
  FROM sys.v_$session
 WHERE sid = 1; /* this is pmon */
Users still running on Oracle 7 can try one of the following queries:

column STARTED format a18 head 'STARTUP TIME'

select C.INSTANCE,
       to_date(JUL.VALUE, 'J')
       || to_char(floor(SEC.VALUE/3600), '09')
       || ':'
       || substr(to_char(floor(mod(SEC.VALUE/60, 60)), '09'), 2, 2)
       || '.'
       || substr(to_char(mod(SEC.VALUE, 60), '09'), 2, 2) STARTED
  from SYS.V_$INSTANCE JUL,
       SYS.V_$INSTANCE SEC,
       SYS.V_$THREAD C
 where JUL.KEY like '%JULIAN%'
   and SEC.KEY like '%SECOND%';

select to_date(JUL.VALUE, 'J')
       || to_char(to_date(SEC.VALUE, 'SSSSS'), ' HH24:MI:SS') STARTED
  from SYS.V_$INSTANCE JUL,
       SYS.V_$INSTANCE SEC
 where JUL.KEY like '%JULIAN%'
   and SEC.KEY like '%SECOND%';

select to_char(to_date(JUL.VALUE, 'J') + (SEC.VALUE/86400), -- convert to a DATE
       'DD-MON-YY HH24:MI:SS') STARTED
  from V$INSTANCE JUL,
       V$INSTANCE SEC
 where JUL.KEY like '%JULIAN%'
   and SEC.KEY like '%SECOND%';
To get a summary of temp space per temporary tablespace:

select tablespace_name, sum(bytes_used), sum(bytes_free)
  from v$temp_space_header
 group by tablespace_name;

To see which sessions are using sort (temp) space:

select s.username, u.tablespace, u.contents, u.extents, u.blocks
  from sys.v_$session s, sys.v_$sort_usage u
 where s.saddr = u.session_addr;

select s.username, s.sid, sum(u.blocks) * to_number(vp.value) bytes_used
  from sys.v_$session s, sys.v_$sort_usage u, sys.v_$parameter vp
 where s.saddr = u.session_addr
   and vp.name = 'db_block_size'
 group by s.username, s.sid, vp.value;
Export/Import - Export data from the database to a file. See the Import/Export FAQ for more details.
Cold or Off-line Backups - Shut the database down and backup up ALL data, log, and control
files.
Hot or On-line Backups - If the database is available and in ARCHIVELOG mode, set the
tablespaces into backup mode and backup their files. Also remember to backup the control files
and archived redo log files.
RMAN Backups - While the database is off-line or on-line, use the "rman" utility to backup the
database.
It is advisable to use more than one of these methods to backup your database. For example, if
you choose to do on-line database backups, also cover yourself by doing database exports. Also
test ALL backup and recovery scenarios carefully. It is better to be safe than sorry.
Regardless of your strategy, also remember to backup all required software libraries, parameter
files, password files, etc. If your database is in ARCHIVELOG mode, you also need to backup
archived log files.
Restoring involves copying backup files from secondary storage (backup media) to disk. This can
be done to replace damaged files or to copy/move a database to a new location.
Recovery is the process of applying redo logs to the database to roll it forward. One can roll
forward until a specific point in time (before the disaster occurred), or roll forward until the last
transaction recorded in the log files.
sql> connect SYS as SYSDBA
sql> RECOVER DATABASE UNTIL TIME '2001-03-06:16:00:00' USING BACKUP
CONTROLFILE;
Sometimes Oracle takes forever to shutdown with the "immediate" option. As workaround to this
problem, shutdown using these commands:
alter system checkpoint;
shutdown abort
startup restrict
shutdown immediate
Note that if you database is in ARCHIVELOG mode, one can still use archived log files to roll
forward from an off-line backup. If you cannot take your database down for a cold (off-line)
backup at a convenient time, switch your database into ARCHIVELOG mode and perform hot
(on-line) backups.
It is better to back up tablespace by tablespace than to put all tablespaces in backup mode at
once. Backing them up separately incurs less overhead. When done, remember to back up your
control files. Look at this example:
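A hedged sketch of such a per-tablespace hot backup on Windows (the tablespace, source, and destination names are assumptions):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
HOST ocopy C:\ORACLEXE\ORADATA\XE\USERS.DBF D:\BACKUP\USERS.DBF
ALTER TABLESPACE users END BACKUP;

-- When all tablespaces are done, back up the control file as well
ALTER DATABASE BACKUP CONTROLFILE TO 'D:\BACKUP\CONTROL.BKP';
```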
NOTE: Do not run on-line backups during peak processing periods. Oracle will write complete
database blocks instead of the normal deltas to redo log files while in backup mode. This will lead
to excessive database archiving and even database freezes.
( database );
release channel t1;
}
The examples above are extremely simplistic and only useful for illustrating basic concepts. By
default Oracle uses the database controlfiles to store information about backups. Normally one
would rather setup an RMAN catalog database to store RMAN metadata in. Read the Oracle
Backup and Recovery Guide before implementing any RMAN backups.
Note: RMAN cannot write image copies directly to tape. One needs to use a third-party media
manager that integrates with RMAN to backup directly to tape. Alternatively one can backup to
disk and then manually copy the backups to tape.
NOTE1: Remember to take a baseline database backup right after enabling archivelog mode.
Without it one would not be able to recover. Also, implement an archivelog backup to prevent the
archive log directory from filling-up.
NOTE2: ARCHIVELOG mode was introduced with Oracle V6, and is essential for database point-in-time recovery. Archiving can be used in combination with on-line and off-line database
backups.
NOTE3: You may want to set the following INIT.ORA parameters when enabling ARCHIVELOG
mode: log_archive_start=TRUE, log_archive_dest=..., and log_archive_format=...
NOTE4: You can change the archive log destination of a database on-line with the ALTER SYSTEM
ARCHIVE LOG START TO 'directory'; statement. This statement is often used to switch archiving
between a set of directories.
NOTE5: When running Oracle Real Application Server (RAC), you need to shut down all nodes
before changing the database to ARCHIVELOG mode. See the RAC FAQ for more details.
format '/app/oracle/arch_backup/log_t%t_s%s_p%p'
5>
if only half of it was backed up (split blocks). Because of this, one should notice increased log
activity and archiving during on-line backups.
One can select from V$BACKUP to see which datafiles are in backup mode. This normally saves
a significant amount of database down time. See script end_backup2.sql in the script section of
this FAQ.
Thiru Vadivelu contributed the following:
From Oracle9i onwards, the following command can be used to take all of the datafiles out of
hotbackup mode:
ALTER DATABASE END BACKUP;
The following INIT.ORA parameter may be required if your current redo logs are corrupted or
blown away. Caution is advised when enabling this parameter as you might end up losing your
entire database. Please contact Oracle Support before using it.
_allow_resetlogs_corruption = true
SQL> exit;
Next, log in to rman and create the catalog schema. Prior to Oracle 8i this was done by running
the catrman.sql script.
rman catalog rman/rman
RMAN> create catalog tablespace tools;
RMAN> exit;
You can now continue by registering your databases in the catalog. Look at this example:
Resize datafiles
One can manually increase or decrease the size of a datafile from Oracle 7.2 using the
following command:
ALTER DATABASE DATAFILE 'filename2' RESIZE 100M;
Because you can change the sizes of datafiles, you can add more space to your database
without adding more datafiles. This is beneficial if you are concerned about reaching the
maximum number of datafiles allowed in your database.
Manually reducing the sizes of datafiles allows you to reclaim unused space in the
database. This is useful for correcting errors in estimations of space requirements.
Extend datafiles
Also, datafiles can be allowed to automatically extend if more space is required. Look at
the following commands:
CREATE TABLESPACE pcs_data_ts
DATAFILE 'c:\ora_apps\pcs\pcsdata1.dbf' SIZE 3M
AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED
DEFAULT STORAGE ( INITIAL 10240
NEXT 10240
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0)
ONLINE
PERMANENT;
ALTER DATABASE DATAFILE 1 AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;
For Indexes

ANALYZE INDEX orders_region_id_idx VALIDATE STRUCTURE;

After running this command, query INDEX_STATS to obtain information about the
index as shown in the following example:

SELECT blocks, pct_used, distinct_keys,
       lf_rows, del_lf_rows
  FROM index_stats;
Reorganize the index if it has a high proportion of deleted rows. For example: when the
ratio of DEL_LF_ROWS to LF_ROWS exceeds 30%.
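The 30% rule of thumb can be checked directly (the index name is hypothetical; note that INDEX_STATS only holds the row for the index most recently validated in the session):

```sql
-- Percentage of deleted leaf rows; NULLIF guards against empty indexes
SELECT name, del_lf_rows, lf_rows,
       ROUND(del_lf_rows * 100 / NULLIF(lf_rows, 0), 1) pct_deleted
  FROM index_stats;

-- If the ratio is high, rebuild the index
ALTER INDEX orders_region_id_idx REBUILD;
```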
TABLESPACE  SIZE  SID_SERIAL  USERNAME  PROGRAM
----------  ----  ----------  --------  -----------------------------
TEMP        24M   260,7       SCOTT     sqlplus@localhost.localdomain (TNS V1-V3)
On Windows systems:
Create a batch file, sss.bat, add the command to it, and place it somewhere in your PATH.
Whenever you now want to start SQL*Plus as SYSDBA, just type "sss". Much less typing for
you lazy DBAs.
Note: From Oracle 10g you don't need to put the "/AS SYSDBA" in quotes anymore.
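A minimal sketch of such a batch file (the exact sqlplus invocation is an assumption based on the note above):

```bat
@echo off
rem sss.bat - start SQL*Plus as SYSDBA
sqlplus / as sysdba
```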
DBA's often do not document the patches they install. This may lead to situations where a
feature works on machine X, but not on machine Y. This FAQ will show how you can list
and compare the patches installed within your Oracle Homes.
All patches that are installed with Oracle's OPatch Utility (Oracle's Interim Patch
Installer) can be listed by invoking the opatch command with the lsinventory option.
Here is an example:
$ cd $ORACLE_HOME/OPatch
$ opatch lsinventory
Invoking OPatch 10.2.0.1.0
Oracle interim Patch Installer version 10.2.0.1.0
Copyright (c) 2005, Oracle Corporation. All rights reserved.
...
Installed Top-level Products (1):
Oracle Database 10g                        10.2.0.1.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

OPatch succeeded.