Doc Type: Functional Specification
Subject: Performance Standards Recommendations for the Oracle Applications
Coverage:
- SQL
- Views
- PL/SQL
- Java
- Forms
- Reports
- PRO*C
- Discoverer
- Data Modeling
- Concurrent Manager Jobs
Author(s):
Contributor(s):
Creation Date:
Last Updated:
Version:
Status:
Table of Contents
1. Overview
2. Performance Standards
2.1. SQL
2.1.1. Bind Variables
2.1.2. nvl() and decode()
2.1.3. IN vs. EXISTS
2.1.4. Sharable Memory
2.1.5. Outer-Joins
2.1.6. Execution plans
2.1.7. Deadlock and Locking Order
2.1.8. General Guidelines
2.2. Views
2.2.1. Creating Views
2.2.2. Using Views
2.2.3. View Merging
2.3. PL/SQL
2.3.1. Layers of PL/SQL-Java "objects"
2.3.2. PL/SQL table usage
2.3.3. Bulk
2.3.4. Shared pool pinning
2.3.5. General PL/SQL performance guidelines
2.4. Java
2.4.1. Object Creation
2.4.2. Strings and StringBuffers
2.4.3. Coding Best Practices
2.4.4. Synchronization
2.4.5. Collections
2.4.6. Garbage Collection
2.4.7. Weak & Soft References
2.4.8. JDBC Guidelines
2.4.9. Memory Footprint
2.4.10. Reducing Database Trips
2.4.11. Deployment
2.4.12. Green Threads versus Native Threads
2.5. Forms
2.5.1. Forms Blocks
2.5.2. Use of bind variables
2.5.3. LOVs
2.5.4. Record Groups
2.5.5. Caching
2.5.6. Item Properties
2.6. Reports
2.6.1. Reports SQL
2.6.2. Initialization Values
2.6.3. Break Groups
2.6.4. Computed Columns
2.6.5. Lexical Parameters
2.6.6. Defaulting Report Parameters
Oracle Confidential
2.8. Discoverer
2.9. Materialized Views
2.10. Data Modeling
2.10.1. Data Modeling for OLTP
2.10.2. Arrange most used/accessed columns first in a new table
2.10.3. Primary Keys
2.10.4. NULL columns
2.10.5. Indexes
2.10.6. Attribute Type
2.10.7. Views
2.10.8. General Guidelines
1. Overview
The objective of this document is to present a series of performance development standards for
use in conjunction with Oracle Applications Release 11.5 and beyond. The standards presented in
this document cover the following areas: SQL, Views, PL/SQL, Java, PRO*C, Forms, Reports, and
Discoverer. We will document the relevant performance development standards for each
individual area. It is important that Applications developers adhere to the standards listed in this
document. Failure to do so often leads to performance issues and bugs that require a large redesign of a feature. Due to the nature of Applications development and the ever-evolving
technology stack, this document will continue to evolve in order to incorporate any new
performance standards.
2. Performance Standards
This section details the performance development standards for the individual
areas, such as SQL-related or view-related standards. It is assumed that the
reader of this document is fluent in these areas.
2.1. SQL
This section documents the standards pertaining to the use of SQL in Application
code. Due to the complexity, views are discussed in a separate section. Please note
that the SQL performance standards presented here apply to all clients of SQL
including Forms, PL/SQL, Java, HTML, Perl, PRO*C, Reports, Discoverer, Views, and
any other component where SQL is used.
2.1.1. Bind Variables
Bind variables allow SQL statements to be shared across repeated executions. The
use of bind variables helps prevent a SQL statement from hard parsing on every
execution only because the values supplied have changed. Bind variables help
eliminate hard parses and in certain cases help reduce the soft parse code path (i.e.
PL/SQL).
When using bind variables, you should match the bind variable types to the
database column types to which they are bound. For example, [transaction_id
= :b1]. In this case, the bind variable :b1 should be declared as a numeric data type
provided that the database column type is defined as a NUMBER. If the
transaction_id column is NUMERIC, and the PL/SQL variable, for example, is varchar,
then an implicit conversion will be needed in order to make the types consistent.
Inconsistent bind types can cause multiple child cursors to be created for the same
SQL statement and disable the use of an index on that column. Hence, it is
important that, when you use bind variables, the types and lengths match exactly
those of the respective database columns. This applies to INSERTs,
SELECTS, UPDATEs, and DELETEs; in other words any SQL statement where binds are
used. PL/SQL and Forms both perform automatic binding. For example, in Forms
when you have a SQL statement which references an item in a block such as [a.col =
:MYBLOCK.MYITEM], Forms rewrites this SQL to be [a.col = :1]. PL/SQL also does
automatic binding when a SQL statement in PL/SQL references a PL/SQL variable.
Hence, it is important that your Forms Block Items and PL/SQL variables are
consistent with the types of the database columns. All SQL statements in Oracle
Applications should use bind variables except in the following exception cases:
statements involving the use of histograms and certain types of upgrade scripts.
Dynamically generated SQL statements should be double-checked for bind variables.
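To make the cursor-sharing effect concrete, here is a small illustrative Java sketch (no database is involved, and the table name is hypothetical). It contrasts the statement text produced with literals against the text produced with a bind placeholder: only the bound form is byte-for-byte identical across executions, which is what allows the parsed cursor in the shared pool to be reused.

```java
public class BindVariableDemo {
    // Literal form: the SQL text changes with every value, so each execution
    // presents the server with a brand-new statement to hard parse.
    // (some_table is illustrative only.)
    static String withLiteral(int transactionId) {
        return "SELECT ... FROM some_table WHERE transaction_id = " + transactionId;
    }

    // Bound form: the text is constant across executions, so the parsed
    // cursor can be shared and only the value changes at execute time.
    static final String WITH_BIND =
        "SELECT ... FROM some_table WHERE transaction_id = :b1";

    public static void main(String[] args) {
        // Two executions with different values produce two distinct literal
        // texts, hence two cursors...
        System.out.println(withLiteral(101).equals(withLiteral(102)));  // false
        // ...while the bound text never varies, hence one sharable cursor.
        System.out.println(WITH_BIND.equals(WITH_BIND));                // true
    }
}
```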
2.1.1.1. Histograms
Histograms allow the optimizer to assign the correct selectivity for a column filter or
a join condition using the histogram distribution information rather than assuming a
uniform distribution. For skewed columns such as flags, statuses, or types,
histograms are needed so that the optimizer can accurately estimate the selectivity.
For example, if 95% of the rows have the column value STATUS='COMPLETE' and only
5% have the value STATUS='PENDING', the histogram allows the optimizer to assign
the correct weight. The lack of a histogram (uniform distribution) would result
in a 50% selectivity for either value. The optimizer does not currently use histograms when a bind variable is
used as a value. For this reason, literals should be used only in SQL statements that
contain filters on skewed columns for which a histogram exists and only on that
column. The remaining filters should use bind variables. In addition, the use of
literals should be restricted to a consistent set of values per execution. For example,
consider the following query:
select EI.TASK_ID, EI.BILL_RATE
from PA_EXPENDITURE_ITEMS EI
where EI.TASK_ID = :b1
and EI.COST_BURDEN_DISTRIBUTED_FLAG = 'N'
Notice in the above example that TASK_ID uses a bind variable while
COST_BURDEN_DISTRIBUTED_FLAG uses a literal ('N'). This is an accepted use of
literals.
2.1.1.2. Non-Repeatable Upgrade Scripts
Non-repeatable upgrade scripts are another exception where literals can be
used in place of bind variables. A non-repeatable upgrade script applies to a
script that is run once and only once during the lifetime of an upgrade cycle.
If the script is run more than once, or is part of a parallel upgrade script, the
script should use bind variables in order to facilitate cursor reuse.
2.1.2. nvl() and decode()
Consider the following predicate, which wraps a bind variable in nvl():
ai.invoice_num = nvl(:b1, ai.invoice_num)
Although you may think that the optimizer can use the index on AI.INVOICE_NUM
because the nvl() is on the right-hand side, it will not. Functions such as nvl() and
decode() are considered index-unsafe for the simple reason that the ability to utilize
an index depends on the bind variable value. In the example above, if the bind
variable :b1 is null, then the expression will result in the following: (ai.invoice_num =
ai.invoice_num). Obviously, in this case, the index on AI.INVOICE_NUM cannot be
used because this expression is semantically equivalent to [1=1]. The optimizer has
no way of knowing whether or not a bind variable value is supplied or if it is null.
2.1.2.1. nvl() and optimizer statistics
Another example of an nvl() construct that should not be used is as follows:
update GL_BALANCES GBAL
set PERIOD_NET_DR = :b1
where (GBAL.CODE_COMBINATION_ID, GBAL.PERIOD_NAME,
       GBAL.SET_OF_BOOKS_ID, GBAL.CURRENCY_CODE, GBAL.ACTUAL_FLAG)
   in (select CODE_COMBINATION_ID, PERIOD_NAME,
              SET_OF_BOOKS_ID, CURRENCY_CODE, ACTUAL_FLAG
       from POSTING_INTERIM)
and NVL(GBAL.TRANSLATED_FLAG,'X') <> 'R'
2.1.2.2. decode() and join-key resolution
You should also not use decode() or nvl() as a run-time join filter. This
prevents the optimizer from assigning the correct join cardinality estimates.
Doing so often leads to poor execution plans. You should join directly to the
tables, and the join keys should be explicitly provided. For example:
select ae.source_table,
       d.invoice_distribution_id,
       ap.invoice_payment_id
from ap_ae_lines_all ae,
     ap_invoice_distributions_all d,
     po_distributions_all pd,
     ap_invoice_payments_all ap
where decode(ae.source_table,'AP_INVOICE_DISTRIBUTIONS',ae.source_id,null)
      = d.invoice_distribution_id (+)
and ae.source_id = 21628
and ae.source_table = 'AP_INVOICE_DISTRIBUTIONS'
and pd.po_distribution_id(+) = d.po_distribution_id
and decode(ae.source_table,'AP_INVOICE_PAYMENTS',ae.source_id,null)
      = ap.invoice_payment_id (+)
In the above example, the join between the table AP_AE_LINES_ALL and
AP_INVOICE_PAYMENTS_ALL depends on the runtime value of the
AE.SOURCE_TABLE column. Hence, the optimizer will not be able to
accurately estimate the join cardinality between these two tables at plan
generation time. The optimizer will use internal defaults, and it may result in
a sub-optimal plan.
2.1.2.3.
Another common misuse of nvl() is the negation case whereby you want to
retrieve rows given a certain criteria. Consider the following query:
select max(poll2.creation_date)
from po_line_locations_archive poll2,
     po_headers_archive poh,
     po_lines_archive pol1
where pol1.po_line_id = poll2.po_line_id
and poh.po_header_id = pol1.po_header_id
and NVL(POL1.LATEST_EXTERNAL_FLAG,'N') = 'Y'
and pol1.item_id = :b1
and POH.TYPE_LOOKUP_CODE IN ('STANDARD','PLANNED','BLANKET')
and POLL2.SHIPMENT_TYPE != 'PRICE BREAK'
and NVL(POLL2.LATEST_EXTERNAL_FLAG,'N') = 'Y'
In the previous example, the predicate [NVL(POH.LATEST_EXTERNAL_FLAG,'N') = 'Y']
can be semantically rewritten as [POH.LATEST_EXTERNAL_FLAG = 'Y']. This avoids
the nvl() construct and the unnecessary overhead of invoking the nvl() SQL
function. Do not use nvl() on a column when you are after the non-null rows and
the predicate is an equality predicate.
2.1.3. IN vs. EXISTS
Explain Plan (EXISTS version):
DELETE STATEMENT Cost=2070, Rows=30196
DELETE MTL_SUPPLY Cost=, Rows=
FILTER Cost=, Rows=
TABLE ACCESS FULL MTL_SUPPLY Cost=2070, Rows=30196
FILTER Cost=, Rows=
TABLE ACCESS BY INDEX ROWID PO_REQUISITION_LINES_ALL Cost=3, Rows=1
INDEX UNIQUE SCAN PO_REQUISITION_LINES_U1 Cost=2, Rows=1
Below is the execution plan for the preceding statement, rewritten with EXISTS
replaced by IN. Note that the subquery is not correlated when using the IN
clause.
Explain Plan (IN version):
DELETE STATEMENT Cost=8, Rows=1
DELETE MTL_SUPPLY Cost=, Rows=
NESTED LOOPS Cost=8, Rows=1
TABLE ACCESS BY INDEX ROWID PO_REQUISITION_LINES_ALL Cost=5, Rows=1
INDEX RANGE SCAN PO_REQUISITION_LINES_U2 Cost=3, Rows=1
TABLE ACCESS BY INDEX ROWID MTL_SUPPLY Cost=3, Rows=62439
INDEX RANGE SCAN MTL_SUPPLY_N1 Cost=2, Rows=62439
2.1.4. Sharable Memory
SQL statements that consume a large amount of sharable memory place a large
burden on the shared pool. The larger the SQL statement, the more memory
allocations and latch gets will be needed in order to build a sharable cursor in the
cursor cache. SQL statements that require a large amount of memory (i.e. several
megabytes) pose a scalability problem since this limits the amount of sharable
cursors that can be active in the shared pool. Suppose for example, that a query Q1
against a view V1 consumes 1.5 MB of sharable memory. Suppose that query Q2 is a
slight variant of Q1 in that it specifies an additional or a different filter. This
results in 3 MB of shared memory allocated for only two cursors. Shared pool
operations are also slightly more expensive in an OPS/RAC environment due to the
need to acquire global cache locks. Hence, it is important that Apps SQL statements
are kept to a reasonable minimum in terms of the sharable memory required. Apps
SQL statements should not exceed 1 MB in terms of the amount of sharable memory
required for the cursor for any particular SQL statement. The amount of sharable
memory for a SQL statement can be measured by querying the V$SQL view and
examining the SHARABLE_MEM column. The following is an example of a query that
reports the amount of sharable memory consumed for a given SQL statement:
select sql_text, sharable_mem
from v$sql
where sql_text like '%select ae.source_table%'
SQL_TEXT                                                   SHARABLE_MEM (bytes)
select ae.source_table, d.invoice_distribution_id, ap.invoice_payment_id
from ap_ae_lines_all ae,
     ap_invoice_distributions_all d,
     po_distributions_all pd,
     ap_invoice_payments_all ap
where decode(ae.source_table,'AP_INVOICE_DISTRIBUTIONS',ae.source_id,null)
      = d.invoice_distribution_id (+)
and ae.source_id = :b1
and ae.source_table = 'AP_INVOICE_DISTRIBUTIONS'
and pd.po_distribution_id(+) = d.po_distribution_id
and decode(ae.source_table,'AP_INVOICE_PAYMENTS',ae.source_id,null)
      = ap.invoice_payment_id (+)
In the above example, the SQL statement consumed almost 40K of shared memory
for the cursor. It is important that you monitor the amount of sharable memory
consumed by your SQL statements.
2.1.5. Outer-Joins
Tables that are outer-joined prevent the optimizer from choosing them as driving
tables. This limits the degree of optimization in terms of the join permutations
the optimizer can consider. Do not outer-join to a table unless it is absolutely
needed. You should consider using default values in the base tables so as to avoid
an outer-join. Outer-joins are typically needed when there is no corresponding match
or the outer-row key does not exist in the inner table. NEVER outer-join to a
view. This typically results in a non-mergable view execution plan
with a full table scan on the adjoining table. If you need outer-join semantics, rewrite
the SQL to outer-join to the required base tables that make up the view.
2.1.6. Execution plans
As a developer, you are responsible for generating and evaluating the execution
plans for every SQL statement you check in. Do not make assumptions about the
execution plans: generate an execution plan and review it to ensure that it is
optimal. Things that you should highlight from the plan are the driving table
and driving index, non-mergable views, full table scans, non-selective indexes,
and the join methods.
The following execution plan example illustrates a full table scan on both the
SO_LINES and SO_HEADERS tables. The optimizer also chose a hash join as the join
method, which is typically the join method of choice when the estimated join
cardinality between the tables is high.
PLAN TABLE:
Operation                          Name          Rows    Bytes   Cost
SELECT STATEMENT                                 1K      287K    568
 COUNT STOPKEY                                   1K      287K    568
  VIEW                                           1K      66M     568
   FILTER                                        1K      66M     568
    SORT GROUP BY
     SORT GROUP BY
      HASH JOIN                                  194K    66M     337
       HASH JOIN                                 15K     3M      253
        TABLE ACCESS FULL          SO_LINES      15K     1M      236
        TABLE ACCESS FULL          SO_HEADERS    1M      128M    16
You should also generate a SQL trace file and use tkprof to format the output of the
SQL trace file. You should examine the elapsed times, disk reads, and buffer gets.
For a single execution, a high number of buffer gets typically points to an inefficient
SQL statement. A high number of disk reads can be even worse than a high number
of buffer gets since disk reads will be more expensive than reads from the buffer
cache. This usually indicates that a full table scan on a large table is
occurring or that a large join between two tables is occurring (sort merge or
hash). Also check the sharable memory size for SQL statements: online queries
should not exceed 200KB, and SQL for batch jobs or complex reports should not
exceed 1MB.
call     count      cpu    elapsed       disk      query    current       rows
-------  -----  -------  ---------  ---------  ---------  ---------  ---------
Parse               0.11       0.17          0          0          0          0
Execute             1.72      13.91          0          0         18          0
Fetch    31614    172.14    1257.83      37558    1771379       6639     189682
-------  -----  -------  ---------  ---------  ---------  ---------  ---------
total    31617    173.97    1271.91      37558    1771379       6657     189682
   Rows  Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT
         UNION-ALL
          FILTER
 189682    NESTED LOOPS
 189682     NESTED LOOPS
 189682      NESTED LOOPS
 189682       NESTED LOOPS
      0        TABLE ACCESS (BY INDEX ROWID) OF MTL_PARAMETERS
      0         INDEX: ANALYZED (UNIQUE SCAN) OF MTL_PARAMETERS_U1 (UNIQUE)
               TABLE ACCESS (BY INDEX ROWID) OF MTL_SYSTEM_ITEMS_B
For more information on how to interpret execution plans, see Chapter 9, "Using
Explain Plan," in the Oracle9i Database Performance Guide and Reference. For
more information on performance heuristics and join methods, see the SQL
Repository documentation.
2.1.7. Deadlock and Locking Order
SQL statements that lock rows should be analyzed carefully to ensure that
application deadlock and lock-ordering issues are avoided. The Oracle database
raises an error (ORA-60) when an application deadlock occurs; however, it does
not resolve the deadlock. The application must be designed in such a way that
these scenarios do not occur. Consider the following cursor that attempts to
lock qualifying rows:
CURSOR lock_departure(x_dep_id NUMBER) IS
SELECT DEP.STATUS_CODE,
       DEL.STATUS_CODE,
       LD.LINE_DETAIL_ID,
       PLD.PICKING_LINE_DETAIL_ID
FROM WSH_DEPARTURES DEP,
     WSH_DELIVERIES DEL,
     SO_LINE_DETAILS LD,
     SO_PICKING_LINE_DETAILS PLD
WHERE DEP.DEPARTURE_ID = x_dep_id
AND DEL.ACTUAL_DEPARTURE_ID(+) = DEP.DEPARTURE_ID
AND LD.DEPARTURE_ID(+) = DEP.DEPARTURE_ID
AND PLD.DEPARTURE_ID(+) = DEP.DEPARTURE_ID
FOR UPDATE;
The problem with this query is that the locking order is largely dependent on the
execution plan and the row source order. For example, it is possible that the rows in
SO_LINE_DETAILS can be locked before the rows in SO_PICKING_LINE_DETAILS. It is
also possible that the rows of SO_PICKING_LINE_DETAILS are locked before the rows
in SO_LINE_DETAILS. The locking order is based on the join order (i.e. execution
plan). If one user ran this query under the RBO, and another user ran this query
under the CBO, locking order issues could arise due to the likelihood of a plan
difference. Another problem with this cursor is that it performs non-qualified
locking via the FOR UPDATE clause. FOR UPDATE can take an optional OF clause
specifying the columns, and hence the tables, to be locked. For example, FOR
UPDATE OF DEP.STATUS_CODE means that only the rows in WSH_DEPARTURES are locked.
The solution for this query is to qualify the lock with the OF option of the FOR
UPDATE clause, or to break the query into separate cursors such that each cursor
locks a single table only. For example, the above cursor can be rewritten as follows:
CURSOR lock_departure(x_dep_id NUMBER) IS
select departure_id
from WSH_DEPARTURES
where DEPARTURE_ID = x_dep_id
FOR UPDATE NOWAIT;

CURSOR lock_deliveries(x_dep_id NUMBER) IS
select delivery_id
from WSH_DELIVERIES
where ACTUAL_DEPARTURE_ID = x_dep_id
FOR UPDATE NOWAIT;
In summary, do not code a SQL statement that performs an unqualified lock via
the FOR UPDATE clause. Either break up the SQL statement into multiple
single-table cursors or qualify the lock with the OF option of the FOR UPDATE
clause.
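The underlying principle, acquiring locks in one canonical order, can be sketched outside the database as well. The following Java sketch (the lock array and Account-style workload are hypothetical, not Oracle code) shows two threads that request the same pair of locks in opposite orders yet cannot deadlock, because both acquire them in ascending index order:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderDemo {
    // Two hypothetical "row" locks; the array index defines the canonical order.
    static final ReentrantLock[] LOCKS = { new ReentrantLock(), new ReentrantLock() };
    static int updates = 0;

    // Always acquire the lower-numbered lock first, regardless of the order
    // in which the caller asked for them.
    static void withBothLocked(int a, int b, Runnable work) {
        int first = Math.min(a, b), second = Math.max(a, b);
        LOCKS[first].lock();
        try {
            LOCKS[second].lock();
            try { work.run(); } finally { LOCKS[second].unlock(); }
        } finally { LOCKS[first].unlock(); }
    }

    // The two threads request the locks in opposite orders; because the
    // acquisition order is canonicalized, neither can block the other forever.
    static void runConcurrently() throws InterruptedException {
        Thread t1 = new Thread(() -> withBothLocked(0, 1, () -> updates++));
        Thread t2 = new Thread(() -> withBothLocked(1, 0, () -> updates++));
        t1.start(); t2.start();
        t1.join(); t2.join();
    }

    public static void main(String[] args) throws InterruptedException {
        runConcurrently();
        System.out.println(updates);  // 2: both threads completed, no deadlock
    }
}
```

Without the Math.min/Math.max canonicalization, the two threads could each hold one lock while waiting for the other, which is exactly the ORA-60 scenario described above.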
2.1.8. General Guidelines
Avoid constructing complex SQL statements that attempt to cover all possible
scenarios. Use conditional logic and break them into simpler, scalable SQL
statements.
Search screens and UIs should prevent the execution of blind queries as well as
non-selective queries. Such queries impact the whole system.
The use of functions and expressions should be avoided in conditions intended
for index access.
Use of character functions (e.g. LIKE) on number columns causes implicit type
conversion that disables index use.
Do not use Dynamic SQL or REF Cursors for frequently executed statements.
Use the DML RETURNING feature to merge SQL statements and reduce resource
consumption. Replace:
select seq.nextval into id from dual;
insert into tab (tab_id, ...) values (id, ...);
with:
insert into tab (tab_id, ...)
values (seq.nextval, ...)
returning tab_id into id;
Statements accessing more than one table should use table aliases when
referencing columns, even if the column reference is unambiguous.
For poorly performing queries, you may need to revisit the functionality or
change the code in another layer in order to tune the entire flow. Do not just
focus on the SQL statement by itself; evaluate the entire flow.
Do not use the PARALLEL hint or alter objects to PARALLEL. Parallel execution
bypasses the buffer cache and impacts execution plans.
2.2. Views
This section covers performance standards related to creation, maintenance and
optimization of Apps views. You should read through the section on SQL
performance standards before reading through this section.
2.2.1. Creating Views
When creating views, the level of view nesting should be one: views should
expand directly to base tables. Do not create views on top of views on top of
views. Avoid PL/SQL functions in view definitions, in both the WHERE clause and
the SELECT clause. A PL/SQL function in a SQL statement does not provide read
consistency and adds significant overhead to SQL execution due to the context
switch between PL/SQL and SQL for each row, although this overhead is reduced in
Oracle9i.
2.2.2. Using Views
Do not use views blindly. Transparent changes to views can severely impact the
performance of clients of the view (e.g. hr_locations, ra_phones).
Views should not be used in Reports, PL/SQL, Java, or PRO*C. Conditional logic
should be used in the code, and the code should join directly to base tables.
The use of views should be constrained to online code (i.e. Forms and
Self-Service).
Avoid queries such as select * from <view>. They can break code if columns are
added and prevent the column-elimination optimization.
Instead of joining to _VL views, join directly to the _TL or _B table and
include the NLS filter where language = USERENV('LANG').
2.2.3. View Merging
The query transformer attempts to merge the body of the view with the body of
the SQL statement. The optimizer then treats the resulting statement as a single
query, which allows it to consider more efficient join orders and index access
paths.
If the view is not merged, the query block making up the view is executed
stand-alone, and the results are joined with the parent query. The lack of view merging
can lead to an inefficient plan because joins that could reduce the view answer set
are not pushed inside the view.
Example of view merging:
SELECT h.header_id, h.org_id, sold_to_org.customer_number
FROM oe_sold_to_orgs_v sold_to_org,
oe_order_headers h
WHERE h.order_number = :b1 AND
h.sold_to_org_id = sold_to_org.organization_id
Explain Plan (view merged):
SELECT STATEMENT Cost=7, Rows=1
NESTED LOOPS Cost=7, Rows=1
NESTED LOOPS Cost=6, Rows=1
TABLE ACCESS BY INDEX ROWID OE_ORDER_HEADERS_ALL Cost=4, Rows=1
INDEX RANGE SCAN OE_ORDER_HEADERS_U2 Cost=3, Rows=1
TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS Cost=2, Rows=391250
INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 Cost=1, Rows=391250
INDEX UNIQUE SCAN HZ_PARTIES_U1 Cost=1, Rows=5103330
Explain Plan (view not merged):
SELECT STATEMENT Cost=15365, Rows=1
NESTED LOOPS OUTER Cost=15365, Rows=1
TABLE ACCESS BY INDEX ROWID OE_ORDER_HEADERS_ALL Cost=4, Rows=1
INDEX RANGE SCAN OE_ORDER_HEADERS_U2 Cost=3, Rows=1
VIEW OE_SOLD_TO_ORGS_V Cost=, Rows=391250
MERGE JOIN Cost=15361, Rows=391250
INDEX FULL SCAN HZ_PARTIES_U1 Cost=11093, Rows=5103330
SORT JOIN Cost=4268, Rows=391250
TABLE ACCESS FULL HZ_CUST_ACCOUNTS Cost=1719, Rows=391250
2.3. PL/SQL
This section covers performance related standards for PL/SQL.
2.3.1. Layers of PL/SQL-Java "objects"
2.3.2. PL/SQL table usage
PL/SQL tables should not be iteratively searched when there are hundreds or
thousands of records. To implement PL/SQL table searches, one of the following
methods is recommended:
For non-numeric keys or numeric keys that exceed the BINARY_INTEGER size
boundary (e.g. IDs derived from SYS_GUID()), consider implementing hash
lookup searches using the DBMS_UTILITY.GET_HASH_VALUE function. Care
must be taken to resolve hash collisions (two different values that yield the
same hash value). Alternatively, such searches can be replaced by global
temporary tables indexed by the search key. Recent tests show that this
approach can give almost a two-order-of-magnitude improvement over linear
searches of PL/SQL tables that exceed 1000 entries.
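To illustrate the difference between a linear probe and a hash lookup, here is a small Java sketch (the GUID-style keys and row values are made up). The HashMap plays the role of the hash-indexed lookup structure described above, returning the same answer as the linear scan with O(1) expected work instead of O(n):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashLookupDemo {
    // Linear probe over (key, value) pairs: O(n) comparisons per lookup,
    // the pattern to avoid for large collections.
    static String linearFind(List<String[]> rows, String key) {
        for (String[] kv : rows)
            if (kv[0].equals(key)) return kv[1];
        return null;
    }

    public static void main(String[] args) {
        List<String[]> rows = new ArrayList<>();
        Map<String, String> index = new HashMap<>();  // hash-indexed copy
        for (int i = 0; i < 5000; i++) {
            rows.add(new String[] { "GUID-" + i, "row-" + i });
            index.put("GUID-" + i, "row-" + i);
        }
        // Both lookups return the same row; only the amount of work differs.
        System.out.println(linearFind(rows, "GUID-4321"));  // row-4321
        System.out.println(index.get("GUID-4321"));          // row-4321
    }
}
```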
2.3.3. Bulk
The PL/SQL engine executes procedural statements but sends SQL statements to the
SQL engine, which executes them and, in some cases, returns data to the PL/SQL
engine. Too many context switches between the PL/SQL and SQL engines can harm
performance. This can happen when a loop executes a separate SQL statement for
each element of a collection, specifying the collection element as a bind
variable.
A DML statement can transfer all the elements of a collection in a single operation, a
process known as bulk binding. If the collection has x elements, using bulk binding
you can perform the equivalent of x SELECT, INSERT, UPDATE, or DELETE statements
using a single operation. This technique improves performance by minimizing the
number of context switches between the PL/SQL and SQL engines. With bulk binds,
entire collections, not just individual elements, are passed back and forth.
To do bulk binds with INSERT, UPDATE, and DELETE statements, you enclose the SQL
statement within a PL/SQL FORALL statement.
Example:
DECLARE
  TYPE NumList IS VARRAY(15) OF NUMBER;
  lines NumList := NumList();
BEGIN
  /* Populate varray */
  ...
  FORALL j IN 1..lines.COUNT
    UPDATE RLM_SCHEDULE_LINES SET PROCESS_STATUS = p_status
    WHERE line_id = lines(j);
END;
To do bulk binds with SELECT statements, use the BULK COLLECT INTO clause in
place of the plain INTO clause. If you are using cursors, you can still use bulk
processing by including the BULK COLLECT clause in the FETCH statement.
Example of Retrieving Query Results into Collections with the BULK COLLECT
Clause:
DECLARE
  TYPE ModelRecTab IS TABLE OF OE_ORDER_LINES%ROWTYPE;
  model_recs ModelRecTab;
BEGIN
  SELECT * BULK COLLECT INTO model_recs
  FROM OE_ORDER_LINES
  WHERE item_type_code = 'MODEL'
  AND ...;
END;

BEGIN
  OPEN c1;
  FETCH c1 BULK COLLECT INTO model_recs;
END;
2.3.4. Shared pool pinning
2.3.5. General PL/SQL performance guidelines
Replace:
select <value> into var from dual;
with:
var := <value>;
A SQL reference to DUAL requires 5 buffer gets just for the DUAL scan.
Take the effort to write "base table" SQL only; don't use complex views, as
these APIs will be heavily used. Avoid referencing complex views in PL/SQL APIs.
Replace:
select ... from tab
where ...
with:
IF l_col1 is not null THEN
  select ... from tab where col1 = l_col1;
ELSIF l_col2 is not null THEN
  select ... from tab where col2 = l_col2;
...
For dynamic SQL use "EXECUTE IMMEDIATE" rather than the DBMS_SQL
package.
Fetch and assign data at the record level rather than field by field.
2.4. Java
This section covers performance related standards for Java.
2.4.1. Object Creation
All chained constructors are automatically called when creating an object with new.
Chaining more constructors for a particular object causes extra overhead at object
creation, as does initializing instance variables more than once. Java initializes
variables to the following defaults:
0 for integer types of all lengths (byte, char, short, int, long)
0.0 for floating-point types (float, double)
false for boolean
null for object references
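A minimal Java sketch of constructor chaining (the classes are hypothetical): a single new on the most-derived class runs every constructor in the chain, so deep hierarchies pay the cost of each level at every object creation.

```java
public class ChainingDemo {
    static int constructorCalls = 0;

    static class A            { A() { constructorCalls++; } }
    static class B extends A  { B() { constructorCalls++; } }  // implicitly invokes A()
    static class C extends B  { C() { constructorCalls++; } }  // implicitly invokes B()

    public static void main(String[] args) {
        new C();  // one 'new' runs three chained constructors
        System.out.println(constructorCalls);  // 3
    }
}
```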
2.4.2. Strings and StringBuffers
Use StringBuffer rather than the String concatenation operator (+). The String
concatenation operator + involves a lot of work: a new StringBuffer is created, the two
arguments are added to it with append(), and the final result is converted back with
toString(). This increases cost in both space and time. Especially if you are appending
more than one String, consider using a StringBuffer directly instead.
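The difference can be sketched as follows (the class and method names are ours); both methods produce the same string, but the + version creates a new intermediate buffer and String on every pass:

```java
// Compares repeated String "+" concatenation with an explicit StringBuffer.
public class Concat {
    static String withPlus(String[] parts) {
        String s = "";
        for (int i = 0; i < parts.length; i++)
            s = s + parts[i];    // new buffer + toString() on each iteration
        return s;
    }

    static String withBuffer(String[] parts) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < parts.length; i++)
            sb.append(parts[i]); // reuses the one buffer
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = {"a", "b", "c"};
        System.out.println(withPlus(parts).equals(withBuffer(parts)));
    }
}
```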
2.4.3.
Declare constant variables as static final. That way they will be allocated and
initialized only once.
Avoid casting
Type-specific code is not just faster than code with type casts; it is also
cleaner and safer. Unfortunately, it is sometimes difficult to avoid casting.
Upcast operations (also called widening conversions in the Java Language
Specification) convert a subclass reference to an ancestor class reference.
This casting operation is normally automatic, since it is always safe and can
be implemented directly by the compiler.
Downcast operations (also called narrowing conversions in the Java Language
Specification) convert an ancestor class reference to a subclass reference.
This casting operation creates execution overhead, since Java requires that
the cast be checked at runtime to make sure that it is valid. If the referenced
object is not an instance of either the target type for the cast or a subclass of
that type, the attempted cast is not permitted and must throw a
java.lang.ClassCastException. Method calls on cast objects are also more
expensive.
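A small illustration of the downcast check (class and method names are ours); guarding with instanceof keeps an invalid cast from throwing java.lang.ClassCastException:

```java
// Illustrates why downcasts carry a runtime check: casting an Object that
// is not a String to String would throw ClassCastException, so the cast
// is guarded with instanceof.
public class Casts {
    static int lengthOf(Object o) {
        if (o instanceof String)          // guard before narrowing
            return ((String) o).length(); // safe downcast
        return -1;                        // not a String: no cast attempted
    }

    public static void main(String[] args) {
        System.out.println(lengthOf("abcd"));             // a String
        System.out.println(lengthOf(Integer.valueOf(7))); // not a String
    }
}
```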
Local variables are accessed faster than class members, since they are stored
on the stack rather than the heap.
Amount localAmount = MyClass.amount;
for (int x = 0; x < 10000; x++)
  total += localAmount;
Use reflection to minimize eager class loading of large classes:
if (x == 1)
  AM = new ApplicationModule();
vs.
if (x == 1)
  AM = (ApplicationModule) Class.forName("ApplicationModule").newInstance();
Compiler options:
-g:none: generates no debugging information.
-O: applies optimizations.
2.4.4.
Synchronization
Synchronization overhead:
2.4.5.
Collections
Legacy collections (like Vector and Hashtable) are synchronized, whereas the newer
collections (like ArrayList and HashMap) are unsynchronized and must be
wrapped via Collections.synchronizedList() or Collections.synchronizedMap() if
synchronization is desired. Do not use synchronized classes for thread-local
collections.
Do not use object collections for primitive data types; custom collection classes
should be used instead.
Size collections to their expected maximum size up front, to avoid frequent
reallocations and, in the case of hashtables or hashmaps, rehashing.
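A brief sketch of both points (names are illustrative): pre-size the collection for its expected element count, and add the synchronized wrapper only where the collection is actually shared:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Sizing {
    // Pre-sized list: the backing array is allocated once at the expected
    // capacity, so no reallocation occurs while filling it.
    static List<Integer> filled(int n) {
        List<Integer> list = new ArrayList<Integer>(n);
        for (int i = 0; i < n; i++)
            list.add(Integer.valueOf(i));
        // Wrapped only because this list is (hypothetically) shared
        // across threads; a thread-local list would be returned as-is.
        return Collections.synchronizedList(list);
    }

    public static void main(String[] args) {
        // Pre-sized HashMap: capacity chosen so the default 0.75 load
        // factor does not force a rehash before 1000 entries.
        Map<String, Integer> map = new HashMap<String, Integer>(1000 * 4 / 3 + 1);
        System.out.println(filled(1000).size() + " " + map.size());
    }
}
```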
Use java.util.Arrays.asList() to obtain a fixed-size List view of an array.
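A minimal illustration (the array contents are ours): Arrays.asList() wraps an existing array as a fixed-size List view without copying it, so writes through the List update the underlying array:

```java
import java.util.Arrays;
import java.util.List;

// Arrays.asList() returns a fixed-size List backed by the given array:
// no element copying, and set() writes through to the array.
public class AsList {
    public static void main(String[] args) {
        String[] names = {"AP", "AR", "GL"};
        List<String> view = Arrays.asList(names);
        view.set(0, "PO"); // updates names[0] as well
        System.out.println(names[0] + " " + view.size());
    }
}
```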
2.4.6.
Garbage Collection
Canonicalization is one way to reduce garbage collection: fewer objects mean less
to garbage-collect. Similarly, the pooling technique also tends to reduce
garbage-collection requirements, partly because you create fewer objects by reusing
them, and partly because you deallocate memory less often by holding on to the
objects you have allocated. Another technique for reducing garbage-collection
impact is to avoid using objects where they are not needed. For example, there is no
need to create an unnecessary Integer to parse a String containing an int
value, as in:
String string = "55";
int theInt = new Integer(string).intValue();
Use int theInt = Integer.parseInt(string); instead.
When a class does not provide a static method, you can sometimes use a dummy
instance to repeatedly execute instance methods, thus avoiding the need to create
extra objects.
Holding a value in a primitive data type rather than as an object can also reduce
garbage collection. For example, if you have a large number of objects, each with a
String instance variable holding a number (e.g., "1492", "1997"), it is better to make
that instance variable an int and store the numbers as ints, provided that the
conversion overheads do not swamp the benefits of holding the values in this
alternative format.
Be aware of which methods alter objects directly without making copies and which
ones return a copy of an object. For example, any String method that appears to
change the string (such as String.trim()) actually returns a new String object,
whereas a method like Vector.setSize() does not return a copy. If you do not need a
copy, use (or create) methods that do not return a copy of the object being operated on.
Avoid using generic classes that handle Object types when you are dealing with
basic data types. For example, there is no need to use Vector to store ints by
wrapping them in Integers. Instead, implement an IntVector class that holds the ints
directly.
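A minimal sketch of such a class (this is our illustration, not a shipped API): it stores primitive ints directly, so no Integer wrapper is created per element:

```java
// Growable array of primitive ints: the IntVector idea described above.
// Not production code; the API names are ours.
public class IntVector {
    private int[] data = new int[8];
    private int size = 0;

    public void add(int v) {
        if (size == data.length) { // grow by doubling when full
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = v;
    }

    public int get(int i) { return data[i]; }
    public int size()     { return size; }

    public static void main(String[] args) {
        IntVector v = new IntVector();
        for (int i = 0; i < 20; i++)
            v.add(i * i);
        System.out.println(v.size() + " " + v.get(10));
    }
}
```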
Avoid Finalization (GC perspective). Finalizers prolong the life of a non-referenced
object!
Do not call System.gc().
2.4.7.
garbage collector clears them. The garbage collector does not clear WeakReference
objects until all SoftReferences have been cleared.
SoftReferences are intended for caches, which normally take up more space and are
the first to be reclaimed when memory gets low. WeakReferences are intended for
canonical tables, which are normally smaller, and which developers prefer not to be
garbage-collected unless memory gets really low. This differentiation between the
two reference types allows cache memory to be freed up first if memory gets low;
only when there is no more cache memory to be freed does the garbage collector
start looking at canonical table memory.
Java 2 comes with a java.util.WeakHashMap class that implements a hash table with
keys held by weak references.
A WeakReference normally maintains references to elements in a table of
canonicalized objects. If memory gets low, any of the objects referred to by the table
and not referred to anywhere else in the application except by other weak
references are garbage-collected. This does not affect the canonicalization because
only those objects not referenced anywhere else are removed.
2.4.8.
JDBC Guidelines
Use getColumn(index) vs. getColumn(name).
getClob()
Defining column types up front reduces JDBC round-trips to obtain column data
types:
stmt = dbconn.createStatement();
stmt.defineColumnType(1, Types.VARCHAR, length);
ps = dbconn.prepareStatement(sql);
ps.setExecuteBatch(100);
ps.set<type>(n, value);
ps.executeUpdate();
2.4.8.1.
JDBC and Multithreading
Do not use multithreading on the same JDBC connection object. The Oracle
JDBC driver supports multithreading; however, all JDBC methods are
synchronized. This means that if two threads try to execute a statement on the
same connection object, one of them has to wait until the other finishes. This
practically defeats the purpose of allowing multithreading and only adds to the
complexity of your program. In addition, using JDBC in a multithreaded fashion
is error-prone and not well tested, and will ultimately expose your program to
runtime errors that may be hard to debug and fix.
2.4.9.
Memory Footprint
Optimize the SQL statements for the VOs. Avoid selecting unnecessary columns in
VOs, since all column data is brought into the VM. Specific VOs should be created for
each page.
Restrict the number of rows: a RowSetIterator fetches all the rows, and BC4J places
the values in its own collection.
The size of out-binds should always be explicitly defined.
pagecontext.releaseRootApplicationModule()
retainAM=N
2.4.11. Deployment
Package the application in zip/jar files: this reduces the number of files. Use a
non-compressed (stored) format.
JVM options:
-native | -green.
The native threads implementation has advantages over the default green threads
implementation. If you run Java code in a multiprocessor environment, the Solaris
kernel can schedule native threads on the parallel processors for increased
performance. By contrast, green threads exist only at the user level and are not
mapped to multiple kernel threads by the operating system, so performance gains
from parallelism cannot be realized using green threads. Also, when using native
threads, the VM can avoid some inefficient remapping of I/O system calls that is
necessary when green threads are used.
2.5. Forms
2.5.1.
Forms Blocks
Do not base blocks on switched UNION/UNION ALL views in which only one branch
can be true depending, for example, on the selection in a search block. Instead,
change the query source programmatically by using
set_block_property(..., QUERY_DATA_SOURCE_NAME, ...).
Blocks should not be based on complex views. Instead, they should be decomposed,
and the logic should be moved to post-query triggers, since round-trips from the
Forms Server to the DB are not expensive.
Example: the Cash Management Find Window is based on the complex view
CE_AVAILABLE_TRANSACTIONS_V, which consists of a 5-way UNION of 5 other views.
Typically just one branch of the view is used, based on the transaction type. Changing
the data source dynamically allows the main view to be avoided and the correct
view to be used based on the type. As a result, shared memory was reduced from 4.8MB
to 347K and parse time from 8 seconds to 1.4 seconds.
2.5.2.
2.5.3.
LOVs
LOVs: if a list of values query can return more than 100 values, the list of values must
be defined with Filter Before Display = true, so that the user can restrict the number of
values displayed. LOVs should reference base tables only, not views. They should not be
based on UNIONs; instead, the user should be prompted to select a type first.
Search screens and UIs should prevent execution of blind queries as well as
nonselective queries by enforcing that a certain number of characters (other than
%) is entered as search criteria. As a general guideline, the minimum number of
characters required is:
2 character(s) for result sets between 100 and 1000 rows
3 character(s) for result sets between 1000 and 10000 rows
4 character(s) for result sets between 10000 and 1000000 rows
Note: this does not apply to exact searches.
Do not prepend % by default to LOVs or search fields; it will disable index use.
2.5.4.
Record Groups
Record groups should not be based on complex SQL statements written to serve
numerous LOVs. Each record group should be based on a tuned SQL statement specific
to the needs of its LOV. If possible, use create_group_from_query so that the query is
not executed and the record group not populated until the record group is needed.
2.5.5.
Caching
2.5.6.
Item Properties
2.5.6.1.
Case Insensitive Query Property
Do not set the Case Insensitive Query property on items that do not need case
insensitivity or items that are always stored in fixed case. The query generator does
not presume that function-based indexes are available, and creates a query like:
SELECT ... FROM t
WHERE UPPER(X) = 'BLAKE'
OR X LIKE 'Bl%'
OR X LIKE 'bL%'
OR X LIKE 'BL%'
OR X LIKE 'bl%'
2.5.6.2.
Visible Property
Folder blocks should not have a large number of fields with VISIBLE set to TRUE. The
fields set to VISIBLE at design time should be the ones that the majority of users would
want to see.
In a form that has Stacked canvases, only the canvas that is displayed when the
form opens should have the VISIBLE property set to TRUE. All other stacked
canvases should have VISIBLE set to FALSE. If that depends on runtime parameters
or profiles, all stacked canvases should have property VISIBLE set to FALSE and it
should be programmatically turned on at runtime for the stacked canvas that needs
to be displayed. The same applies to content canvases that are not displayed at
form startup.
2.5.6.3.
Display Property
If a form has multiple tabs defined, the DISPLAY property should be set to NO for all
tabs except the one that is displayed initially when the form is loaded.
2.6. Reports
2.6.1.
Reports SQL
Large and complex SQL statements with a large number of placeholders should be
avoided, since a considerable amount of time will be taken to parse them. This is
mid-tier parsing done by the Reports engine, not DB parsing when the statement is
executed.
2.6.2.
Initialization Values
Information that does not change for the duration of the report should be cached. Use
a Before Report trigger to cache those values instead of performing unnecessary joins
in the main SQL or calling APIs on a row-by-row basis.
Example:
SELECT ASP.INCOME_TAX_REGION_FLAG
FROM AP_SYSTEM_PARAMETERS ASP;
2.6.3.
Break Groups
Limit the number of break groups. Oracle Reports appends each column of the break
group to the main SQL, which will result in a more expensive sort and may influence
the execution plan.
2.6.4.
Computed Columns
Try to place all computations into SQL statements. Avoid formula columns, since they
can be expensive if the report produces a large number of rows. Aggregations and
totals should be performed in the SQL statement, not in Reports.
2.6.5.
Lexical Parameters
Use equality parameters whenever possible; they will produce a more efficient SQL
statement and execution plan. To achieve that you can also use lexical parameters.
Example:
IF (:p_return_date_low = :p_return_date_high) THEN
  :lp_return_date := 'and h.ordered_date = :p_return_date_low';
ELSE
  :lp_return_date :=
  'and h.ordered_date between :p_return_date_low and :p_return_date_high';
END IF;
2.6.6.
Report parameters should not be defaulted to, e.g., min and max values if the user
2.7. PRO*C
2.7.1.
Array Processing
The array interface allows you to declare a local C array and populate it with values.
The array can then be used to perform an insert, update, or delete. Arrays enable
you to reduce the number of round-trips to the database. For example, if you need to
read 1000 rows from the database, instead of opening a cursor and looping through
each fetch until all rows are retrieved, use an array of 1000 elements, which will
result in only one SQL fetch call. Using arrays can considerably reduce SQL call
overhead, as well as network overhead if running in a distributed environment.
With PRO*C releases prior to release 8, you cannot use an array of structures
within PRO*C. You can, however, use arrays within a single structure to
perform batch operations. PRO*C 8.0 allows you to use an array of structures to
perform batch SQL and other object-type operations. Using an array of structures
allows for more elegant programming and also offers more flexibility in organizing
the data structures.
When using array processing, always declare the same length for each array if you
are using multiple items within a single batch operation. Declare all arrays based on
the batch size, and use the FOR :batch_size clause when you perform SQL operations
to specify explicitly the number of rows to be processed.
For SELECT and DML statements, sqlca.sqlerrd[2] reports the cumulative sum of
rows processed. In some cases, using the array interface has increased performance by
an order of magnitude. Please refer to the PRO*C/PRO*C++ Precompiler
Programmer's Guide (Using Host Arrays section).
2.7.1.1.
Selecting the Batch Size
Choosing the optimal batch size depends on many factors, such as the size of the
data set being fetched, the performance characteristics of the network between the
application and the database server, and the latency of round trips. Larger batch
sizes increase the amount of network traffic between the application and the
database server, so for networks experiencing performance bottlenecks, using a large
batch size can have a negative effect on performance. A large number of columns,
especially those with long row lengths, can cause a slow batch operation. Therefore,
choose an appropriate batch size by making the batch size parameter dynamically
configurable upon execution. As a rule of thumb, start with a small batch size (e.g.,
500) and gradually increment it until you achieve optimal performance.
Do not statically allocate arrays or structures. Always allocate them dynamically by
using a memory allocation routine such as malloc(). This enables you to free the
batch array or structure once it is no longer needed, reduces application startup
overhead by allocating only the amount of memory needed, and ensures that
sufficient memory exists. Also, use the FOR :batch_size clause when you perform
SQL operations to specify explicitly the number of rows to be processed.
2.7.2.
In PRO*C releases 2.1 and above, a client shared library, libclntsh.so, is provided so
that PRO*C applications can be linked with this shared library. It helps reduce
PRO*C executable sizes from 2-3MB to 50-100KB on average. This results not only
in disk storage savings but also in reduced compile time and execution time. Since
the executables are linked with the shared library, only functions that are called are
paged in, which increases the performance of the executable since the memory
requirements drop significantly. In order to make use of the client shared library,
relink your PRO*C and OCI programs with the libclntsh.so library (-lclntsh), or
libclntsh.sl on HP. Set the environment variable ORA_CLIENT_LIB to shared before
compiling.
2.7.3.
There are several PRO*C compiler options that can increase cursor management
performance. The HOLD_CURSOR compile option, when set to YES, causes
Oracle to hold the cursor handle that is associated with the SQL statement in the
cursor cache. This eliminates the need to reparse the SQL statement should it be
reexecuted at a later stage in the application, since the cursor can be reused.
HOLD_CURSOR set to NO causes the cursor handle to be reused following the
execution of the SQL statement and the closing of the cursor. Set HOLD_CURSOR
to YES to increase the cursor cache hit ratio.
The RELEASE_CURSOR compile option, when set to YES, releases the private SQL
area associated with the SQL statement cursor. This means that the parsed
statement is removed; if the SQL statement is reexecuted later, it must be parsed
again and a private SQL area must be allocated. When RELEASE_CURSOR = NO,
the cursor handle and private SQL area are not released unless the number of open
cursors exceeds MAXOPENCURSORS. Set RELEASE_CURSOR = NO and
HOLD_CURSOR = YES to increase the cursor cache hit ratio. Set the
MAXOPENCURSORS compile option to the maximum number of cursors used in
your application.
PRO*C 8i provides the ability to reduce network round trips to the database server
by prefetching rows of a cursor. The PREFETCH compiler option allows you to specify
the number of rows to be prefetched. The default value is 1, and the maximum
number of rows that can be prefetched is 65,535. Prefetching is primarily useful for
cursors that do not perform array processing or fetch rows in batches. However, it
can also be used with array processing. For example, if your cursor fetches in
batches of 100 rows and the PREFETCH compiler option is set to 500, then after 500
rows have been fetched by the program, another database round trip will occur to
fetch the next set of 500 rows.
2.7.4.
multiple PRO*C applications, or preferably by using the threads feature which will
help you parallelize your application by creating several threads.
The threads option allows a high degree of parallelism by establishing separate
contexts via the EXEC SQL CONTEXT ALLOCATE statement, and each thread may
use a different context to perform SQL operations. Using threads can increase the
performance of your application significantly by processing SQL statements
simultaneously using lightweight threads. Using threads is more efficient and
elegant than the fork() and exec() technique. When a fork() is issued from a PRO*C
application after a connection has been established, the child process will not be able
to make use of the connection. Although the connection to Oracle is treated as a file
descriptor (socket), and fork() duplicates all open file descriptors, the process id
(PID) of the process that establishes the connection is also used to manage
the connection. Therefore, after the fork() and exec(), you may get ORA-1012 errors
(not logged on) when SQL statements are issued from the child process. If you still
intend to use the fork() and exec() technique, the preferred method is to issue the
fork() and exec() before any connection is established, and then establish separate
connections in the parent and child processes.
2.7.5.
Object Cache
2.7.6.
DML RETURNING
2.7.7.
When you develop applications, always use make files to compile and link code. Do
not use manual compile scripts to produce executables. Use the new make files
provided with new releases, as they incorporate new functionality; if you use
manual compile scripts, code that you compiled under a previous release may not
link properly under a new release. Use the sample make files provided (proc.mk or
2.8. Discoverer
This section covers performance-related standards for Discoverer.
Discoverer is designed as an ad-hoc query tool. For true enterprise reporting, Oracle
Reports should be used. The main focus is to enable end users to access data from
the database, produce standard reports, and enable powerful analytics (ranking,
top ten, drilling capabilities). In order to provide this analytical capability,
Discoverer creates an indexed cubic cache in the middle tier. The default settings for
the cache are large, so that Discoverer takes advantage of the memory on the server.
By modifying those settings, the system resources available to Discoverer can be
controlled.
Avoid returning tens of thousands of rows. Provide parameters to reduce the number
of rows returned.
The data model should support efficient reporting from Discoverer. If all required data
is already stored in denormalized tables or in materialized views, Discoverer reports
will be simpler and perform better. It is advisable not to create Complex Folders,
since they are in essence views created on top of other views or tables (other
Complex or Base Folders). Whenever you select any information from a Complex
Folder, the whole complex query is executed. If you are looking for particular
information, try to retrieve a minimal number of rows and involve a minimal number
of tables and views in the process; that is best achieved by selecting from Base
Folders.
For more information on Oracle Discoverer, see also the Oracle white paper
Oracle9iAS Discoverer Best Practices for release 1.0.2.2.
2.10.
Data Modeling
This section covers performance-related standards for data modeling, primarily
physical design standards.
The scalability of an application greatly depends on the data model design. A
successful data model should result in an easy-to-code, well-performing application.
Tuning database parameters will not address the issue of a non-scalable model.
2.10.5. Indexes
Join columns on the table should be indexed.
2.10.5.1.
Avoid overindexing
Index maintenance is an overhead during DML operations. Create indexes only on
selective columns on which you perform searches. Example:
AP_INVOICES_N2 (VENDOR_ID)
AP_INVOICES_U2 (VENDOR_ID, INVOICE_NUM)
The AP_INVOICES_N2 index is redundant since the optimizer can use the U2 index.
2.10.5.2.
Order columns in index by occurrence rather than selectivity
Columns in multi-column indexes should be ordered by their occurrence in the
WHERE clause, not by selectivity.
That means: columns used frequently and in = conditions go at the front; columns
used less frequently and in BETWEEN, >, <, MIN, MAX, and ORDER BY go at
the end.
SO_LINES.HEADER_ID = SO_HEADERS.HEADER_ID
AND
SO_LINES.ITEM_TYPE_CODE IN ('KIT','MODEL','CLASS','STANDARD','SERVICE')
AND
SO_LINES.LINE_TYPE_CODE IN ('REGULAR','DETAIL')
AND
SO_LINES.ATO_LINE_ID IS NULL
AND
SO_LINES.OPEN_FLAG||'' = 'Y'
AND
SO_HEADERS.OPEN_FLAG = 'Y'
2.10.7. Views
When converting base tables into views, review the performance implications
thoroughly; SQL will always access all tables in a view. Creating views on top of other
views is against coding standards. The level of view nesting should be 1: views should
directly expand to base tables.
2.10.7.1.
Views should use UNION ALL rather than UNION
If UNION functionality is required in the view definition, always consider using
UNION ALL rather than UNION, since UNION forces a sort to eliminate duplicate rows.
2.10.7.2.
Views should not define columns as a concatenation of columns or literals
Views should avoid defining columns as a concatenation of columns and/or literals
unless you can be sure these columns will not be used in the WHERE clause of SQL
statements. The CBO will not be able to use an index on concatenated columns, and
that can create a performance issue.
Consider why you have this implementation and whether it could be handled in the
API instead.
2.10.7.3.
Views, in general, should not contain functions
Try to avoid defining columns using functions (e.g. DECODE, NVL) unless you can be
sure that these columns will not be used in the WHERE clause of SQL statements.
The CBO will not be able to use an index on such columns, and that can create a
performance issue.
If you have functions in view definitions, consider why you have this
implementation and whether it could be handled in APIs instead.
2.11.
Concurrent Manager Jobs
This section details the performance standards for modules that will be scheduled
and executed via the Concurrent Manager, along with tips on managing the
Concurrent Manager without negatively impacting online E-Business Suite users.
Keep statistics up to date on the FND tables to ensure optimal plans.
Constantly monitor running jobs and identify the ones that should be moved to
another concurrent manager or that are candidates for further tuning.
2.12.
TBD.
2.13.
Performance Measurement
TBD.
2.14.
Administrative Interfaces
TBD.
2.15.
Configuration Parameters
There are no new configuration parameters.