CLARIFYING THE PROBLEM

The following questions can assist in identifying and resolving problems in a timely
manner.

Has this function worked previously?


If the answer is yes, then you will need to explore areas such as patch application.

Has this problem occurred before, during the testing phase of your implementation or
an upgrade?
You could review your records at this stage to see if this has occurred previously and
how it was resolved.

Has it stopped working as a result of a stand-alone Patch or Patchset being applied?


At this point it is also a good idea to obtain the current Patchset level you have for
the particular module.

Have there been any changes to the configuration of the system - technical or
functional?
Technical, e.g. an upgrade of the Database, Application, Forms or Reports; Functional,
e.g. values in the System Options form have been modified within a particular module.

Is the problem user specific?


You will need to identify whether other users are experiencing the problem reported or if it
relates to one user only. If it is a single user, things such as personal profile options
can be reviewed. Or, is it a group of users accessing the same responsibility? Review
the differences in the responsibilities to try to pinpoint the issue.

What environment is it happening in? Is the problem reproducible?


Is this a Production, Test, User Acceptance Testing (UAT) or Development system? Is it
occurring in all, some or only one of the above environments? Try to examine the
differences between the environments if the problem exists in one but not the other.
It is also important to try to reproduce the problem in another environment if
possible. This will aid the problem solving process if the issue needs to be logged with
Oracle Support Services.

Is this standard functionality or a customisation of Oracle Applications?


If the problem is standard functionality and the initial research has been done, such
as referring to MetaLink and User Guides, Technical References and so on, and the
problem remains unsolved, then a call can be logged with Oracle Support for
assistance. If it is a customisation made to the application, then it will not be
supported by Oracle Support Services, and you may need to seek the services of a
Consulting organisation, if you cannot resolve the problem.

Reviewing your Concurrent log files


I can't stress enough the importance of reviewing the log file from your Concurrent
processes. Even if your concurrent request completed without an error status, it is
always good practice to just quickly review the log file. Processes such as the Oracle
Assets Periodic Mass Copy and the General Ledger Journal Import may complete
successfully, but that does not mean that there were no exceptions encountered.
With processes such as these, which transfer information, or with interfaces, always
review the log file and associated report output for any records that were not
successful in the process, as they will need to be addressed.

Always take note of the errors that appear in your log file, and save the file to a
secure directory, to avoid losing it should the Concurrent Manager requests get
purged.

METALINK

Before logging a call with Oracle Support Services, the questions above should be
answered and research performed on MetaLink. MetaLink is a Web support
service available from Oracle Support Services. It was developed to provide Oracle
customers with a 24x7 resource for accessing support information, including solutions,
from Oracle's knowledge repositories. The following address allows you to register
for and access MetaLink - www.oracle.com/support/

There are various facets to MetaLink; however, the focus of this paper will be on the
search function for previously reported incidents; the logging, updating and
reviewing of Technical Assistance Requests (TARs); the reviewing of Patches and Bugs;
and, briefly, the Forums available.

The Search Function


The Basic Search
Enter the search criteria in the search field (to the left of the Search button). Then
click on the Search button and your results will be displayed.

The Advanced Search

The Advanced Search page allows you to refine and further customise your query
when searching for information on the site. You can narrow your search by selecting
one or more options. These options include selecting the type of search, the order in
which information is shown, and the source or area of the site you wish to search.

To ensure that your search is effective, take some time to review the actual search
methods available. There are five methods, and generally their use will vary with the
type of problem or error that you encounter.

(Screen shots: the Advanced Search screen, followed by the information the search retrieved.)

Match All (AND++) Search


This is used for general keyword searches and is the default for both basic and
advanced searches.
Using AND++ without a comma:
EXAMPLE: Using the keywords "APP-43446 TAXCODE" will return all documents in
which both "APP-43446" and "TAXCODE" appear. The precise phrase "APP-43446
TAXCODE" does not have to appear in any of these documents. The more frequently
the keywords occur in a document, the higher its score in the hit list returned.

Using AND++ with a comma:


All words BEFORE the COMMA must exist, while each word AFTER the comma scores
higher if present.
EXAMPLE: Using the keywords "APP-43446 unable to find, taxcode for this
transaction" will return all documents in which "APP-43446 unable to find" appears,
while "taxcode for this transaction" is optional. All documents with the words
"APP-43446 unable to find" will appear in the hit list, but those documents in which
"taxcode for this transaction" also appears will score higher than those without
these optional words.
ConText Syntax Search
Use the ConText Syntax option if you want to conduct a detailed search that involves
more than keywords. This option searches the site using the ConText Query language.
The ConText Query language uses special search operators that define terms within a
query expression.
The following table provides examples for some commonly used search operators:

Symbol / Operator - Input - Result - Description

% (Wildcard Character)
Input: 10565%, te%st
Result: 10565.6; test, teeniest, tenderest
The percent symbol matches any substring.

_ (Wildcard Character)
Input: 80_86, _est
Result: 80186, 80286, 80z86; best, jest, pest, rest, test
The underscore matches any single character.

( ) (Group Character)
Input: (peter, or paul and mary)
Result: peter and/or paul and mary
The open parenthesis indicates the beginning of a group. The first close parenthesis
encountered is the end of the group. Any open parentheses encountered before the
close parenthesis indicate nested groups.

{ } (Group Character)
Input: {end-of-file}, {ORA-600}
Result: end of file, ORA-600
Performs the same function as parentheses, except it prevents the penetration of
other expansion operators.

$ (Stem Expansion)
Input: $scream
Result: scream, screaming, screamed
Expands a query to include all terms having the same stem or root word as the
specified term.

? (Fuzzy Expansion)
Input: ?feline
Result: feline, defined, filtering
Expands a query to include all terms with similar spellings to the specified term
(English-language text only).

Match any (OR) Search


Use the Match any (OR) option to find information on multiple topics or for synonyms.
The Match any (OR) option is a Boolean expression that allows you to find documents
in which any of two or more search words appear, again without needing to know
the ConText search syntax.
EXAMPLE: Using the keywords "APP-43446 Taxcode" will find all documents in which
"APP-43446" appears and all documents in which "Taxcode" appears.

Doc ID Search
Querying by document ID number allows you to retrieve a specific document from
Oracle Support Services' databases. A document ID number is a unique identification
number assigned to every file stored in Oracle's Support Knowledge Base. Document ID
numbers can be obtained from an Oracle analyst or by looking at the document ID in
hit lists from other types of MetaLink searches.

Weighted (FUZZY) Search


Use the Weighted (FUZZY) search option to find documents that contain any of the
keywords (like a Match any (OR) search) but by using a comma you can rank the
importance of the keywords.
EXAMPLE: Using the keywords, "ORA-942, AR_STATEMENT_HISTORY" would find
documents that contain either of those keywords, however it would rank "ORA-942" as
more important than ´AR_STATEMENT_HISTORYµ. Scores take into account word
frequency so it is possible that an article containing 10 occurrences of
´AR_STATEMENT_HISTORYµ would score more than an article containing both terms.

Technical Assistance Request (TAR) - Create, Update and Modify

With the correct privileges, you have the ability to create, update and modify
TARs in MetaLink. When you create an internet TAR (iTAR), it is sent to a queue in the
MetaLink system. All TARs are transferred into the internal tracking system every 15
minutes, and an analyst is then assigned to work on each one.

(Screen shots: a Global TAR search by a single Customer Support Identifier (CSI) or
multiple CSIs within the same country, and a search for a specific TAR number.)

Patches

A patch is a collection of fixed bugs assembled and tested by support analysts and
developers within Oracle. If you find a problem that has been reported before and
there is a patch available you can download the patch directly from the web site. You
can also view a list of available patches for a product and platform.

Enter the patch number that you wish to download, along with the release, product,
platform, language and type. If you know the file in which the problem occurred, enter it
in the Includes File box, e.g. the Statement Generation program file arxsgp.lpc; MetaLink
will then list all the patches that contain this file. You can choose to order the results
by Patch Number or Release Date. Then click Submit. It is very important to read the
README.txt, as there could be pre-requisite steps or other important instructions you
need to follow.

Bugs

Bugs are issues logged in the bug database, which is the primary tool
used by Development to determine the problem and devise a solution. The Bug
Database Search page allows you to specify your search either by bug number or by
other search criteria such as product version, platform, or status. You can also
determine the order of your query results by choosing how you would like the
information sorted.

If you cannot retrieve a bug, it may be due to one of the following reasons:

1) The bug may not be published.
2) Bugs have to be newer than June 1998.
3) You can only view bugs that match the products and platforms licensed under the
CSIs you have entered in the 'Show Licence' section of MetaLink.

A successful bug search will retrieve the history of the bug and provide a resolution.

Forums

A forum is an interactive area for discussions and commentaries dedicated to a
certain topic. Forums allow you to post questions and comments and receive
responses within 2 business days from a Support Analyst who has expertise in that
product area. If an answer is not obtainable within 2 days, Oracle will advise you
accordingly. Forums also give you the opportunity to reply to questions posted by
other people.

The forums are intended to address problems with a severity level of 2, 3 or 4. If the
problem is a severity 1, you must log a call with Support. Forums are threaded, so
a reply to a particular posting becomes part of the 'thread'.

Select the relevant Forum Product Groups from the drop-down list.
Click on the type of Thread: All, New, Changed.
All - threads posted since the date indicated in the posting period.
New - only those threads posted since your last visit.
Changed - only those threads with changes, i.e. new replies.

From the time drop-down list, select the number of days for which you would like to
view messages.
Optionally, click on 'Only Threads in which I participate', that is, the forum threads
in which you have posted a message or replied to a message.
Click on the Open Forum button.

For further information about MetaLink, please review the MetaLink Quick Reference
Sheet, Document ID 115465.1.

THE EXAMINE/DEBUG TOOL

The Examine utility can be very useful to view values of 'hidden' fields in a form that
relate directly to columns in the Oracle tables in your database. It is often used to
find the value of an 'Id' column, which is usually the primary key used to identify
records in a table.

The Navigation path for this is as follows -

Release 10.7 Character
\ Other Debug

Release 10.7 GUI - 11
Help > Tools > Examine

Release 11i
Help > Diagnostics > Examine

Utilities:Diagnostics
This profile option determines whether the diagnostic utility Examine requires an Oracle
database password. It also controls access to most of the other functions on the
'Tools' menu.

1) A setting of "Yes" means that you can use the Examine tool without a password when
the choice is on the menu.
2) A setting of "No" means that you must enter a password to use the Examine tool
when it appears on the menu.

Your System Administrator controls the setting of this profile option.

You may have noticed that on your Tools menu in Rel 10.7SC there is also a Debug
option available. This will effectively run your form in debug mode, displaying debug
messages as each step executes. Please note that this option does not work with
version 4.5 of Forms, but is fixed in version 6.

HOW DO I IDENTIFY THE VERSION OF A FORM, REPORT AND PROGRAM?

It is important to identify the current version of the program that is experiencing the
problem, as a bug in the version of the program you are running may be the cause of
your problem.

Form
A Form version is easily obtained via the front-end application. The navigation path is
as follows -

Release 10.7 Character
\ Help Version

Release 10.7 GUI - 11i
Help > About Oracle Applications

This will provide you with several pieces of useful information, including the Database
version, the Applications version, the Form Name and Form Version.

Report/Executable
To obtain the version of a report or concurrent executable, you need to firstly log on
to the operating system, and go to the appropriate directory -

For Reports:
Release 10.7
cd $<PRODUCT>_TOP/srw

Release 11
cd $<PRODUCT>_TOP/reports

For a Spawned Concurrent program:
cd $<PRODUCT>_TOP/bin

Issue the following command to retrieve the version number of the report program or
the libraries used by a concurrent executable -

Unix:
strings -a <file name> | grep Header

or

ident <file name> | more

NT:
find /i "Header" <file name>
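For example, to check the version of an Oracle Receivables report on Unix (a sketch
only; the file name ARXSGP.rdf and product top are illustrative):

cd $AR_TOP/srw
strings -a ARXSGP.rdf | grep Header

The output should include a $Header line containing the file's version number.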

FORM ERRORS

If the problem is occurring in a standard Oracle Form, obtain the following
information by navigating to Help > About Oracle Applications in the Forms menu -

Oracle Forms Version (example 4.5.10.10.2) - this will help determine the forms
patchset level.
Form Name (example FNDSCSGN) - this will assist in finding previous form issues.
Individual Form Version (example 11.0.30) - this is important if reporting a bug to
development is necessary.

You can try to regenerate the standard Oracle Forms (all platforms) and relink the form
executables (UNIX only).

Regenerate the forms that are causing the problems. Forms can be regenerated
either by using the adadmin installation utility or manually at the command line. If you
are on a UNIX platform, you will also need to relink the form executables.

Refer to the Installations Manuals that are appropriate for your Release and Platform.
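As a hedged illustration of the manual method (the generator executable varies by
Forms version - f45gen for Forms 4.5, f60gen for Forms 6i - and the form name and
password shown are illustrative):

f45gen module=$FND_TOP/forms/FNDSCSGN.fmb userid=apps/<password> output_file=$FND_TOP/forms/FNDSCSGN.fmx module_type=form batch=yes

The adadmin utility performs the equivalent steps for you and is generally the safer
choice.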

HOW DO I RUN A PROGRAM AND REPORT FROM THE COMMAND LINE?

Sometimes there is insufficient information in the Concurrent log file to determine
the reason why a particular concurrent process is failing or completing with errors. In
this situation, if a TAR is logged with Oracle Support Services, they will quite often
request you to run the process from the command line. This will also rule out whether
there is in fact a concurrent manager problem. To do this you need to firstly take note of
the exact parameters as listed in the parameters field when you view your concurrent
request. Then, go to the command line in your operating system and change directory
to where the program resides: cd $<PRODUCT>_TOP/<sub-directory>.

Executable Program
Enter the following command, substituting the parameters from the Concurrent
Manager, and the userid and password -

<program name> <username>/<password> 0 Y <parameters>

where <parameters> are the parameters you see for that concurrent request.

Note: when you run a concurrent program executable from the command line, you
should use a space instead of a comma to separate each parameter.

<username>/<password> is the username and password of the APPS account for your
database.

E.g. for the General Ledger Posting program, if the parameters you see in the
Concurrent request are 1, 101, 2546, you would enter the following command -

GLPPOS <username>/<password> 0 Y 1 101 2546 Y

This will execute the process and produce a debug log file in the format Lxxxxx.log,
which will reside under the relevant log directory. The information in this log file can
give Oracle Support Services a better idea of what may be causing the problem.

Reports
Report programs reside in:

$<PRODUCT>_TOP/srw (Apps 10.7)
$<PRODUCT>_TOP/reports (Apps 11)

To run a report from the command line, issue the following command -

Unix
r25run userid=<username>/<password> report=<report name>.rdf destype=<destype> desname=<output file> desformat=<format> batch=yes

NT
In NT, use the r25run32.exe executable to run a report:

r25run32 userid=apps/apps@vd11 report=<report name>.rdf destype=<destype> desname=<output file> batch=yes
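A concrete hedged example (the connect string, report name and output settings are
all illustrative):

r25run userid=apps/apps@prod report=$AR_TOP/srw/ARXSGP.rdf destype=file desname=arxsgp.out desformat=dflt batch=yes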

TRACE TOOLS

SQL*Trace

SQL*Trace is a facility which assists you in identifying the SQL statements that are
executed by a process. Additionally, it provides you with statistical information for
further analysis, which can be useful when there are performance issues or problems
with database objects.

There are four levels in your Oracle Applications at which SQL*Trace can be enabled -

Database
Form
Report
Profile (applicable to Release 10.7 only)

Database Trace
A Database level trace requires a shutdown and startup of both the Concurrent
Managers and the Database. If there are a number of users on the system or the
system is unavailable due to 24 x 7 requirements, a database level trace may not
always be feasible.

The Database trace process is as follows -

1) Perform a shut down of the concurrent managers, then the database.


2) Modify the start up parameter file (init.ora) to ensure the following initialisation
parameters are set

SQL_TRACE=TRUE
USER_DUMP_DEST (destination directory for trace output)
TIMED_STATISTICS=TRUE

3) Restart the database, then the concurrent managers.


4) Run the process for which the problem is occurring.
5) Retrieve the trace file(s) from the USER_DUMP_DEST directory.
6) Perform a shut down of the concurrent managers, then the database.
7) Restore the original init.ora file.
8) Restart the database, then the concurrent managers.

Steps 1-3 and 6-8 above should always be performed by your Database Administrator.

The value of the initialisation parameter SQL_TRACE dictates whether the SQL trace
facility is enabled or disabled when you begin a session in ORACLE. Its presence in the
init.ora file sets the initial value for this trace. This will apply to all sessions on an
ORACLE instance. You can additionally enable or disable the SQL trace facility just for
a specific session with the SQL_TRACE option of the ALTER SESSION command. This
allows you to obtain smaller and more readable trace files and is particularly useful
for developers. The SQL statement to achieve this is -

ALTER SESSION SET SQL_TRACE = TRUE;
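If you need to trace another user's session rather than your own, one hedged
alternative (requires DBA privileges; the SID and SERIAL# values are illustrative and
come from V$SESSION) is the DBMS_SYSTEM package:

-- Find the target session, then switch tracing on for it
SELECT sid, serial# FROM v$session WHERE username = 'APPS';
EXECUTE sys.dbms_system.set_sql_trace_in_session(12, 345, TRUE);
-- Reproduce the problem, then switch tracing off again
EXECUTE sys.dbms_system.set_sql_trace_in_session(12, 345, FALSE);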

Form Trace
A Form level trace requires the Trace option to be available as a menu item. The
trace process is as follows -

- Navigate to the Form for which you wish to set the Trace.
- When you reach the point in the form at which you want tracing to begin, turn the
Trace feature on -

GUI versions
Help > Tools > Trace
Rel 10.7 Character
\ Other Trace On

Rel 11i
Help > Diagnostics > Trace

- Note the time trace is started.


- Execute the actions you wish to trace.
- Turn the Trace feature off. Make sure this step is done, or you will severely degrade
your system performance.

GUI versions
Help > Tools > Trace

Rel 10.7 Character


\Other Trace Off

Rel 11i
Help > Diagnostics > Trace

- Locate the trace file(s) with the time stamp when you ran your trace. The trace file
will be in the USER_DUMP_DEST directory.

Note: if a trace file is being generated to track performance, the parameter


TIMED_STATISTICS in the init.ora file must be set to TRUE.

Report Trace
A Report level trace would generally require a developer or technical resource, and
involves the following steps -

1) Logon as applmgr and change directory to where your .rdf file resides.
2) Create a backup copy of your current .rdf and .rex files in the appropriate srw
directory. Create the .rex text of the report if necessary.
3) If the .rex does not exist, you can easily generate it from the .rdf file as
follows:

UNIX
r25convm userid=<username>/<password> source=<report name>.rdf stype=rdffile dtype=rexfile dest=<report name>.rex overwrite=yes batch=yes

NT
r25conv32 syntax:

r25conv32 userid=<username>/<password> source=<report name>.rdf stype=rdffile dtype=rexfile dest=<report name>.rex overwrite=yes batch=yes

4) Modify the .rex file for the report.

Include an additional line after the statement FND SRWINIT in the Before Report
Trigger -

srw.do_sql('alter session set sql_trace=TRUE');

This will activate SQL Trace in the report.

Include an additional line after the statement FND SRWEXIT in the After Report
Trigger -

srw.do_sql('alter session set sql_trace=FALSE');

This will deactivate SQL Trace after the report completes.

5) Convert the modified .rex file back into an .rdf file -

UNIX
r25convm userid=<username>/<password> source=<report name>.rex stype=rexfile dtype=rdffile dest=<report name>.rdf overwrite=yes batch=yes

NT
r25conv32 syntax:

r25conv32 userid=<username>/<password> source=<report name>.rex stype=rexfile dtype=rdffile dest=<report name>.rdf overwrite=yes batch=yes

6) Run the report.


7) Locate the trace file(s) from the USER_DUMP_DEST directory.

There is another method that can be used to trace an individual report within Oracle
Applications for Release 10.7:

In Character Mode:
1) Log on as System Administrator.
2) \Navigate Concurrent Program Define.
3) Enter Query mode.
4) Enter your report-name in the short name field.
5) Execute Query.
6) Navigate to the next block.
7) Cursor is now in the concurrent program details.
8) Select Parameters.
9) Arrow down through the parameters, looking for the name = 'trace'.
If not found, then create a new parameter:
Add sequence number, such as 999
Name = Trace
Description = Report Trace
Enabled = YES
Value Set = Yes_No
Required = No
Enable Security = No
Default Type = Profile
Default Value= NO
10) Save the record
11) Run the report, making sure that profile option trace = YES.
In GUI Mode:
1) Log on as System Administrator.
2) Navigate to Concurrent>Program>Define.
3) Enter Query mode.
4) Enter your report_name in the short name field
5) Execute Query.
6) Click on the [Parameters] button.
7) Search through the parameters, looking for parameter name = 'trace'.
If not found, then create a new parameter:
Enter Seq = 999
Parameter = trace
Description = report trace
Click on Enabled
Value Set = Y or N
Default Type = profile
Default Value= No
Required = no
Enable Security = no
8) Save the record
9) Run the report, making sure that profile option trace = Y.

Utilities:SQL Trace
SQL trace files can also be generated for concurrent programs through the use of this
profile option. The trace can be enabled at all levels, i.e. Site, Application,
Responsibility and User, by setting the profile "Utilities:SQL Trace" to "Yes". This
option is controlled by your System Administrator - users can see the profile option
but cannot update it.

This profile option is only applicable to Release 10.7.

In Release 11, the equivalent trace utility for concurrent programs is set within each
individual Concurrent Program definition. It is only the System Administrator or
Application Developer responsibility that has access to define concurrent programs.

The information produced by SQL*Trace is stored in output files with a .trc
extension. As the format of the trace output is difficult to read, there is a utility
available called TKPROF which should be applied to the trace file(s) to reformat the
output into a more readable form.

The basic command syntax of the TKPROF utility is as follows -

$ tkprof <trace file> <output file>
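For example (the trace and output file names are hypothetical; explain and sort are
optional TKPROF arguments):

$ tkprof ora_12345.trc ora_12345.prf explain=apps/<password> sort=exeela

Here sort=exeela orders the statements by elapsed execution time, which is useful
when investigating performance problems.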

If you do not know the destination directory for the creation of your trace files, this
value can be found easily through the following SQL query using SQL*Plus ²

SELECT value
FROM v$parameter
WHERE name = 'user_dump_dest';

DEBUG/TRACE PROFILE OPTIONS

There are several profile options across the Oracle Applications that can assist with
problem resolution. These profile options serve to provide more information in the
Concurrent log file in the form of debug tracing messages -

Product: Oracle Cost Management
Updateable: at all levels, by the System Administrator
Profile Option: CST:Cost update debug level (Release 10.7 - 11i)
Comments: For the cost update program. Available values are -
None - do not print debug messages.
Regular - print a debug message for each subroutine executed.
Extended - print a debug message for each SQL statement executed.
Full - print a debug message for each SQL statement executed and keep any temporary
data in the database.

Product: Oracle Master Scheduling/MRP
Updateable: at all levels
Profile Option: MRP:Trace Mode (Release 10.7 - 11i)
Comments: Indicates whether to enable the trace option. Available values are Yes or No.

Product: Oracle Master Scheduling/MRP
Updateable: at all levels
Profile Option: MRP:Debug Mode (Release 10.7 - 11i)
Comments: Indicates whether to enable debug messages within MRP and Oracle Work in
Process. Available values are Yes or No.

Product: Oracle Assets
Updateable: Application, Responsibility, User
Profile Option: FA: Print Debug (Release 10.7 - 11i)
Comments: Used by Oracle Support Services as a tool to identify a problem with the
code. Available values are Yes or No.

Product: Oracle Assets
Updateable: Application, Responsibility, User
Profile Option: FA: Print Timing Diagnostics (Release 10.7 - 11i)
Comments: Used by Oracle Support Services as a tool to identify a problem with the
code. Available values are Yes or No.

Product: Oracle Order Entry
Updateable: Application, Responsibility, User
Profile Option: OE:Debug (Release 10.7 - 11i)
Comments: Indicates whether debug messages are displayed on forms. Available values
are Yes or No.

Product: Oracle Order Entry
Updateable: Application, Responsibility, User
Profile Option: OE:Debug Trace (Release 10.7 - 11i)
Comments: Used by Oracle Support Services as a tool to identify a problem with the
code. Indicates whether a trace file is created during Concurrent Program execution.
Available values are Yes or No.

Product: Oracle Work in Process
Updateable: at all levels
Profile Option: TP:WIP Concurrent Message Level (Release 10.7 - 11i)
Comments: Determines the level of detail reported in the move transaction concurrent
log file during move and resource transaction validation/processing. Available values
are -
Message level 0 - reports only errors.
Message level 1 - reports processing activities and errors.
Message level 2 - reports and time stamps processing activities and errors.

Product: Oracle General Ledger
Updateable: at all levels
Profile Option: FSG: Message Detail (Release 10.7 - 11i)
Comments: Determines the level of detail in your error message log file for FSG
reports. Error messages are divided into three categories -
Catalog I - contains detailed statistics useful for program debugging
Catalog II - contains process messages useful for finding out where a process failed
Catalog III - contains only error messages
Available values for this profile option are -
None - no messages.
Minimal - Catalog III messages.
Normal - Catalog II and III messages.
Full - Catalog I, II and III messages.

Product: Oracle General Ledger
Updateable: must be set by the System Administrator
Profile Option: GL:Debug Directory (Release 11 - 11i)
Comments: Specifies the directory where the Workflow debug files are created. The
database must have write permissions to the specified directory. Available values
are -
Yes - the database may create debug files in the nominated directory.
No - the database cannot create debug files.

Product: Oracle Projects
Updateable: Application, Responsibility, User
Profile Option: PA:Debug Mode (Release 11 - 11i)
Comments: Determines if PA processes and reports are run in debug mode. Available
values are Yes or No.

Product: Oracle Purchasing
Updateable: at all levels
Profile Option: PO:Set Debug Concurrent ON (Release 11 - 11i)
Comments: Used for finding problems with Requisition Import. For performance
reasons, the value should ALWAYS be set to No, unless otherwise instructed by Oracle
Support Services for debugging purposes.

Product: Oracle Purchasing
Updateable: at all levels
Profile Option: PO:Set Debug Workflow ON (Release 11 - 11i)
Comments: Used for finding problems with Purchasing Workflow processes. For
performance reasons, the value should ALWAYS be set to No, unless otherwise
instructed by Oracle Support Services for debugging purposes.

Product: Account Generator
Updateable: at all levels
Profile Option: Account Generator:Run in Debug Mode (Release 11 - 11i)
Comments: Determines if Account Generator processes are run in debug mode.
Available values are Yes or No.

COMMON UNIX O/S SIGNAL ERRORS

Signal errors are how Unix processes communicate with each other and the operating
system. A signal is not a problem with the operating system itself, but part of the
operating system's mechanism for protecting itself. When programs terminate
abnormally, this is communicated to other processes and the kernel by means of a
Signal code. The actual code associated with the signal indicates the nature of the
termination. The meaning of each signal is defined in a file called signal.h, which is
usually located in the /usr/include/sys sub-directory.

Some of the more common Signal errors are as follows -

SIGNAL 4
A Signal 4 error usually indicates an illegal/unknown instruction was encountered.
This is typically caused by bugs in the code. Explore MetaLink for the error to find out
if your particular problem is related to a known bug for which there may be a patch.
Some encounters with a Signal 4 error, for which there are patches available, include -

Process: AR to GL Transfer
Possible Cause: In Rel 10.7 Product 16.1, you may only have 5 accounting periods
open at one time.
Possible Resolution: Fixed in Patch 932703, or in 11.02 Patch 713261. The fix is also
included in 11.0.AR.D for Rel 11.

Process: FA Mass Revaluation Preview Report
Possible Cause: The Revaluation Reserve and Amortization Accounts have not been set
up for all Categories/Books.
Possible Resolution: Perform the relevant FA setup.

Process: Open and Close GL Periods
Possible Cause: Retained Earnings accounts contain disabled or expired segments.
Possible Resolution: Apply database server-side patch 642941 for Rel 10.7. This will
provide more meaningful error messages on the setup issues/steps not completed
prior to running the Open Next Period process.

SIGNAL 10
A Signal 10 usually suggests a memory or addressing error, but it can also be caused
by a bug in the code, or by an .rdf file that has been incorrectly ftp'd from the client
in ASCII mode. There are several patches available on MetaLink for Signal 10 errors
caused by bugs in the various Oracle products. If it is a memory-related problem,
sometimes shutting down the Concurrent Managers and the database, and rebooting
the machine, will eliminate any runaway processes and reclaim resources.

Examples of Signal 10 errors include -

Process: AP Mass Additions Create
Possible Cause: Lack of space in the database.
Possible Resolution: Ask your DBA to check the available space in the FA data and
index tablespaces, and allocate an additional datafile if necessary.

Process: Oracle Alert
Possible Cause: Email (Unix Sendmail) is part of the action items defined in the
Alert definition, and Sendmail is not configured correctly for the system, is
non-functional or is missing.
Possible Resolution: Ask your Unix Administrator to verify the Sendmail
configuration, as it is specific to certain variations of Unix.
THE DREADED SIGNAL 11!!
A Signal 11 error indicates that a memory fault has occurred, where an Oracle
executable has attempted to access memory incorrectly, e.g. program
pointers/addresses for memory allocations are incorrect or corrupted, or an attempt
has been made to use a segment of memory for which it does not have permission.
The most common causes of a Signal 11 are -

1) Data corruption
2) File size - insufficient disk space in the $APPLCSF/out directory where concurrent
manager output is created
3) Kernel parameter problems
4) Lack of storage space in Tablespaces, Rollback Segments, etc.
5) Reports ftp'd using the incorrect mode - ASCII instead of BINARY

Examples of Signal 11 errors include -

Process: FA Periodic Mass Copy
Possible Cause: Header file corruption.
Possible Resolution: Fixed with Patches 924617 and 958462.

Process: FA Mass Change
Possible Cause: Bug in the Mass Change program.
Possible Resolution: For Rel 10.7, fixed with Patch 534568 (or Patchset C or higher).
For Rel 11, fixed with Patch 709527 (or Patchset 11.0.FA.C, part of the 11.0.3
Maintenance Pack).

The problem with Signal errors is that, whilst we know the meaning of the Signal code
that was encountered, how do we pinpoint which of the possible causes actually
triggered the violation?

What to look for with Signal errors ...

Trap the Core Dump file!!

When a Signal error is encountered by the system, a core dump is usually produced.
This core dump is created at the instant the process was 'aborted', and is produced in
the form of a file called 'core'. The information contained in the core dump file is
most valuable in helping to resolve a Signal error. It is important to trap the file as
soon as it is created, as these types of files are overwritten the next time a core
dump occurs. It is always a good idea to rename the core dump file produced and
move it to a different directory.

The core dump file is usually contained in the directory where the command was
issued, or in $ORACLE_HOME/dbs.
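For example, a hedged way of preserving the file before the next dump overwrites it
(the target directory and new name are illustrative):

cd $ORACLE_HOME/dbs
mv core /u01/cores/core_FAMCP_15mar   # rename with the process name and date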

If a core dump is not generated, this is usually due to the dump destination directory
being full, or the process not having write access to the dump directory.

Produce a debugged version of the Core Dump file!!
You can relink the executable in debug mode by using the Adrelink utility. This will
allow you to rerun the process causing the Signal error, producing a debugged version
of the core dump. This provides more comprehensive statistics than the standard core
dump file produced without debug switched on.

An example of using the debug feature is outlined below, for the Oracle Assets
Periodic Mass Copy process (executable FAMCP) -
1) Log in as APPLMGR to the operating system and environment affected.
2) Change directory to the directory in which the executable resides, e.g.
$FA_TOP/bin.
3) Make a backup copy of the current executable.
4) Issue the following command -

adrelink force=y ranlib=y link_debug=y "fa FAMCP"

This will produce a log file called adrelink.log, which can be located in
$APPL_TOP/install/log. This log file is appended to every time a relink command
is initiated.
5) Re-run the process that produced the Signal error, e.g. Periodic Mass Copy.
6) Turn debug off by relinking with the debug switch off -

adrelink force=y ranlib=y link_debug=n "fa FAMCP"

The problem with a core dump file is that it is not easily decipherable. You will need
to enlist the help of your DBA or a technical resource with expertise in this area, as
you will need to determine which debugger exists on your system and use it to
produce a stack trace from the core dump. The Unix command "script" should be
used to capture the output of the debugger. This output can then be sent to Oracle
Support Services to help them determine why the core dump occurred.
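A hedged sketch of such a session (dbx is one common Unix debugger; your platform
may provide adb, gdb or xdb instead, and all paths are illustrative):

script /tmp/famcp_stack.txt     # start capturing terminal output
dbx $FA_TOP/bin/FAMCP core      # open the executable and its core file
where                           # dbx command: print the stack trace
quit                            # leave the debugger
exit                            # end the script capture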

Checklist for Signal errors


The following general checklist, utilising the techniques described earlier in this
White Paper, can be used as a guide when trying to determine the cause of a Signal
error -

Action

1. Review the Concurrent Request log for any useful information on the error encountered
2. Locate the core dump file - rename it and move it to a separate directory
3. Identify the full version numbers of the product, the RDBMS and the program that
encountered the error
4. Answer the basic questions -
Has the process worked before? If so, when, and has anything changed since then,
such as the application of patches, upgrades or configuration changes?
What is the current Patchset level for the product?
Is the error reproducible in the same environment?
Is the error reproducible in another environment?
Is the problem related to a specific User or Responsibility?
If using Release 10.7SC, is the error reproducible in Character mode?
Is it standard or customised functionality? If it's a custom report based on a
standard report, run the standard report to see if it errors too.
5. Check MetaLink for known bugs relating to this Signal code and any patches
available
6. Check storage space in tablespaces, rollback segments, etc.
7. Bounce the Concurrent Manager
8. Run the process from the operating system command line
9. If the Signal error occurred when running a report:
1) Check disk space in the $APPLCSF/out directory
2) Check the report was ftp'd in BINARY mode
3) Check for SRW.USER_EXIT (FND SRWINIT) in the Before Report Trigger
4) Check for SRW.USER_EXIT (FND SRWEXIT) in the After Report Trigger
5) Check for the P_CONC_REQUEST_ID parameter in the user parameters
10. Rerun the process in debug mode
1) Application debug mode e.g. FA:Print Debug (profile option)
2) ADRELINK utility
11. Check for data corruption:
1) Check parameters entered when submitting the Concurrent Request. If it is a
report process, try narrowing down the parameters e.g. smaller date range
2) If submitted by batch feed, check input file for errors in field mapping or the
presence of invalid characters
3) Process of elimination. Identify the records being processed. Run the SQL
statements being executed by the program, and check the status of each record for
data corruption.

12. Log TAR with Oracle Support Services


13. Send the debugged core dump file and concurrent log file to -
1) Oracle Support Services
2) your DBA
Include all relevant version numbers

TIPS AND HINTS FOR COMMON APPLICATION ERRORS

Asset Trace
This SQL script is designed to provide detailed information on a single asset. It is
extremely useful for troubleshooting problems with your assets. Whenever you log a
TAR with Oracle Support Services for problems with particular assets, you will ALWAYS
be asked to run this script for the offending asset(s). The script captures the data
contained in all the Oracle Assets tables for an individual asset, and outputs the
details to a spool file. You should find this file in the admin/sql directory under
$FA_TOP. To run the script, log into SQL*Plus as the APPS user and type -
@trace.sql or start trace.sql

You will be prompted to enter the ASSET_ID and BOOK_TYPE_CODE [uppercase].


To get the ASSET_ID run the following statement in SQL*Plus -

select asset_id from fa_additions where asset_number = '[the asset number]';

If you cannot find the script on your system, you can locate it on Metalink, or
alternatively contact Oracle Support Services and get them to e-mail the script to
you. The script is called trace10.sql (Rel 10) or trace11.sql (Rel 11), and produces an
output file called t107.lis or t11.lis respectively. There is also a tracehtml.sql that
produces the output in html format (fa_trc.html), which can be viewed using Internet
Explorer 4.0+ or Netscape 4.0+. This version of the trace script works with both
Release 10 and Release 11 of Oracle Assets.

To effectively analyse the asset data in the trace output, you really need to have a
good understanding of the Oracle Asset tables in order to determine if there is data
corruption. Alternatively, log a TAR with Oracle Support Services and send the trace
results to them for further investigation.

Profile Option FA:Deprn Single

This profile option should be used specifically when a depreciation run fails in Oracle
Assets. It controls the caching buffer used when depreciation is run. You can set the
value to either 'No' or 'Yes'. If you set the value to 'No', the cache is reset
after every 20 assets. If the value is set to 'Yes', the cache is reset after EVERY
asset.

When depreciation encounters an error in Rel 10.7 and 11, the program will fail at
that point and roll back any depreciation that was committed. In this situation, change
the value of this option to 'Yes' under the 'Fixed Assets Manager' responsibility, set
the FA: Print Debug profile option to 'Yes', and rerun depreciation - the
process will fail again, but you will find that the log file now contains detailed
information on the actual asset causing the failure. Once you resolve this, you will be
able to resubmit the depreciation process. If there are no other errors, the process
will complete successfully.

In Release 11i, Run Depreciation and Close Period have been broken out into
separate processes. If depreciation fails and you set the FA:Deprn Single value to
'Yes', it will process all the assets in your Depreciation Book and capture ALL errored
assets in the log file. This way you can fix all of your errors in one pass and resubmit
depreciation, and this time only the corrected assets will be processed.

Please note you should only ever change the value of this profile option to 'Yes' when
depreciation fails.

ap375496.sql - AP Trial Balance Rebuild script


This script can be useful if your AP Trial Balance report is not reporting correctly on
outstanding liability. The script is a data fix script that can be run to recreate the
trial balance records in the AP_TRIAL_BALANCE table for all records (payments and
invoice distributions) that have been transferred to GL within a specified range of
accounting dates. This script is contained in patch 375496 and can be downloaded
from Metalink.
Note the script is not needed for cash basis installations.

Account Generator FA Test Script


This script assists in diagnosing problems in Oracle Assets concurrent processes that
fail due to account generation. It replaces the Flexbuilder Test screen used in Release
10. The output from the script tells the User the following -
1) The code combination id that Account Generator attempted to build.
2) The reason why Account Generator could not build this code combination.
3) Whether Dynamic Insertion is set to 'Yes'.

When you run the script, it will prompt you for a number of parameters. All the
information that feeds this script is contained in the log file from the failed run.

Release 11 of Oracle Assets is delivered with this SQL*Plus test script called
faxagtst.sql. You should find this in the $FA_TOP/admin/sql directory. The latest
script comes with 11.03 Minipack. Contact Oracle Support Services for the latest
version.

Account Generator Purchasing Workflow Test Script


In order to find out what is wrong with the Account Generator workflow, a script
called wfstatus.sql can be run. The parameter WF_ITEM_KEY is required to run this
script. The wfstatus.sql gives important debug messages, including error messages for
the processes that are erroring. This provides the same information that the Workflow
Monitor does, if the customer has Workflow Monitor set up. Running the script
involves the following steps -

1) Check the profile option Account Generator: Run in Debug mode. This should be set
to Yes. (No or Null value means that the account generator process is being run in
'Synch' mode, in which case no information is logged in the workflow tables, and
hence wfstatus.sql would not retrieve any information)
2) After setting the profile option to 'Yes', simulate the error by navigating to the
Distribution zone of Enter Purchase Order or Enter Requisition, and trying to build the
account again.
3) After you receive the error, go to the menu Help -> Tools -> Examine, and look for
the value of the field CHARGE_ACC_WF_ITEMKEY (click on the arrow next to the Field
zone and then query up CHARGE_ACC_WF_ITEMKEY). Note down the itemkey value
required to run wfstatus.sql. (NOTE: If the value of the field is _SYNCH#_, the profile
option Account Generator: Run in Debug mode has not been set to 'Yes'.)
4) After getting the itemkey value, discard changes in the distribution zone, navigate
to lines/header zone and commit (Save). This is important for the wfstatus.sql to
retrieve any data.
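A hedged invocation of the script (it normally resides under $FND_TOP/sql and
accepts the item type and item key as arguments; the placeholders below must be
replaced with your own values):

-- In SQL*Plus, connected as the APPS user
@$FND_TOP/sql/wfstatus.sql <item type> <item key from Examine>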

How to create an AR Debug log file


This function is normally used in the Transactions, Customers and Receipts screens.
The output gives details of the packages being used by the process when an error
occurs. The debug steps are as follows -

1) Go through the steps that you would perform to encounter the error/problem but
stop just before you would get the actual error. i.e. Don't hit the key that will invoke
the error.

2) From the Help menu at the top of the form, select: Help, Tools, Examine. Enter
the password if prompted. This will give you a box with 3 fields. Enter the following:
In Block: Parameter
In Field: AR_DEBUG_FLAG
In Value: FS<directory>/<filename>, where F = File and S = Screen.
<directory> relates to the UTL_FILE_DIR directory where the database writes files.
There are a few ways to determine what the UTL_FILE_DIR setting is -

3) Run the following in SQL*Plus for the related instance:


Select value from v$parameter where upper(name) = 'UTL_FILE_DIR';

4) The UTL_FILE_DIR setting should be held in the init.ora file under the
$ORACLE_HOME/dbs directory. There is no default setting for this parameter if it is
not set. If the setting needs to be added to the init.ora file, note that your DBA will
need to bring the database instance down and back up for the change to take effect.
<filename> is whatever you wish to call the output created, e.g. Customers.log.

5) Hit the key that will invoke the error/problem. A log file should then be created in
the output directory with the filename you have specified.
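For reference, a hedged example of the init.ora entry that enables file output (the
directory value is illustrative):

utl_file_dir = /usr/tmp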
ORA-01000 Maximum Number Of Cursors Exceeded
This is one of the most common errors that can occur. The reason for this error is that
Oracle has reached the set limit of open cursors allowed for that executable or User
session. It is the initialisation parameter OPEN_CURSORS that determines the
maximum number of cursors. If it is a custom program that fails with this error, then
a developer should review the code to see if the program can be modified to use
fewer cursors. If the error occurs often within your Oracle Applications, your DBA
should shut down Oracle, modify the OPEN_CURSORS parameter to increase its value,
then restart the database.
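A hedged way to check the current limit before requesting a change (v$parameter is a
standard dynamic performance view):

SELECT value FROM v$parameter WHERE name = 'open_cursors';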

ORA-01562: failed to extend rollback segment number (n)
ORA-01628: max # extents (n) reached for rollback segment
Errors with rollback segments are common in the Oracle database when storage
becomes a problem. If you encounter errors with rollback segments, or a 'max #
extents' error referring to a database object such as a table or index, contact your
DBA, as the size of the rollback segment will need to be increased, or the value of the
maximum extents parameter increased to allow for further growth.

REP-1222 Invalid body size


REP-1212 Object is not fully enclosed by its enclosing object
REP-1211 Object is larger than enclosing body
These are common errors that can occur when running reports. The cause of
these errors is usually that the Rows/Columns defined in the Printer Style you are
trying to use do not cover the whole of the report. If it is a standard Applications
report, the solution is to increase the Columns/Rows for the Printer Style. If it is a
customised report, you need to establish the size of the report and set up a Printer
Style as appropriate.

FIXES FOR DATA CORRUPTION


If your problem is in fact caused by data corruption, a data fix will be required to
update the records in error. If you need to run a data fix script, it is essential that you
perform a backup of the Oracle Application tables that will be affected by the fix.
Additionally, to form an effective audit trail, output should always be spooled before
and after the data fix to verify its results. Always review any data fix scripts sent to
you by Oracle Support Services before you run them. Back up the tables from which
the script inserts, updates or deletes records, and if the script does not contain
a SPOOL command, update the script to insert a SPOOL command before the first SQL
statement and a SPOOL OFF command after the last statement. An example follows -

SPOOL UPD_PAY_DIST

Update ap_payment_distributions_all
set amount = base_amount
where invoice_payment_id = 13548
and payment_line_number = 33;

SPOOL OFF
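A hedged sketch of the backup step described above, using the same table and keys as
the example (the backup table name is illustrative):

-- Preserve the rows being changed, for audit and recovery purposes
CREATE TABLE ap_pay_dist_bak AS
SELECT * FROM ap_payment_distributions_all
WHERE invoice_payment_id = 13548
AND payment_line_number = 33;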

Last, but not least, always run the data fix script in a Test environment before
running it against Production. It is always good practice to perform regular refreshes
of your Test environment to keep your Test environment up to date with your
Production environment.
2. THE ORACLE APPLICATIONS ARCHITECTURE

Depending on the Oracle Applications version you are running, the architecture of the
Oracle Applications environment will vary. The fundamental differences in the
architecture are as follows -

Release 10
The Release 10 architecture supports both the Graphical User Interface (GUI) client
and Character Mode forms. Release 10 SmartClient (SC) works with an Oracle
Applications server running Release 10.7. Through the character mode forms tool, you
can access data through workstation clients. There are generally two tiers in this
architecture -

Database tier (contains the database, the application logic and the concurrent managers)

Client tier (contains the logic required to display the GUI front-end forms and some of the
business logic and error handling)

(Source: Oracle Applications for MS Windows Clients Installation Manual, Applications
Server for Windows NT Edition, Release 10SC Production 16.1, pg 2-2)
Release 10.7 NCA - Release 11
A third tier is introduced in the later releases of Oracle Applications. The client
interface is provided through a Java Applet Viewer or Web browser that launches the
applications forms server from the middle tier. The three-tier structure is as follows -

Database tier - contains the data manipulation logic and manages the database.

Application tier - contains the application-specific logic and manages the Oracle
Applications and other development tools (the forms and procedures that define the
application), including the Web Application Server software.

Desktop client tier - provides the User interface display.

The introduction of the middle Application tier eliminates the need to install and
maintain application software on each desktop client, and helps to reduce network
traffic. The fundamental difference with the Internet Computing architecture is that
only the presentation layer of Oracle Applications is on the desktop tier in the form of
a plug-in to a standard Internet Browser.

(Source: Oracle Applications Release 11 for Unix Concepts pg 1-2)

Release 11i (11.5)


The Internet Computing architecture of Release 11i provides a framework for multi-
tiered, distributed computing that supports Oracle Applications products. It is the
100% Internet Oracle Applications suite that allows companies to run their worldwide
operations from a single, centrally managed site. The Database tier manages an
Oracle8i (8.1.6) Relational Database Management System (RDBMS) which offers
additional features only used by this Release.

(Source: White Paper - Upgrading to and administering Release 11i: A Technical
Perspective)
2. Basic Structure of the $APPL_TOP

It is important to have a general understanding of Oracle's APPL_TOP and the
directory structure of the Oracle Applications product files. It helps to know where
the applications code resides for the various products, as some of the tools and
techniques for investigating support issues require you to directly access these
specific program files.

In the basic directory structure, there is a top Applications directory that holds one
set of product files. You create this directory and set the environment variable
$APPL_TOP to point to it. The Autoinstall process sets up a directory tree for each
fully installed or dependent product within this directory.

A product directory tree starts with a directory that uses the product's abbreviation,
e.g. ap for Oracle Payables, gl for Oracle General Ledger. Below that is a sub-directory
that uses the product version. This is known as the Product Top directory, e.g.
$GL_TOP, $FA_TOP, $AP_TOP. Beneath each Product Top directory are various
sub-directories that hold the different types of product files: forms programs reside in
the forms directory, report programs in the reports directory, SQL scripts in the sql
directory, and so on.

Custom programs should reside in a custom directory structure under the custom
application top directory.
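A hedged sketch of the resulting layout (the product abbreviations are real; the
version sub-directory shown is illustrative):

$APPL_TOP
    ap/
        11.0.3/        <-- $AP_TOP
            bin/  forms/  lib/  reports/  sql/  install/
    gl/
        11.0.3/        <-- $GL_TOP
            bin/  forms/  lib/  reports/  sql/  install/
    custom/
        11.0.3/        <-- custom application top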

A Look at $APPL_TOP

The Oracle Applications software is installed in one common area in a subdirectory
tree fashion. The top directory for this repository is defined as $APPL_TOP. All other
subdirectories key off of $APPL_TOP. For each product module that you have
installed, there will be another $<PRODUCT>_TOP variable defined. For example, if you have
Accounts Payable, General Ledger, and Purchasing, then you will have a $AP_TOP,
$GL_TOP, and $PO_TOP. Underneath these directories, you will notice a common
directory tree including /bin, /forms, /lib, /sql, /install, /srw, etc. Each directory
has its own unique use, and is similar for every product top. You should become
familiar with these standards and the contents of these directories.
Note that there exist other "tops" not associated with the product modules that your
site may use. These are either "shared" products or supporting products that Oracle
installs for you. We will explain a few of them in more detail. Note the file called
APPLSYS.env under $APPL_TOP. This is one of the environment files that are sourced
in to define the environment variables and directories necessary for your
applications. It will define $APPL_TOP, your product tops, and other variables. We
will discuss some of the "output" directory variables in a moment.

A couple of other directories worth noting under $APPL_TOP include
$APPL_TOP/install and $APPL_TOP/patch. The /install subdirectory holds many of the
utilities used for the initial installation. It also has a /log subdirectory where
utilities used for the initial installation. It also has a /log subdirectory where
installation, patches, and other adadmin utility log files will go. You will visit this
directory often. Typically, you will have $APPL_TOP/patch for holding patches that
Oracle Support may send you (will send you). You can put your patches in another
directory, but this is the default. You will become very familiar with patches, too.

A Look at $FND_TOP

Another top that you will find is $FND_TOP. When Oracle started writing the financial
applications software, they developed a core set of utilities and objects called
Application Object Library (AOL). From these objects, they wrote the "foundation" for
the Oracle Applications, referred to as Application Foundation. This foundation code
is stored under $FND_TOP. As product modules developed, they hooked these into the
Application Foundation infrastructure utilizing the AOL objects. Examples of these
AOL objects and FND products include the concurrent managers, quick picks, zooms,
etc.

Notice that $FND_TOP has a very similar directory tree to the other product modules.
You may have noticed that when you fire up the applications, you call the script
file "found" (short for Application Foundation), which executes the $FND_TOP/bin/aiap
executable and passes the username/password stored in the variable $GWYUID
(typically applsyspub/pub) to get you to your initial login screen. See, it's not magic --
just code. Many of the topics that we will cover, especially the Concurrent Managers,
are found in $FND_TOP.
A Look at $AD_TOP

Most of the other utilities used by the Oracle Fin-Apps DBA and which we will discuss
in detail are found in $AD_TOP. Of particular interest are the /bin and /sql
subdirectories. You will find the following executables in $AD_TOP/bin: adaimgr
(AutoInstall, for the installation or upgrade of the software), adpatch (for administering
patches), and adadmin (a menu-driven utility for maintaining both the Oracle
Applications database and software). Many of these utilities in turn call other $AD_TOP
utilities.

Output Directories of the Oracle Applications

There are several directories where output is written. These directories require
routine cleanup and maintenance. The jobs that are run from the concurrent
managers create both log and output files (reports). The location of these files
depends upon the variables $APPLCSF, $APPLLOG, $APPLOUT, and $*TMP. The
$APPLLOG and $APPLOUT variables are typically set to "/log" and "/out", respectively,
but they can be set to other values. The location of these subdirectories depends
upon the value of $APPLCSF.

If $APPLCSF is set to a directory, then all of the product modules (AR, AP, PO, etc.)
output will go to a common "log" or "out" directory. The typical setting, though, is to
not have $APPLCSF set to any value. When this is true, then the output for the
product modules defaults to the specified "log" and "out" directories under the
corresponding product module top. For example, concurrent manager jobs run from
an Accounts Receivable responsibility would find the logs and data output in
$AR_TOP/log and $AR_TOP/out.
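
Here is that resolution logic in shell form (a small sketch -- the variables are
the ones above, and the fallback path shown is the AR example):

# where will a concurrent request's log file land? (sketch)
if [ -n "$APPLCSF" ] ; then
    echo "common log directory: $APPLCSF/${APPLLOG:-log}"
else
    echo "per-product directory, e.g. $AR_TOP/${APPLLOG:-log}"
fi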

I would advise you not to set $APPLCSF. This way, you can more easily find and
categorize your output by product. There is generally a lot of output, and a single
common directory can stress the inodes by holding too many files. Be sure that your
temporary directories, such as $APPLTMP or $REPTMP, get cleaned up and don't fill up
a file system. Note that any SYSADMIN responsibility output will go in $FND_TOP/log
or $FND_TOP/out.

3. The Concurrent Managers

One of the most attractive features of the Oracle Application Software is the
concurrent manager. Basically, this is a batch job scheduling system for running
reports and background tasks. From the concurrent managers you can manage queues,
workshifts, access and security, job priorities, job log and output, job automation,
and assorted job compatibility (or incompatibility) rules.

This feature is one of the key areas that can consume much of the Oracle Fin-Apps
DBA/SYSADMIN time. To find more complete instructions on how to set up and use the
concurrent managers and the jobs that they run, refer to the AOL Reference Manual.
For the purposes of this presentation, we will discuss major concepts in setting up the
managers, performance issues, and other general tips and suggestions.

Basic Tuning of the Concurrent Manager

We go back to the age-old concepts of computer tuning and load balancing for OLTP
versus Batch Processing. OLTP (on-line transaction processing, or "real-time"
computing) is where you have end-users doing their work on the screen needing quick,
real-time results -- especially if they are servicing clients in person or on the phone.
These requests need to be completed as soon as possible so as not to disrupt the
business and revenue flow! Examples of these transactions come from your Order Entry
people in customer service.

Note: Just because an on-line transaction submits a job to the concurrent manager (or
the "background") does not necessarily make it a "batch-processing" job.

On the other hand, batch-type jobs can afford to be completed at a later time than
when initially entered. They usually can be grouped together (batched) and processed
outside of normal business hours. Examples of these types of reports could be
financial reports, summary reports, end-of-day processing, etc. Some jobs are
required to assist the on-line transaction processing but can still be batched (like a
sales forecast or open ticket report); they just need to be completed prior to the
day's activities, rather than after.

You may be in a 7x24 shop where OLTP is always a priority. Balancing your OLTP
versus batch jobs may be a little more complicated. Still, your objective is to reduce
the impact of the non-critical resource hungry jobs against the OLTP transactions. The
batch jobs will just have to work when OLTP demands drop. You do this by managing
queues, workshifts, priorities, incompatibility rules, and . . . end-user training or
awareness.
This end-user awareness and training is perhaps one of the most neglected areas, yet
it is so important. Determining which jobs can truly be classified as OLTP (real-
time critical) versus batch is going to require interviews with your end-users and/or
business systems analysts. One of the most common problems that I have observed is
that sites pretty much leave the standard and default queues created by the
installation process. Then, the jobs go into the queue and operate on a first-come,
first-served basis. This will not give you the results you need.

Tips and Techniques for Concurrent Manager Management

The right answers will depend upon the results of your interviews and trial-and-error,
but here are some basic ideas that some sites use. Create queues based upon the
duration of a job, such as FAST versus SLOW. The FAST queue usually handles jobs
that complete within a minute, and its concurrency (number of jobs that can run
concurrently in the same queue) and priority are high, while the opposite criteria
hold for the SLOW queue. Another technique is to set up OLTP versus BATCH queues,
where the workshift for OLTP is set up for prime-time business hours and BATCH for
non-business hours. Setting up queues by workshift, functionality, and department
are further examples, but certainly not all of your options.

I tend to favor a combination of OLTP versus BATCH functionality. By combining
queues and their workshifts, concurrency, and incompatibility rules, you should strive
to get the maximum throughput possible for OLTP and convince users that batch jobs
which are needed for next-day activities should be moved to off-hours processing and
set with lower priorities.

Starting and Stopping the Concurrent Managers

While you can start the concurrent managers within the applications, I dislike a
couple of the defaults. 1) The default pmon time is 60 seconds. My clients usually
need this time to be shorter, like 30, 20, or 10 seconds. 2) I do not like the default
name of std.mgr for the internal manager. I prefer that it carry the name of the
instance. You can overcome these defaults by scripting the start and shut commands
with different parameters. Besides, scripting is very useful for starting or shutting
down the concurrent managers from the command line -- especially in the .rc Unix
scripts.
Example script for starting the managers:

#!/bin/sh
# strmgr.sh -- start the concurrent managers
date
echo "Executing strmgr.sh script ..."
echo "Starting Concurrent Managers ..."
startmgr sysmgr="apps/fnd" mgrname=prd sleep=20
exit 0

Actually, I would advise you to use symbolic parameters for the APPS password
instead of hard coding it. The "sleep" parameter tells the internal manager to search
fnd_concurrent_requests every 20 seconds for new requests, rather than the 60-second
default.
The internal log file will be called prd.mgr (typically found in $FND_TOP/log). There
are other parameters available, too, such as the debug option. Consult your manual
for more details.
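
For example, a sketch of strmgr.sh with the password passed in rather than hard
coded (the parameter handling here is illustrative; the startmgr arguments are the
ones shown earlier):

#!/bin/sh
# strmgr.sh -- usage: strmgr.sh <apps_password>
startmgr sysmgr="apps/$1" mgrname=prd sleep=20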

Example script for stopping the managers:

#!/bin/sh
# stopmgr.sh -- stop the concurrent managers
date
echo 'Stopping Concurrent Managers ...'
# The following CONCSUB call is one command line (note the continuation)
$FND_TOP/bin/CONCSUB apps/fnd SYSADMIN 'System Administrator' SYSADMIN WAIT=Y \
CONCURRENT FND DEACTIVATE
ps -ef | grep LIBR
date
echo 'Concurrent Managers Stopped'
exit 0

Notice that stopmgr.sh does not run a command line executable to directly stop the
managers. Instead, it submits a concurrent job via the concsub utility. The WAIT
parameter tells concsub not to return until the deactivate request has completed, so
the script does not proceed (and eventually exit) until all the managers have shut
down.

Debugging Concurrent Manager Errors

Look for errors in the logs. The internal manager's log file will usually be in
$FND_TOP/log (see previous discussion on defining log and out directories) defaulting
to std.mgr or named as you specified in the command line parameter, mgrname=. The
internal manager monitors the other queue managers. You will see the startup,
shutdown, print requests, and other information in this log. You may also find errors
as to why the internal or subsequent slave managers could not start.

All of the other managers have dedicated logs, too. They are prefixed with a "w" or
"t" followed by an identity number, such as w139763.mgr. Each queue will have one of
these log files. You can see individual jobs and associated request ids in each of these
files. You can review error messages, too. Occasionally, a job will fail and take the
manager down with it. The internal manager will sense that the queue is down and
restart it on the next pmon cycle.

Suggestion: We will discuss purging of the fnd_concurrent_requests table and
associated log and output files later, but I will make this suggestion now: purge these
manager files frequently (daily) so that you can easily perform a search on "error"
when trying to debug concurrent manager errors.
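
For example (a sketch -- adjust the directory if you have redirected your logs):

cd $FND_TOP/log
grep -i error *.mgr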

Kick Starting Dead Managers

Sometimes you may encounter difficulty in starting either the internal concurrent
manager or the other slave queues. Consult the log files for error messages and take
appropriate action to resolve the problem. If you are unsuccessful, then enter the
"verify" command in the concurrent manager screen to force the internal manager to
read and initiate the target number of queues specified. If that doesn't work, try to
deactivate or terminate the managers, then restart them. If you have trouble bringing
them down, you may have to perform a "kill" on the background process. You can
identify the managers with the "ps -ef | grep LIBR" command. If you still encounter
problems, make sure that there aren't any processes still tied to the managers. If you
find any, kill them.

If you still encounter problems, then the statuses are probably improperly set in the
tables. For example: You may see the error in the internal std.mgr log file stating
that it was unable to start because it has already started! You have verified that
there are no "FNDLIBR" background processes. The problem is that the tables have
improper statuses. You will have to clean up these tables. Here are some queries. I
put them into scripts and keep them handy for when the time arises because the
statuses are not that easy to remember.

- Reset the concurrent queues:

UPDATE fnd_concurrent_queues
SET running_processes=0, max_processes=0;

- Remove any completed jobs (optional):

DELETE FROM fnd_concurrent_requests
WHERE phase_code='C';

- Set jobs with a status of Terminated to Completed with Error (optional):

UPDATE fnd_concurrent_requests
SET status_code='E', phase_code='C'
WHERE status_code='T';

- Delete any current processes:

DELETE FROM fnd_concurrent_processes;

I have listed these in descending order of how frequently I have had to use them.
There is a paper available from Oracle Support which describes these and more.

Purging Concurrent Manager Logs and Output

The concurrent managers create several table entries and file output in the /log and
/out directories. You should purge these frequently to reduce excessive table growth
and fragmentation, and avoid performance degradation of the concurrent manager
processes. You should also decrease the used space on your disks from old log and
report files. This will also relieve stress on the inodes from a large number of files.
Under SYSADMIN, set up a recurring report called "Purge Concurrent Request and/or
Manager Data". There are several parameters, but I typically set up two jobs. 1)
One job for "Manager" data -- that is, the concurrent manager log files typically
found in $FND_TOP/log. I set the frequency to daily, and have it purge down to one
day. 2) Another job for the "Request" data -- this is for all other modules outside of
the SysAdmin responsibility, such as AR, PO, GL, etc. I typically try to keep only one
week's worth of data out there on the system. Your needs and capacity may vary, so
set accordingly.

This purge process does two things: 1) Deletes rows from the fnd_concurrent_requests
tables, and 2) Deletes both the log and output files from the associated $XX_TOP/log
or /out directories. If for any reason the file delete did not complete, but the table
data was purged, then you will need to manually purge the output files from the /log
and /out directories. This can happen if the privileges were incorrectly set, or you
replicated a copy of the production database to your development environment, or
the file system was not mounted, etc.
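
If you want to gauge how much is eligible before you purge, a quick count such as
this can help (a sketch; the seven-day cutoff mirrors the retention suggested above):

SELECT COUNT(*)
FROM fnd_concurrent_requests
WHERE actual_completion_date < SYSDATE - 7;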

Purge Signon Audit Data

This is another purge report, like the one above, only it purges the signon audit data,
which records every login to the Oracle Applications. Set the frequency and retention
equal to that of your request data purge.

Performance Tuning of Concurrent Manager Jobs

What has been described thus far is balancing job throughput. Yet, the jobs
themselves may be in need of sql tuning or resolving problems in the database. We
won't go into detail of sql tuning -- that is a typical skill set that should be handled by
the IT staff. What I want to discuss here are ways of identifying and classifying
problems within the Oracle Applications.

FND Tables Can Speak Volumes

The concurrent manager is just a scheduling system that keeps track of jobs,
parameters, scheduling information, and completion statuses of every job submitted.
By querying these tables, you can learn much about the patterns of your site,
including performance trends.

I strongly suggest that you become familiar with these tables and develop reports
against these tables. Some of the most useful tables are the fnd_concurrent_% tables.
Things to look for are which jobs are run, how many times executed, completion
status (especially "errors"), and run times for these jobs.
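
As a starting point, here is the flavor of query I mean (a sketch only -- I have
left out the join to the program definition tables that would give you the program
names, so verify the column names against your own data dictionary first):

SELECT concurrent_program_id,
       COUNT(*) runs,
       SUM(DECODE(status_code, 'E', 1, 0)) errors,
       ROUND(AVG((actual_completion_date - actual_start_date) * 1440), 1) avg_minutes
FROM   fnd_concurrent_requests
WHERE  actual_start_date > SYSDATE - 7
GROUP BY concurrent_program_id
ORDER BY avg_minutes DESC;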

Where Can I Get Help?


When it comes to looking for established help on tuning your concurrent manager
jobs, there is an excellent reference that cannot be recommended enough... the white
paper on Managing the Concurrent Managers ("How to Herd Cats") by Barbara
Matthews. See the proceedings of the OAUG Fall 1997 conference. This
presentation has been very useful to me. I have modified several of these scripts to
my clients' needs.

My favorites are daily errors, daily and weekly hogs, the min/max reports, and the job
schedule report (note that these are not the exact names that you'll find). Here are
some ideas on how to use these reports.

The daily errors report shows me every job that completed with an error status. I
review these from time to time to look for trends. The error could be caused by a bug
(so then you open a tar and look for an existing patch). The problem is usually
attributed to user error, such as bad parameter input. But don't let the error go on --
it could be an indication that the user needs some training or other help (you'll know
the user name because it provides the request id number that allows you to view all
the details and log of the job -- if you haven't purged it, yet).

The hog reports flag every job that exceeds some set time threshold (such as 20
minutes). It also sets a submission time range, such as weekdays 6:00 AM to 6:00 PM.
The idea here is that we are looking for jobs with very lengthy completion times
running during standard operating business hours (the prime OLTP window). If a job
exceeds this limit, then it is taking resources away from your OLTP users and should
either be 1) tuned to reduce execution time, or 2) moved to the "batch" processing
window or queue during the off-hours.
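
A sketch of the hog idea in sql (the threshold and window follow the example above;
again, verify the column names for your release):

SELECT request_id,
       (actual_completion_date - actual_start_date) * 1440 mins
FROM   fnd_concurrent_requests
WHERE  (actual_completion_date - actual_start_date) * 1440 > 20
AND    TO_CHAR(actual_start_date, 'HH24') BETWEEN '06' AND '17'
ORDER BY mins DESC;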

Before you tune a "hog", I would suggest that you see if a performance patch has been
issued for the program. Many times there is one, and this can save you the trouble of
tuning it -- and spare you the dilemma of introducing a customized piece of code into
your environment.

The min/max reports can be modified to sort the jobs in ascending or descending
order based upon the execution time or number of times executed. This report takes
some interpretative skills. For example, let's say that you identify the job that has the
longest execution time... say 4 hours! At first glance, this looks like a sql tuning
candidate. A closer look, though, reveals that the minimum time it took to run the
job was only 2 minutes -- and that the average time for 300 submissions in one day
was only 5 minutes! Now, what you have is some sort of exception. You should cross-
reference this job to the "hogs" report -- it should be there. Or, see if it was in the
errors. By finding the request id of this aberrant job you can review the details. You
may find that the parameters specified a much larger data set, or were incorrect, or
many other things.

If you finally determine that the job was correctly submitted and that the rest of the
evidence points to an optimized sql code set, then you have probably encountered a
"non compatible" job! In other words, the job is fine by itself, but may suffer
drastically due to contention with other jobs run at the same time. With more
detective work, you should strive to find which jobs it is incompatible with and
rearrange queues, priorities, or compatibility rules to ensure that they will not run
simultaneously.

The job schedule report shows all the scheduled jobs that keep submitting
themselves automatically. There are a few things I look for here. One is the sheer
volume of jobs that may be scheduled -- are they really needed? Often these jobs get
scheduled, then forgotten, and are no longer useful. Or is it a batch oriented job that
runs during peak time that should be rescheduled to a more practical time slot? Or is
the owner of the job still an employee? I have seen many "ghost" jobs that were once
submitted by users who have left the company -- but their reports still run,
regardless!
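
A sketch of a query for reviewing the schedule (the fnd_user join and column names
are as I know them; verify against your release):

SELECT r.request_id, u.user_name, r.concurrent_program_id,
       r.requested_start_date
FROM   fnd_concurrent_requests r, fnd_user u
WHERE  r.requested_by = u.user_id
AND    r.phase_code = 'P'
AND    r.requested_start_date > SYSDATE
ORDER BY u.user_name, r.requested_start_date;

Cross-check the user names against your current employee list to catch those
"ghost" jobs.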

One last item about scheduled jobs: see if the jobs are overlapping themselves. When
specifying the resubmission parameters, you can have a job start at a fixed time,
reschedule at a time interval calculated from when the job starts, or reschedule at a
time interval after the job completes. I often find jobs scheduled to resubmit some
time after the first job starts, like every 15 minutes. Maybe the job used to complete
in 5 minutes. Yet, as the database grows, the job may now be taking more than 15
minutes to complete. Hence, it submits the same job again when the first one hasn't
even completed yet! This causes contention that degrades the performance of both
jobs, and the cycle repeats itself and degrades further and further. I would
suggest that you schedule jobs to resubmit themselves on a time delay after the
previous job completes!

I Didn't Know Those Scripts Were There!

There are some other existing scripts which may be of benefit to you, but I must first
put in a very strong disclaimer: CAUTION: Do not blindly run these scripts without
analyzing their purpose, impact, and possibly consulting with Oracle Support! Test
them in your development environment, first.

I must confess that I do not fully understand why all these files are here. I suspect
that many are used in the installation/upgrade and operation of the applications. I have
not found any dedicated documentation of these scripts, other than what I can see in some of
the script text. Yet, I have used some of these scripts to great satisfaction -- or at
least to learn about the usage of certain tables. These scripts are in $FND_TOP/sql.
The ones of interest for the concurrent managers are afcm*.sql and afrq*.sql. These
range from reports on the concurrent managers, locks, gridlock, etc. You can also find
useful scripts in $AD_TOP/sql, too. Again, BE CAREFUL!

Things to Avoid Regarding the Concurrent Managers

These following tips seem to be common sense, but I am still amazed at how often I
see these abuses and misunderstandings, so I will mention them...

Use of the Reprint Option: Do not allow your users to run jobs multiple times in order
to recreate the same output. They can view it offline or do a reprint on a previously
run job. There are also third party tools that give more flexibility in viewing and
formatting the outputs.

Use Query Enter to Find Your Jobs: If a user cannot see their job on the immediate
screen, they should scroll down or enter a query to further define the job that they
are looking for. I have seen sites where the user couldn't find the job they submitted
on the first screen, so they would submit it again!

Whoa! on the Refresh Screen: It is very, very common to have your whole company
just hitting that refresh key on the concurrent request screen in an effort to see their
job go into the queue or its completion status -- especially when performance is
already suffering! But this only contributes to the problem! This is one of the most
common queries possible. For one thing, the internal manager already scans the
fnd_concurrent_requests table at every pmon interval (the concurrent manager pmon,
not to be confused with the Oracle background pmon process) for the next set of
pending jobs to be processed.

Discourage Multiple User Logins: Multiple logins by the same user to get more work
done often just add trouble to an already overstressed system. Sometimes this is
unavoidable because the user wears different "functional" hats and must view
different screens/data within multiple responsibilities. Some users find it annoying to
log in and navigate to particular work screens, so they keep idle sessions active until
they need them. Try to educate your users that they consume resources (memory,
CPU, etc.) every time they do this. In the newer NCA versions, navigating to
different screens and responsibility areas is made easier via shortcuts and should
help to eliminate this abuse.

Eliminate Redundancy of Similar Jobs: Users often submit the same job multiple times
in the same time frame, distinguished only with minor changes to the parameters.
These jobs hit the same tables over and over again and can even create locks and
resource conflicts among themselves. Many times they would find the overall
throughput to be better if they single threaded the jobs one after the other. This can
be managed by user education or by the SYSADMIN single threading the queue or
placing incompatibility rules that limit the same program to run with itself.

Another variation of this problem is having different users running the same or similar
jobs at the same time. It may be better for the SYSADMIN to schedule these jobs to
resubmit themselves in the concurrent manager at predetermined intervals and take
away the ability for the end-users to submit the jobs, themselves. This should reduce
the frequency and burden on the system, yet allow the users to still have the jobs and
processes run in a timely manner for their use.

4. Utilities for Maintaining the Applications

There are many tools and utilities available to you for maintaining and upgrading your
applications. Some are well documented, others are more mysterious. I'll describe
some of the major utilities. Note that most of these utilities are in $AD_TOP/bin or
$FND_TOP/bin.

Patching with the Adpatch Utility

This utility is used for applying patches that you receive from Oracle Support. When
you uncompress the patch from Oracle, you will get at least one driver file
(patch.drv), a readme.txt, and new code to patch your applications. The patch.drv
file is read by the adpatch utility, which performs a multitude of tasks. It basically
checks the versions of your code to make sure that the patch code is more recent,
moves the new code to the proper directories while making a copy of the original
suffixed with an "O", updates the library file, links object code to make new
executables, compiles or generates code, and logs all of its activities.

All of these tasks are performed by other utilities in the $AD_TOP/bin directory,
including adlib*, admvcode, adrepgen, adrelink, adfrmgen, etc. Look at the log file
for your adpatch task and you will see the utilities that were called. These utilities
will match up to the operative key words in the patch.drv file.

You should ALWAYS review the readme.txt file prior to applying a patch. You need to
verify that the patch is going to do what you intended, and see if there are any other
manual tasks to perform either before or after applying the patch. If sql scripts are to
be performed, the patch.drv usually moves the sql script to the directory but does not
execute it. The readme.txt file will direct you to run adpatch again, this time
specifying the db*.drv file as the patch input. This will execute the sql scripts.

Patching Suggestions and Tips

Always make a backup of the directories that will be affected prior to applying a
patch -- a patch can be a very nasty thing to roll back! Even though admvcode will
make a backup copy of most files suffixed with a capital "O", it is not very reliable
for backing out a patch. This is because sometimes patches are "bundled" up with other
patches and the affected files may be patched multiple times. Hence, the backup file
"O" may actually be a backup of the backup! Without your own backup, you cannot
roll back to the original.

Regardless of what the readme.txt file says, to really be certain which files and
activities can occur, look at the patch.drv and db*.drv files. If it isn't in the patch
driver file, then it isn't going to happen.

When prompted for the patch log file, do not take the default name "adpatch.log". I
recommend that you name the log after the patch/bug number, such as <bug number>.log.
This enables you to quickly review the results of your patch without stumbling through
reams of previous patches.
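
To put the pieces together, a hypothetical patch session under these conventions
might look like this (the bug number and directory are illustrative only):

cd $APPL_TOP/patch/987654
# review readme.txt and patch.drv before going any further
adpatch
# at the log file prompt, answer 987654.log instead of adpatch.log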

A running log of patches applied resides in $APPL_TOP/applptch.txt. DO NOT DELETE
THIS FILE! It is invaluable when determining which patches have been applied, when,
and what actually happened in the patch. I am finding this file even more critical in
considering the NCA upgrades and possible Y2K upgrades -- operations where you may
have to lay down a new baseline of the applications and reapply your patches to
recreate your current configuration!

To learn more about the patching process and several other utilities, investigate the
log and patch.drv files. Many of these utilities can be run by themselves. You may find
uses for (or at least gain an understanding of) these utilities.

Adadmin Utilities

This is an interactive menu available to you to maintain several aspects of your Oracle
Applications environment. The menu divides into two categories: Database and File
maintenance.

The database screen gives you options for creating or maintaining database
structures, data, or privileges. Most of these activities are encountered during
installation or upgrades. You can run many of these without adverse effect -- but you
should seek the help of Oracle Support if you are not familiar with them. Be VERY
careful not to inadvertently run the Multi-Org option unless you really mean it! Some
of these options cannot be run while your database is in archivelog mode (they are
intended for the installation or upgrade process). You SHOULD be running in
archivelog mode if this is your production instance. Many of these menu options can
be run standalone by the corresponding utility in $AD_TOP/bin or $FND_TOP/bin.

The file maintenance screen does not manipulate the database structures or data --
just operating system files. Most of these options were intended for the installation or
upgrade process. You should be able to run all of these operations without
consequence -- yet, I wouldn't advise it unless you are sure of the ramifications and
your needs. Again, many of these menu options can be run standalone from the
corresponding programs in $AD_TOP/bin or $FND_TOP/bin.

When I go to a new client, two of the utilities that I would like to run from adadmin
are 1) Verify that all files exist (including extras), and 2) Verify all database objects.
Note: This second option to verify database objects no longer exists past version 10.5,
but there are other ways to do this.

The file report looks at the installation driver files and reports any missing files that
are expected to be found somewhere in $APPL_TOP. I look for missing files and verify
that we have a good, complete installation. I also look at the extra files to find
opportunities for cleanup and customizations! I'll speak more on customizations, but I
am particularly interested in whether or not the customizations are done according to
Oracle's guidelines.

The database object report would show missing, extra, and modified database
objects. It would compare the objects to the *.odf files in the application top
directories using the odfcmp utilities. Since 10.6, this functionality is gone. You can
manually run these reports using the adodfcmp utility in $AD_TOP/bin. Type in
adodfcmp by itself to get the parameters, or look in the installation manual to get
more information on this utility (and many, many more).

There is a caveat to keep in mind when reviewing these reports, though. Finding
discrepancies from these utilities doesn't necessarily mean that something is wrong.
What you are looking at is comparisons to the base installation. Patches (or
customizations) can be reasons why there are differences. While I've seen some
patches upgrade the driver files, many do not. So, you will have to scrutinize the
differences. Still, these can be some very beneficial tools in maintaining your
environment.

Installation and Upgrade Utilities

This is an advanced topic, so I do not want to spend much time here. Yet, I do want
to draw attention to the popular utilities -- many of which overlap the adadmin
utilities.

The installation starts with unloading the software from the media with the adunload
utility (actually, you run a script file which runs adunload for you to get the base
utilities -- but you can do a manual unload yourself with this tool).

Trivia: Ever wonder where the *.inp files went under $FND_TOP/forms? They were
actually downloaded to your system by adunload in the installation process. When
autoinstall generated your forms, it deleted the *.inp source files for the
$FND_TOP/forms -- and only these forms. Why? Because you're not supposed to be
messing with these files! However, if you ever delete the FND form executables, you
can run the adunload utility to get the *.inp files from the base installation media.

After adunload gets your source code to the $APPL_TOP directories, you will
eventually proceed with the installation or upgrade process using the adaimgr
(autoinstall) utility. This is a menu driven utility which will ask you several setup
questions. Eventually you will get to the "upgrade database objects" step in
autoinstall. When this starts, it will read the necessary driver files (*.drv) which then
calls several other utilities in the proper sequence, depending upon the products you
have purchased and your answers to the adaimgr setup menu questions. These
installation processes are run by the adworker background processes. You monitor
these processes via the adctrl program.

The Adctrl Utility

The adctrl utility is one that you will use with adpatch, as well as autoinstall. In the
newer versions of Oracle, the patches can now be multiplexed -- multiple processes
running concurrently. A temporary table called fnd_install_processes is created to
keep track of the workers, their sequencing, and statuses. Through adctrl you can
manage or view the status of these jobs. When the patch completes successfully, the
fnd_install_processes table will be dropped. However, if one or more of the workers
fail, the status will be shown and you will be required to resolve the problem. You
then go to the corresponding adworker log file.

Adworker Log Files

The adworker log files are found in $APPL_TOP/install/log. They will be numbered as
adworker01.log - adworkernn.log, depending on how many concurrent processes you
specified at the prompt. Find the log corresponding to the process(es) flagged as
"failed" in the adctrl menu. Go to the bottom of the log file, find the error, and
resolve the problem.

If you were able to resolve the problem before the remainder of the other adworkers
became dependent upon the failed adworker(s), go back into adctrl and use the menu
options to change the status to "fixed" and restart the failed adworker(s). The process
continues until it finishes or encounters more problems requiring you to follow the
same procedures mentioned above.

However, if all the workers failed, or came to a point where they could not proceed
until the dependent failed adworker(s) were resolved, the adpatch (or adaimgr) process
may have shut down. In this case you will need to restart the process.

Restarting Adadmin Utilities

Some utilities such as adrelink, adaimgr, and adpatch may abort or shut down prior to
completing all of their steps. In this case, you need to refer to the adworker log files
to determine the problems and resolve them. If you were able to resolve them, simply
reinitiate the utility, like adpatch. Upon startup, it will check both the restart files
found in $APPL_TOP/install/restart and for the presence and contents of the
fnd_install_processes table. If the failed workers now have a status of "fixed, restart",
the appropriate adworkers are reinitiated and resume progress as tracked in the
restart files.

If you decide to completely abort the process and start over (careful, this could have
adverse effects), then answer the prompts when restarting the utility that you do NOT
want to resume the previous unfinished run. (As a safeguard, you will also be
prompted to answer the question again, phrased with the opposite logic.) You may then
see an error where the process cannot start because it found the presence of the
fnd_install_processes table, hence it could not create it and the job fails. That's okay.
Log in to sqlplus as applsys and manually drop the fnd_install_processes table.
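
For example (run as applsys, and only after you are certain no adpatch or adworker
sessions remain):

DROP TABLE fnd_install_processes;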

Please refer to your utility and installation manuals for more complete instructions on
how to use these programs. These are mentioned here to illustrate that the
application installation, maintenance, and patching procedures are not magic.
Rather, they are logical procedures which call upon several utilities within the
$AD_TOP/bin and $FND_TOP/bin directories. Become familiar with them. Here's a
recap on important utilities and programs...

Important Utilities and Tools


You should become familiar with the following tools and utilities: the adadmin menu
for both database and $APPL_TOP file maintenance tasks; the important tools under
$AD_TOP/bin, including adaimgr, adunload, adpatch, adrelink, adctrl, adfrmgen, and
adodfcmp; and other directories of interest, including $FND_TOP/sql and $AD_TOP/sql.

The scripts under $AD_TOP/sql are interesting, too, but I'll give the same disclaimer
as issued for the $FND_TOP/sql scripts: CAUTION: Do not blindly run these scripts
without analyzing their purpose, impact, and possibly consulting with Oracle Support!
Test them in your development environment, first.

Most of these AD*.sql scripts are your basic DBA tuning and reporting scripts. There
are two scripts, though, which alter your database. Let's look at these.

ADXANLYZ.sql creates another script which does an "analyze table estimate statistics
sample 20%". Now, the Oracle Financials database MUST be set to RULE-based
optimization in the init.ora file. I offer the following hypothesis: the Oracle
Applications originally evolved prior to COST-based optimization. Hence, the code was
originally tuned with RULE optimization in mind. However, as the Applications mature
we are seeing more and more stored procedures and code (just look at how the
system tablespace expands) and the use of HINTS. Some of these hints override RULE-
based optimization. In order for some code to take the best optimization path, you
need the data statistics.

I do not recommend running this script "carte blanche" without evaluating the
benefits or consequences. I have found that some applications improve, while others
suffer from the statistics. Also, many developers assume that they should be writing
code to a RULE based database, as it is configured in the init.ora. Bottom line: Use
sparingly and run explain plan to see the best options. I have found a combination of
both works -- some code needs the statistics deleted. If you do analyze your tables,
remember that you are now taking on a process which needs to be updated on a
regular basis to be of benefit.
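
For illustration, the generated commands boil down to statements like these (the
table name here is just an example):

ANALYZE TABLE ar.ra_customers ESTIMATE STATISTICS SAMPLE 20 PERCENT;
ANALYZE TABLE ar.ra_customers DELETE STATISTICS;  -- to back the statistics out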

The Oracle Applications also suggest pinning your procedures (again, more and more
code is in the form of stored procedures). The ADXGNPIN.sql script generates an all-
inclusive pinning script for every database object. ADXCKPIN.sql reports the pinned
objects and execution statistics.

In order to use this correctly, you need to monitor and adjust your shared_pool
accordingly. I would advise altering the script to pin the large and popular packages,
only. Again, this must be monitored and tuned within your shared_pool and
shared_pool_reserved SGA parameters.
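
For example, the generated pinning calls look something like this (this requires the
dbms_shared_pool package to be installed, and the package pinned here is illustrative):

EXECUTE dbms_shared_pool.keep('APPS.FND_CONCURRENT', 'P');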

Customizing the Oracle Applications

One of the highly sought features of the Oracle Applications is the ability to customize
-- and most everyone does. The following advice is perhaps the most important in this
entire paper. FOLLOW ORACLE'S RECOMMENDED PROCEDURES for CUSTOMIZATIONS!
Ignoring these guidelines will surely buy you grief and cost you more money in the
future. If you are new to the applications, be assured that you will be facing upgrades
every 18 to 24 months to stay current with technology. If you have not followed these
guidelines, then I strongly recommend that you start bringing your environment into
compliance, today.

You can find these guidelines in the AOL Reference Manual. The major points are to
create a separate application top(s), separate schema(s), and follow the registration
process for your custom objects and code. For example: Create a schema named
CUSTOM, register the schema, and create a $CUSTOM_TOP directory which will be
added to your APPLSYS.env file. If you have extensive customizations, then I suggest
that you make separate custom schemas and directories for each module, such as
$C_AR_TOP (for "custom" AR applications).
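
At the OS level, carving out such a custom top might look like this (a sketch --
the paths and directory names are illustrative, and the registration steps themselves
are done per the AOL Reference Manual):

# create a custom top mirroring the standard product tree
mkdir /u01/appl/custom
cd /u01/appl/custom
mkdir bin forms sql srw log out
# then define $CUSTOM_TOP in APPLSYS.env and register the application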

If you are altering the base applications and keeping them in the same installed
directory tops, or putting customized database objects in the base schemas, then you
are indeed in violation of the prescribed methods. You will certainly be facing a
terrible time in your upgrades. When Oracle installs or upgrades its applications (even
in a simple patch), it assumes that these standard schemas and directories are their
products. The Oracle applications may completely drop or overwrite your custom
database objects and code rendering your applications to an unusable (and certainly
unsupported) state! Please take heed to this warning. As the Oracle Fin-Apps DBA, you
must see to it that your developers comply.

Document Customizations and Tars

You should make sure that customizations are documented. These will be invaluable
in your next upgrade, believe me! As complicated as an installation or upgrade can
be, it is only compounded exponentially by your customizations.

Keep track of problems and tars in some kind of tracking system (spreadsheet, text
file, sophisticated logging application, etc.). I have no problem in allowing developers
to open up tars, but I would strongly suggest that it is ONLY the Oracle Fin-Apps DBA
who applies any subsequent patches! The reasons include keeping one central area of
control and knowledge for all changes to the Oracle code and data. You should be
making sure that an adequate backup of the code and/or database is available prior
to applying patches. As discussed earlier, sometimes the only way to completely
rollback a significant patch is a restore of code and/or the database. With one central
person in charge of patches (namely, the DBA), you will appreciate the precautions
necessary for a potential all-nighter. Don't let a developer have the luxury of making
your life miserable because they were impatient.

Oracle Applications Security and Access

The SYSADMIN has the privilege of setting up users and associating them to roles and
responsibilities. This is a heavy burden, just like setting up users and their access in
the database.

Myth: Setting up users in the Oracle Applications creates schemas for them in the
Oracle database. It does not (nor does it require you to)! You are merely creating an
account for them in the application, which makes an entry in the fnd_user table,
stores an initial password in the fnd_user_password table, and assigns roles to them
from fnd_responsibilities.

Often the Oracle Fin-Apps DBA does not know the functional responsibilities of the
end users within the company. Do not give out "super user" or "SYSADMIN" privileges
freely -- that's like giving out DBA privileges to anyone out there (and you should not
be doing that, either!).

Remember, you are protecting the company's assets -- information assets. Information
is power and money. Protect it wisely. Approach this as the real possibility that you
could be audited. After all, you are the keeper of the keys to this application and its
data!

I suggest that a formal process be set up where managers of the financial and
manufacturing business groups must "sign off" on their employees and indicate which
roles they need. If everyone comes back as "super user", then try to educate that
manager. If you don't make any progress, then at least you have their signature and
authorization on paper to show the auditors!

This process would probably be best implemented as part of HR's new-hire process.
Conversely, there should also be an exit process for terminating or transferring
employees. Then, you could put an end-date on their account or change their
responsibilities. Otherwise, you end up with those "John Doe" scheduled reports in the
concurrent managers that keep running in your system sucking up valuable resources
for months after the employee has terminated!

$APPL_TOP Security and Access

The code is typically owned by applmgr. This can be changed; just keep it consistent
throughout the directories. Applmgr does not need to belong to the database group (I
advise that it not be). The applmgr group should NOT be shared for write access with
your developers, either! Remember, I strongly suggested that the Oracle Fin-Apps DBA
should be the only person who can alter these files (and that is typically done by
logging in as applmgr). The only exception that is acceptable to me is giving write
access to the $CUSTOM_TOP -- and ONLY in the development environment.

We discussed the taboo of writing custom code into the base $APPL_TOP directories.
You should be using one or more "custom" top directories for your custom code. After
a piece of custom code is developed and accepted in development, then it should
migrate through sound change-control procedures to test or production environments.
Hence, the only place that may be acceptable (depending upon your environment's
policies) should be the development $CUSTOM_TOP directories.

The only things that should be modifying the base $APPL_TOP directories are patches
that the Oracle Fin-Apps DBA applies!

Modifying AOL Objects

When it comes to modifying AOL objects (such as defining new responsibilities, menu
paths, alerts, zooms, etc.), the only supported method has been to manually type these
into the AOL forms. Following good change control procedures, you would implement
and perfect these entries in your development environment. When you were satisfied
with your results, you would then have to manually retype these entries in your test
and production environments. Using sql to replicate these entries directly to the FND
tables is unsupported by Oracle because of the risk of bypassing referential integrity
controls.

I have implemented a software tool that can automate this and is supported by
Oracle: the Object Migrator product by ChainLink Technologies, a certified Oracle
partner. They have other products specifically for use in Oracle Financial
Application environments and product lines. I have heard and seen many good things
about them.

Importing or Converting External Data

Discourage the practice of inserting data directly into the Oracle application base
tables. In most cases (if not all), these are unsupported actions. What is
recommended is to load the data into the interface tables (%_interface) and use the
Oracle procedures to process this data. The operations and referential integrity can
be complex. Consult your documentation and Oracle Support for details.
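
For instance, loading a journal line through the General Ledger interface rather than
the base tables looks something like this (a heavily abbreviated sketch -- the column
list and values are illustrative, so check the GL Technical Reference for your release
before using anything like it):

INSERT INTO gl_interface
    (status, set_of_books_id, accounting_date, currency_code,
     date_created, created_by, actual_flag,
     user_je_source_name, user_je_category_name,
     segment1, segment2, segment3, entered_dr, entered_cr)
VALUES
    ('NEW', 1, SYSDATE, 'USD',
     SYSDATE, 0, 'A',
     'Manual', 'Adjustment',
     '01', '110', '7730', 100, NULL);

After the load, you run Journal Import from the applications to validate the data and
move it into the base GL tables -- and the import will, of course, expect the
completed batch to balance.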

I would also discourage the use of triggers on the base Oracle tables for the same
reasons. Triggers can be very difficult in upgrades. Often, they are disabled or
dropped, anyway.

Considerations for Using Partitioned Server

Partitioned server is a configuration in which you utilize two different versions of the
Oracle RDBMS ($ORACLE_HOME). One version is linked with your $APPL_TOP code, while
the other version runs the database. For example: $APPL_TOP links with a 7.3.3
$ORACLE_HOME while the database engine is run by an 8.0.5 $ORACLE_HOME.
Remember, this configuration must pass what I call the "Holy Trinity Certification", or
blessing, for the Oracle Applications. That is, the versions of the applications, the
OS, and the RDBMS are certified together.

As you can imagine, the Oracle Applications is a huge set of code that requires much
time and effort in regression testing. Hence, it is usually a generation behind the
most current Oracle RDBMS technology. As you can see, there are still very old Oracle
tools in use with the version 10.x applications, including CRT, Forms 2.4 (just recently
upgraded from 2.3), Oracle Reports, etc.

Oracle has implemented a "partitioned server" architecture which allows us to take
advantage of new RDBMS technology. The difference is that the Oracle Applications,
or $APPL_TOP, must be linked with code from the RDBMS, or $ORACLE_HOME. In a
partitioned server architecture, the applications still link with the older certified
version of the RDBMS. With the installation of an interoperability patch, the database
engine can run off a more recent release of the RDBMS -- which is where the more
significant performance and feature rich solutions can be enjoyed.

A "physical" partitioned server architecture is a variation of the partitioned server


configuration explained earlier. The difference is that the application code
($APPL_TOP) resides on a different server than the database. The applications
communicate to the database via sql*net. This solution can aid in maximizing
resources by allowing the database to reside on your more powerful server which can
be configured and optimized for a database server configuration, while the
applications can reside on a less powerful server with different configuration
considerations. Keep in mind that you must now accommodate more sql*net tuning
issues. However, the NCA applications thrive upon this multiple tier architecture,
anyway.
