PURPOSE
This note explains which diagnostics are required when troubleshooting Streams
replication problems.
CONTENT
prompt
prompt ++ CAPTURE PROCESSES IN DATABASE ++
col capture_name HEADING 'Capture|Name' format a30 wrap
col status HEADING 'Status' format a10 wrap
col QUEUE HEADING 'Queue' format a25 wrap
col RSN HEADING 'Positive|Rule Set' format a25 wrap
col RSN2 HEADING 'Negative|Rule Set' format a25 wrap
col capture_type HEADING 'Capture|Type' format a10 wrap
col error_message HEADING 'Capture|Error Message' format a60 word
col logfile_assignment HEADING 'Logfile|Assignment'
col checkpoint_retention_time HEADING 'Days to |Retain|Checkpoints'
col Status_change_time HEADING 'Status|Timestamp'
col error_number HEADING 'Error|Number'
col version HEADING 'Version'
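The SELECT that these COLUMN settings format was lost from this copy of the note; a sketch against the documented DBA_CAPTURE view, matching the headings above, would be:

```sql
SELECT capture_name, status,
       queue_owner||'.'||queue_name QUEUE,
       rule_set_owner||'.'||rule_set_name RSN,
       negative_rule_set_owner||'.'||negative_rule_set_name RSN2,
       capture_type, logfile_assignment,
       checkpoint_retention_time, status_change_time,
       error_number, error_message, version
  FROM dba_capture;
```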
exec dbms_capture_adm.start_capture('<CAPTURE_NAME>');
If the capture process is ABORTED, check the ERROR_NUMBER and ERROR_MESSAGE columns of
DBA_CAPTURE for the error that caused the process to abort.
Check the Logminer log table (SYSTEM.LOGMNR_LOG$) for the last log file for each thread,
then look at the operating system files for the next log file in sequence. Typically, this is the log
file that cannot be found.
If this doesn't help, turn on logminer and capture tracing, restart the capture, and look at
the capture trace file in the bdump directory.
ALTER SYSTEM SET EVENTS '1349 trace name context forever, level 7';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level','127');
exec dbms_capture_adm.start_capture('yourcapturename');
ORA-1
ORA-1347 Supplemental log data no longer found
This error indicates that minimum supplemental logging is not enabled for the instance. This
occurs most commonly on 9iR2 RAC instances. When configuring supplemental logging for RAC
in 9iR2, it is necessary to issue the ALTER DATABASE command at each instance in the cluster
BEFORE creating the capture process. In 10g, supplemental logging can be initiated from a
single instance so it is no longer necessary to issue the ALTER DATABASE ADD SUPPLEMENTAL
LOG DATA command at multiple instances. After issuing the ALTER DATABASE ADD
SUPPLEMENTAL LOG DATA, be sure to issue an ALTER SYSTEM ARCHIVE LOG CURRENT or
ALTER SYSTEM SWITCH LOGFILE.
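For example, the sequence described above (enable minimal supplemental logging, then force the change into the redo stream) is:

```sql
-- Enable minimal supplemental logging for the database
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Force a log switch so the change takes effect in the archived redo
ALTER SYSTEM ARCHIVE LOG CURRENT;
```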
This error can also be signaled if supplemental logging has been dropped, either explicitly or
implicitly. ALTER DATABASE DROP SUPPLEMENTAL LOG DATA explicitly disables supplemental
logging. If this command is issued, the capture process will abort with an ORA-1347 error.
Supplemental logging can be implicitly disabled by DML statements that use a BUFFER hint.
The BUFFER hint is frequently used in TPCC benchmarks. Logging can also be disabled when
using a TEMPORARY TABLE and CLOB in combination. This is reported as bug 3172456 and
fixed in 9.2.0.6.
This error indicates that there are not enough processes available to start the capture process.
Check the following:
1. Verify that the init.ora parameter parallel_max_servers is sufficient to start the capture and
apply processes. For each capture defined on the database, the number of processes required
is 2 + the parallelism defined for the capture. If the capture parallelism parameter is set to 1
(the default), then 3 processes are required to start the capture; with a capture parallelism
value of 3, 2 + 3 = 5 processes are required.
2. Check whether the Database Resource Manager is used for this database. Check for any plans
that have limits set for parallel processes by running the following:
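The query referred to here is missing from this copy; a sketch against DBA_RSRC_PLAN_DIRECTIVES (the check of the active plan is an added convenience, not part of the original text):

```sql
-- Which resource plan, if any, is currently active
SELECT name, value FROM v$parameter WHERE name = 'resource_manager_plan';

-- Plan directives that limit parallel processes
SELECT plan, group_or_subplan, parallel_degree_limit_p1
  FROM dba_rsrc_plan_directives
 WHERE parallel_degree_limit_p1 IS NOT NULL;
```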
If this is the cause, you may need to disable the plan or set the parallelism value high enough
for the SYSTEM_PLAN.
Capture abort with ORA-23605 can occur for one of the following reasons:
1. Invalid value for the Streams parameter FILE_NAME indicates an inconsistency between the
capture logminer session and existing logminer sessions. Generally, the registered archived
logs view is empty when this occurs.
This error can occur when attempting to add a logfile to a logminer session. To confirm this
problem run the following query:
select capture_name, logminer_id from dba_capture c
where not exists
(select 1 from system.logmnr_session$ s where s.session# = c.logminer_id);
If rows are returned, this is most likely the problem. Check whether the customer
attempted to remove metadata at some point or performed an incomplete drop of the
capture process. To fix, drop the existing capture process with the non-existent logminer
session, then recreate the capture process.
2. Attempting to use a dictionary build from a previously deleted logfile (bug 5278539, fixed in
10.2.0.4, 11.1). In this situation, there are multiple entries in the V$ARCHIVED_LOG view for
the same logfile, with the name being NULL for deleted logfiles. The patch avoids checking
DELETED entries in V$ARCHIVED_LOG.
If the capture process is ENABLED check its current status using the following query:
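The status query itself is missing from this copy; a minimal sketch using the documented V$STREAMS_CAPTURE view would be:

```sql
-- Current state of each running capture process
SELECT capture_name, state, sid, serial#
  FROM v$streams_capture;
```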
If the status of the capture process is CAPTURING CHANGES, verify that messages are being
enqueued into the capture queue.
The following query can be used to verify if capture is enqueuing messages to the queue:
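The query is missing here; a sketch against V$STREAMS_CAPTURE (run it several times and compare counts) would be:

```sql
-- TOTAL_MESSAGES_ENQUEUED should keep increasing while capture is enqueuing
SELECT capture_name, total_messages_enqueued,
       TO_CHAR(enqueue_time,'HH24:MI:SS MM/DD/YY') last_enqueue_time
  FROM v$streams_capture;
```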
prompt
prompt ++ CAPTURE RULES BY RULE SET ++
col capture_name format a25 wrap heading 'Capture|Name'
col RULE_SET format a25 wrap heading 'Rule Set|Name'
col RULE_NAME format a25 wrap heading 'Rule|Name'
col condition format a50 wrap heading 'Rule|Condition'
set long 4000
REM break on rule_set
select c.capture_name, rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET ,
rsr.rule_owner||'.'||r.rule_name RULE_NAME, r.rule_condition CONDITION
from dba_rule_set_rules rsr, DBA_RULES r ,DBA_CAPTURE c
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and
rsr.rule_set_owner=c.rule_set_owner and rsr.rule_set_name=c.rule_set_name and
rsr.rule_set_name in
(select rule_set_name from dba_capture) order by rsr.rule_set_owner,rsr.rule_set_name;
set serveroutput on
declare
overlap_rules boolean := FALSE;
verbose boolean := TRUE;
cursor overlapping_rules is
select a.streams_name sname, a.streams_type stype,
a.rule_set_owner rule_set_owner, a.rule_set_name rule_set_name,
a.rule_owner owner1, a.rule_name name1, a.streams_rule_type type1,
b.rule_owner owner2, b.rule_name name2, b.streams_rule_type type2
from dba_streams_rules a, dba_streams_rules b
where a.rule_set_owner = b.rule_set_owner
and a.rule_set_name = b.rule_set_name
and a.streams_name = b.streams_name and a.streams_type = b.streams_type
and a.rule_type = b.rule_type
and (a.subsetting_operation is null or b.subsetting_operation is null)
and (a.rule_owner != b.rule_owner or a.rule_name != b.rule_name)
and ((a.streams_rule_type = 'GLOBAL' and b.streams_rule_type
in ('SCHEMA', 'TABLE') and a.schema_name = b.schema_name)
or (a.streams_rule_type = 'SCHEMA' and b.streams_rule_type = 'TABLE'
and a.schema_name = b.schema_name)
or (a.streams_rule_type = 'TABLE' and b.streams_rule_type = 'TABLE'
and a.schema_name = b.schema_name and a.object_name = b.object_name
and a.rule_name < b.rule_name)
or (a.streams_rule_type = 'SCHEMA' and b.streams_rule_type = 'SCHEMA'
and a.schema_name = b.schema_name and a.rule_name < b.rule_name)
or (a.streams_rule_type = 'GLOBAL' and b.streams_rule_type = 'GLOBAL'
and a.rule_name < b.rule_name))
order by a.rule_name;
begin
for rec in overlapping_rules loop
overlap_rules := TRUE;
dbms_output.put_line('+ WARNING: The rule ''' || rec.owner1 || '''.''' || rec.name1 || ''' and
''' || rec.owner2 || '''.''' || rec.name2 || ''' from rule set ''' || rec.rule_set_owner || '''.''' ||
rec.rule_set_name || ''' overlap.');
end loop;
if overlap_rules and verbose then
dbms_output.put_line('+Overlapping rules are a problem especially when rule-based
transformations exist.');
dbms_output.put_line('+Streams makes no guarantees of which rule in a rule set will evaluate
to TRUE,');
dbms_output.put_line('+Thus overlapping rules will cause inconsistent behavior, and should be
avoided.');
end if;
dbms_output.put_line('+');
end;
/
If the capture is PAUSED FOR FLOW CONTROL, verify whether the messages that have been
enqueued have also been browsed, using the following queries:
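The queries themselves are missing from this copy; a sketch against the buffered queue views (column names assumed per the documented 10.2 views) would be:

```sql
-- Messages currently in each buffered queue vs. spilled to disk
SELECT queue_schema, queue_name, num_msgs, spill_msgs, cnum_msgs
  FROM v$buffered_queues;

-- Per-subscriber dequeue progress
SELECT queue_name, subscriber_name, num_msgs, cnum_msgs, total_dequeued_msg
  FROM v$buffered_subscribers;
```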
Check the current number of messages in the queue and compare it to the number of
unbrowsed messages. If messages have been browsed and are not getting removed from the
queue then verify if aq_tm_processes is set explicitly to 0 in the source or target databases:
declare
mycheck number;
begin
select 1 into mycheck from v$parameter where name = 'aq_tm_processes' and value = '0' and
(ismodified <> 'FALSE' OR isdefault='FALSE');
if mycheck = 1 then
dbms_output.put_line('+ERROR: The parameter ''aq_tm_processes'' should not be explicitly
set to 0!');
dbms_output.put_line('+Queue monitoring is disabled for all queues.');
dbms_output.put_line('+To resolve this problem, set the value to 1 using: ALTER SYSTEM SET
AQ_TM_PROCESSES=1; ');
end if;
exception when no_data_found then null;
end;
/
If messages are not being browsed, check the status of the propagation and apply processes.
Note.746247.1 Troubleshooting Streams Capture when status is Paused For Flow Control
Turn on logminer and capture tracing and restart capture. Look at the capture trace file in the
bdump directory.
ALTER SYSTEM SET EVENTS '1349 trace name context forever, level 7';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level','127');
exec dbms_capture_adm.start_capture('yourcapturename');
If the archived log is not registered, check the alert.log for any errors during registration,
and try registering it manually using:
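The command was dropped from this copy; the documented form for registering an archived log with a capture process is (path and capture name are placeholders):

```sql
ALTER DATABASE REGISTER LOGICAL LOGFILE '<full_path_of_archived_log>'
  FOR '<capture_name>';
```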
The following query can be used to determine the message enqueuing latency of each capture
process on the database:
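The latency query is missing here; a sketch using V$STREAMS_CAPTURE, computing the gap between message creation and enqueue, would be:

```sql
SELECT capture_name,
       (enqueue_time - enqueue_message_create_time)*86400 latency_seconds,
       TO_CHAR(enqueue_message_create_time,'HH24:MI:SS MM/DD/YY') message_creation,
       TO_CHAR(enqueue_time,'HH24:MI:SS MM/DD/YY') enqueue_time,
       enqueue_message_number
  FROM v$streams_capture;
```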
The following query can be used to identify any long-running transactions in the
system:
prompt
prompt ++ Current Long Running Transactions ++
prompt Current transactions open for more than 20 minutes
prompt
col runlength HEAD 'Txn Open|Minutes' format 9999.99
col sid HEAD 'Session' format a13
col xid HEAD 'Transaction|ID' format a18
col terminal HEAD 'Terminal' format a10
col program HEAD 'Program' format a27 wrap
select t.inst_id, sid||','||serial# sid,xidusn||'.'||xidslot||'.'||xidsqn xid,
(sysdate - start_date ) * 1440 runlength ,terminal,program from gv$transaction t, gv$session s
where t.addr=s.taddr and (sysdate - start_date) * 1440 > 20;
Check the alert.log for any messages related to large transactions. The alert.log will show
information of when the large transaction was identified and also if it has been committed or
rolled back.
If no commit or rollback for the transaction has been reported in the alert.log, the
transaction is still running.
Does the Propagation Use the Correct Source and Destination Queue ?
Make sure the propagation has been configured properly to propagate messages from the
correct source queue to the correct destination queue, and using a valid database link.
- Queue-to-Database Link : The propagation is defined by a source queue and a database link
pair. This is the default. The QUEUE_TO_QUEUE parameter is set to FALSE in this case.
- Queue-to-Queue : The propagation is defined by a source queue and destination queue pair.
The QUEUE_TO_QUEUE parameter is set to TRUE.
From the Healthcheck report this can be visualized in section "++ PROPAGATIONS IN
DATABASE ++"
For a propagation job to propagate messages, the propagation must be enabled. If messages
are not being propagated by a propagation as expected, then the propagation might not be
enabled.
Check:
- Whether there are any propagation errors, the date of the last error, and the error
number / error message of the last error.
From the Healthcheck report this can be visualized in section "++ SCHEDULE FOR EACH
PROPAGATION++"
prompt
COLUMN PROPAGATION_NAME Heading 'Propagation|Name' format a17 wrap
COLUMN START_DATE HEADING 'Start Date'
COLUMN PROPAGATION_WINDOW HEADING 'Duration|in Seconds' FORMAT
9999999999999999
COLUMN NEXT_TIME HEADING 'Next|Time' FORMAT A8
COLUMN LATENCY HEADING 'Latency|in Seconds' FORMAT 9999999999999999
COLUMN SCHEDULE_DISABLED HEADING 'Status' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 99
COLUMN LAST_ERROR_MSG HEADING 'Error Message' FORMAT A50
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999999
COLUMN CURRENT_START_DATE HEADING 'Current|Start' FORMAT A17
COLUMN LAST_RUN_DATE HEADING 'Last|Run' FORMAT A17
COLUMN NEXT_RUN_DATE HEADING 'Next|Run' FORMAT A17
COLUMN LAST_ERROR_DATE HEADING 'Last|Error' FORMAT A17
column message_delivery_mode HEADING 'Message|Delivery|Mode'
column queue_to_queue HEADING 'Q-2-Q'
In 10.2, propagation jobs use job queue processes to propagate messages. Make sure the
JOB_QUEUE_PROCESSES initialization parameter is set to 2 or higher in each database instance
that runs propagations.
It should be set to a value that is high enough to accommodate all of the jobs that run
simultaneously.
In DBA_JOBS, the WHAT column of a propagation job looks like:
next_date := sys.dbms_aqadm.aq$_propaq(job);
prompt
set recsep each
set recsepchar =
select * from dba_jobs;
prompt
select * from dba_scheduler_jobs;
Messages about propagation are recorded in trace files for the database in which the
propagation job is running. A propagation job runs on the database containing the source
queue in the propagation. These trace file messages can help you to identify and resolve
problems in a Streams environment.
All trace files for background processes are written to the destination directory specified by
the initialization parameter BACKGROUND_DUMP_DEST. The names of trace files are
operating system specific, but each file usually includes the name of the process writing the
file.
Each propagation uses a propagation job that depends on the job queue coordinator process
and a job queue process. The job queue coordinator process is named cjqnn, where nn is the
job queue coordinator process number, and a job queue process is named jnnn, where nnn is
the job queue process number.
For example, on some operating systems, if the system identifier for a database running a
propagation job is hqdb and the job queue coordinator process is 01, then the trace file for the
job queue coordinator process starts with hqdb_cjq01. Similarly, on the same database, if a
job queue process is 001, then the trace file for the job queue process starts with
hqdb_j001. You can check the process name by querying the PROCESS_NAME column in the
DBA_QUEUE_SCHEDULES data dictionary view.
Make sure the rule sets and rules are set up properly according to the requirements.
To determine the number of messages sent by a propagation, as well as the number of
acknowledgements being returned from the target site, query the V$PROPAGATION_SENDER
view at the source site and the V$PROPAGATION_RECEIVER view at the destination site.
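A sketch of those two queries (column names assumed per the documented 10.2 views):

```sql
-- Source site: messages and bytes sent per propagation
SELECT queue_schema, queue_name, dblink, total_msgs, total_bytes
  FROM v$propagation_sender;

-- Destination site: high-water mark and acknowledgement returned
SELECT src_queue_name, src_dbname, high_water_mark, acknowledgement
  FROM v$propagation_receiver;
```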
++ BUFFERED SUBSCRIBERS ++
NOTE: An optimization first available in Oracle Database 11g Release 1 is a capture process
that automatically sends LCRs directly to an apply process. This occurs when there is a single
publisher and consumer defined for the queue that contains the captured changes. This
optimized configuration is called Combined Capture and Apply (CCA). When CCA is in use, LCRs
are transmitted directly from the capture process to the apply process via a database link. In
this mode, the capture does not stage the LCRs in a queue or use queue propagation to deliver
them.
Buffered Subscribers view statistics are zero when CCA optimization is in effect.
At times, the propagation job may become "broken" or fail to start after an error has been
encountered or after a database restart.
The typical solution is to disable the propagation and then re-enable it.
For example, for the propagation named STRMADMIN_PROPAGATE the commands would be:
10.2
exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');
If the above does not fix the problem, stop the propagation specifying the force parameter
(2nd parameter on stop_propagation) as TRUE.
For example, for the propagation named STRMADMIN_PROPAGATE , the commands would be:
exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE',true);
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');
The statistics for the propagation are cleared when the force parameter is set to TRUE.
Common Propagation Errors
The most common propagation errors result from an incorrect network configuration.
The list below shows errors caused by a tnsnames.ora file or database links being configured
incorrectly.
- ORA-12505: TNS:listener does not currently know of SID given in connect descriptor.
- ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.
Can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first
configuring the GLOBAL_NAME for your database.
This is an informative message that indicates flow control has been automatically enabled to
reduce the rate at which events are being enqueued into the staging area.
DBA_QUEUE_SCHEDULES will display this informational message when the automatic flow
control (10g feature of Streams) has been invoked.
This typically occurs when the target site is unable to keep up with the rate of messages
flowing from the source site. Other than checking that the apply process is running normally
on the target site, no action is required by the DBA. Propagation and the capture process will
be resumed automatically when the target site is able to accept more messages.
In some situations, propagation may become disabled (if the number of failures is 16). In
these situations, the propagation can be re-enabled manually.
This error typically indicates that an attempt was made to propagate buffered messages with
the database link pointing to an instance in the destination database which is not the owner
instance of the destination queue. To resolve, use queue to queue propagation for buffered
messages.
- ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation
If the apply process is ABORTED, you can use the following query to identify the error
condition that caused the apply to abort:
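The query was lost from this copy; a sketch against the documented DBA_APPLY view would be:

```sql
SELECT apply_name, status, error_number, error_message,
       TO_CHAR(status_change_time,'HH24:MI:SS MM/DD/YY') status_change
  FROM dba_apply;
```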
If the apply process aborted due to an error in the apply error queue, check the error queue
using the following query:
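The query is missing here; a sketch against the documented DBA_APPLY_ERROR view would be:

```sql
SELECT apply_name, source_database, local_transaction_id,
       error_number, error_message, message_count
  FROM dba_apply_error;
```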
Apply will not restart while there is any error in the apply error queue if the apply parameter
DISABLE_ON_ERROR is TRUE (the default) for the specific apply process. Use the query below to
check this parameter:
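The query is missing from this copy; a sketch against DBA_APPLY_PARAMETERS would be:

```sql
SELECT apply_name, parameter, value, set_by_user
  FROM dba_apply_parameters
 WHERE parameter = 'DISABLE_ON_ERROR'
 ORDER BY apply_name;
```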
break on apply_name
On shutdown of the apply process, an ORA- timeout error occurred. When attempting to
restart the apply process, it aborts with the following message:
In this case, you will need to do a "forced" shutdown of the apply process, then restart the
apply process. For example:
exec dbms_apply_adm.stop_apply('STRMADMIN_SITE1_US_ORACLE',force=>true)
exec dbms_apply_adm.start_apply('STRMADMIN_SITE1_US_ORACLE');
A classic error, most often caused by a data mismatch between the source and destination
tables. The ORA-1403 error occurs when an apply process tries to update an existing row and
the OLD_VALUES in the row LCR do not match the current values at the destination database.
Conditions that produce the error:
- Missing substitute key columns at destination database when there is no primary key
or unique key at source database;
- Data mismatch between the row LCR and the table for which the LCR is applying the
change.
- You can update the current values in the row so that the row LCR can be applied
successfully. If changes to the row at the apply site are also captured by a capture process at
the destination database, then you will need to use DBMS_STREAMS.SET_TAG to avoid re-
capturing these changes, which would otherwise lead to new ORA-1403 errors at other apply
sites. For example:
a. Set a tag in the session that corrects the row. Make sure you set the tag to a value that
prevents the manual change from being replicated. For example, the tag can prevent the
change from being captured by a capture process.
In some environments, you might need to set the tag to a different value.
b. Update the row in the table so that the data matches the old values in the LCR.
c. Re-execute the error or re-execute all errors. To re-execute an error, run the
EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package, and specify the transaction
identifier for the transaction that caused the error. For example:
Or, execute all errors for the apply process by running the EXECUTE_ALL_ERRORS
procedure:
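The examples were dropped from this copy; sketches of both calls (the transaction id and apply name are placeholders):

```sql
-- Re-execute a single error transaction (hypothetical transaction id)
exec dbms_apply_adm.execute_error(local_transaction_id => '<local_transaction_id>');

-- Or re-execute all errors for an apply process
exec dbms_apply_adm.execute_all_errors(apply_name => '<apply_name>');
```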
d. If you are going to make other changes in the current session that you want to
replicate to destination databases, then reset the tag for the session to an appropriate value,
as in the following example:
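The tag examples are missing from this copy; a sketch of steps a and d (the tag value '17' is an example only — use whatever value your rules treat as "do not capture"):

```sql
-- Step a: set a session tag so the corrective DML is not re-captured
exec dbms_streams.set_tag(tag => HEXTORAW('17'));

-- Step d: reset the tag so later changes in this session replicate normally
exec dbms_streams.set_tag(tag => NULL);
```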
This error occurs because the instantiation SCN has not been set for the object. It might be
caused by:
- Importing an object without preparing the object for instantiation prior to export;
- Using original export/import for instantiation and performing the import without
specifying y for the STREAMS_INSTANTIATION import parameter.
To set the instantiation SCN manually, use one of the following DBMS_APPLY_ADM procedures:
SET_TABLE_INSTANTIATION_SCN
SET_SCHEMA_INSTANTIATION_SCN
SET_GLOBAL_INSTANTIATION_SCN
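For example, a sketch of a table-level call (object name, global name, and SCN are all hypothetical):

```sql
BEGIN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.employees',     -- hypothetical table
    source_database_name => 'SRC.EXAMPLE.COM',  -- hypothetical global name
    instantiation_scn    => 1234567);           -- hypothetical SCN
END;
/
```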
- When applying DDL changes, and you have not set the instantiation SCN at the SCHEMA or
GLOBAL level.
Refer to the following notes for additional instructions:
Note:223000.1 Streams Apply Process Fails with ORA-26687 or 'Missing Streams multi-version
data dictionary'
- Missing/disabled primary key at source database. When not using primary keys, make
sure you have set an alternative key at destination database using SET_KEY_COLUMNS
procedure in the DBMS_APPLY_ADM package.
The user designated as the apply user does not have the necessary privileges to perform SQL
operations on the replicated objects. The apply user privileges must be granted by an explicit
grant of each privilege. Granting these privileges through a role is not sufficient for the
Streams apply user.
Additionally if the apply user does not have explicit EXECUTE privilege on an apply handler
procedure or custom rule-based transformation function, then an ORA-06550 error might
result when the apply user tries to run the procedure or function. Typically, this error causes
the apply process to abort without adding errors to the DBA_APPLY_ERROR view. However,
the trace file for the apply coordinator reports the error. Specifically, errors similar to the
following appear in the trace file:
In this example, the apply user dssdbo does not have execute privilege on the
to_award_fct_ruledml function in the strmadmin schema. To correct the problem, grant the
required EXECUTE privilege to the apply user.
Although you see this message on the Streams apply destination site, it is caused by the
Streams data dictionary information for the specified object not being available on the source
database at the time the Streams capture was created.
Please refer to the Streams Capture Troubleshooting section of this article and the Metalink
note below for additional instructions:
Note 212044.1 Resolving the MISSING Streams Multi-version Data Dictionary Error
For additional details on the above errors, please refer to the section "Troubleshooting
Specific Apply Errors" in the online Oracle documentation.
1. First of all, check whether messages are reaching the apply destination queue by running
the following query against the buffered queue multiple times:
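The query itself is missing from this copy; a sketch against V$BUFFERED_QUEUES, with aliases matching the wording below, would be:

```sql
SELECT queue_schema, queue_name,
       cnum_msgs "Cumulative Number of Messages in Queue",
       num_msgs  "Current Number of Outstanding Messages in Queue"
  FROM v$buffered_queues;
```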
Look for changing values in "Cumulative Number of Messages in Queue" and "Current
Number of Outstanding Messages in Queue". If messages are not reaching the apply
buffered queue, this might indicate a problem with the capture and/or propagation
processes; refer to the related sections of this article for further troubleshooting
information.
2. If messages are reaching the apply queue, check whether messages are being dequeued by
the apply reader process using the query below:
SELECT ap.APPLY_NAME,
DECODE(ap.APPLY_CAPTURED,
'YES','Captured LCRS',
'NO','User-Enqueued','UNKNOWN') APPLY_CAPTURED,
SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
r.STATE,
r.TOTAL_MESSAGES_DEQUEUED,
r.TOTAL_MESSAGES_SPILLED,
r.SGA_USED,
r.OLDEST_SCN_NUM,
r.OLDEST_XIDUSN||'.'||r.OLDEST_XIDSLT||'.'||r.OLDEST_XIDSQN OLDEST_TRANSACTION_ID
FROM V$STREAMS_APPLY_READER r, V$SESSION s, DBA_APPLY ap
WHERE r.SID = s.SID
AND r.SERIAL# = s.SERIAL#
AND r.APPLY_NAME = ap.APPLY_NAME;
Ideally, "Total Messages Dequeued" should keep increasing; if spilling is occurring, "Total
Messages Spilled" will increase instead.
3. We also might get into the scenario where messages are reaching Apply queue but dequeue
is not happening, therefore the problem might be that the apply process has fallen behind.
You can check apply process latency by querying the V$STREAMS_APPLY_COORDINATOR
dynamic performance view. If apply process latency is high, then you might be able to improve
performance by adjusting the setting of the parallelism apply process parameter.
Run the following queries to display the capture to apply latency using the
V$STREAMS_APPLY_COORDINATOR view for a message for each apply process:
SELECT APPLY_NAME,
(HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
TO_CHAR(HWM_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY')
"Message Creation",
TO_CHAR(HWM_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
HWM_MESSAGE_NUMBER "Applied Message Number"
FROM V$STREAMS_APPLY_COORDINATOR;
SELECT APPLY_NAME,
(APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY')
"Message Creation",
TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
APPLIED_MESSAGE_NUMBER
FROM DBA_APPLY_PROGRESS;
SELECT r.APPLY_NAME,
SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
r.STATE,
r.TOTAL_ASSIGNED,
r.TOTAL_MESSAGES_APPLIED
FROM V$STREAMS_APPLY_SERVER r, V$SESSION s
WHERE r.SID = s.SID
AND r.SERIAL# = s.SERIAL#;
RECORD LOW-WATERMARK
ADD PARTITION
DROP PARTITION
EXECUTE TRANSACTION
WAIT COMMIT
WAIT DEPENDENCY
TRANSACTION CLEANUP
conn / as sysdba
Note.428441.1 "Warning Aq_tm_processes Is Set To 0" Message in Alert Log After Upgrade
to 10.2.0.3 or 10.2.0.4
break on apply_name
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => '<apply_name>',
parameter  => '<parameter>',
value      => '<value>');
END;
/
An apply server encounters contention when the apply server must wait for a resource
that is being used by another session. Contention can result from logical dependencies. For
example, when an apply server tries to apply a change to a row that a user has locked, then
the apply server must wait for the user. Contention can also result from physical
dependencies. For example, interested transaction list (ITL) contention results when two
transactions that are being applied, which might not be logically dependent, are trying to lock
the same block on disk. In this case, one apply server locks rows in the block, and the other
apply server must wait for access to the block, even though the second apply server is trying to
lock different rows. See "Is the Apply Process Waiting for a Dependent Transaction?" for
detailed information about ITL contention
The following four wait states are possible for an apply server:
- Not waiting: The apply server is not encountering contention and is not waiting. No action is
necessary in this case.
An example of an event that is not related to another session is a log file sync event,
where redo data must be flushed because of a commit or rollback. In these cases, nothing is
written to the log initially because such waits are common and are usually transient. If the
apply server is waiting for the same event after a certain interval of time, then the apply
server writes a message to the alert log and apply process trace file. For example, an apply
server a001 might write a message similar to the following:
This output is written to the alert log at intervals until the problem is rectified.
The apply server writes a message to the alert log and apply process trace file
immediately. For example, an apply server a001 might write a message similar to the
following:
A001: warning -- apply server 1, sid 10 waiting on user sid 36 for event:
This output is written to the alert log at intervals until the problem is rectified.
This state can be caused by interested transaction list (ITL) contention, but it can also
be caused by more serious issues, such as an apply handler that obtains conflicting locks. In
this case, the apply server that is blocked by another apply server prints only once to the alert
log and the trace file for the apply process, and the blocked apply server issues a rollback to
the blocking apply server. When the blocking apply server rolls back, another message
indicating that the apply server has been rolled back is printed to the log files, and the rolled
back transaction is reassigned by the coordinator process for the apply process.
For example, if apply server 1 of apply process a001 is blocked by apply server 2 of the
same apply process (a001), then the apply process writes the following messages to the log
files:
SELECT ap.APPLY_NAME,
SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS,
c.STATE,
c.TOTAL_RECEIVED RECEIVED,
c.TOTAL_ASSIGNED ASSIGNED,
c.TOTAL_APPLIED APPLIED,
c.TOTAL_ERRORS ERRORS,
c.total_ignored,
c.total_rollbacks
FROM V$STREAMS_APPLY_COORDINATOR c, V$SESSION s, DBA_APPLY ap
WHERE c.SID = s.SID
AND c.SERIAL# = s.SERIAL#
AND c.APPLY_NAME = ap.APPLY_NAME;
8. Check for Apply Process waiting for dependent transactions (applies only when having
Apply PARALLELISM parameter greater than 1). Use same query as above:
SELECT ap.APPLY_NAME,
SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS,
c.STATE,
c.TOTAL_RECEIVED RECEIVED,
c.TOTAL_ASSIGNED ASSIGNED,
c.TOTAL_APPLIED APPLIED,
c.TOTAL_ERRORS ERRORS,
c.total_ignored,
c.total_rollbacks
FROM V$STREAMS_APPLY_COORDINATOR c, V$SESSION s, DBA_APPLY ap
WHERE c.SID = s.SID
AND c.SERIAL# = s.SERIAL#
AND c.APPLY_NAME = ap.APPLY_NAME;
To avoid the problem in the future, perform one of the following actions:
- Increase the number of ITLs available. You can do so by changing the INITRANS setting
for the table using the ALTER TABLE statement.
- Set the parallelism apply process parameter to 1 for the apply process.
9. Check for poor Apply performance for certain transactions:
The following query displays information about the transactions being applied by each
apply server:
SELECT APPLY_NAME, SERVER_ID, STATE,
APPLIED_MESSAGE_NUMBER, MESSAGE_SEQUENCE
FROM V$STREAMS_APPLY_SERVER
ORDER BY APPLY_NAME, SERVER_ID;
If you run this query repeatedly, then over time the apply server state, applied
message number, and message sequence number should continue to change for each apply
server as it applies transactions. If these values do not change for one or more apply servers,
then the apply server might not be performing well. In this case, you should make sure that,
for each table to which the apply process applies changes, every key column has an index.
Use following queries to determine the object in interest for the poor Apply processing
transaction:
SELECT t.SQL_TEXT
FROM V$STREAMS_APPLY_SERVER a, V$SESSION s, V$SQLTEXT t
WHERE a.APPLY_NAME = '<APPLY_NAME>'
AND a.SERVER_ID = <SERVER_ID>
AND a.SID = s.SID
AND a.SERIAL# = s.SERIAL#
AND s.SQL_ADDRESS = t.ADDRESS
AND s.SQL_HASH_VALUE = t.HASH_VALUE
ORDER BY t.PIECE;
p.s. Fill out "a.APPLY_NAME" and "a.SERVER_ID" from the WHERE clause
appropriately with information from previous query.
This query returns the operation being performed currently by the specified apply
server. The query also returns the owner and name of the table on which the operation is
being performed and the cost of the operation. If the results show FULL for the COST column,
then the operation is causing full table scans.
- Resolve this by creating an index on each key column in the table.
select object_owner, object_name, apply_name,
decode(error_handler,'Y','ERROR','N','DML') handler_type,
user_procedure,
decode(assemble_lobs,'Y','Yes','N','No','UNKNOWN') lob_assemble,
apply_database_link
from dba_apply_dml_handlers
order by object_owner,object_name,apply_name;
p.s. (i) If "Apply Process Name" is NULL in the result of the above query, it means the
handler is a general handler that runs for all of the local apply processes.
(ii) "Handler Type" indicates whether the apply handler runs for every "DML"
operation (DML handler) or only when an "ERROR" occurs (error handler).
DML and error handlers are customized according to application needs and the data model,
so if an apply process is not behaving as expected, check the handler PL/SQL procedures
used by the apply process and correct any flaws. You might need to modify a handler
procedure or remove it to correct an apply problem.
2. Common errors when DML / Error handlers are implemented:
If you use schema name transformation in any way, you might get this error if the source
database schema does not exist at the destination database. Say you have tables in schema
'REP1' to be replicated to another database where the schema name is 'REP2'; you will get the
ORA-1435 error if the schema 'REP1' does not exist at the destination database. The schema
name and its object definitions need to exist at the destination site, but no rows or data are
required in this schema. The workaround for this problem is to create the structure definition
of the original schema and objects.
This can generally by done by a schema level export from the source site and a schema level
import with the ROWS=NO into the target site.
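A structure-only copy can be sketched with the classic export/import utilities; connect strings and file names are placeholders:

```
exp system/<password> OWNER=REP1 ROWS=N FILE=rep1_ddl.dmp
imp system/<password> FROMUSER=REP1 TOUSER=REP2 ROWS=N FILE=rep1_ddl.dmp
```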
One of the most common reasons for receiving this error in a DML HANDLER or
TRANSFORMATION is privileges. Typically, this error causes the apply process to ABORT
with no errors in the DBA_APPLY_ERROR view. However, the trace file for the apply
coordinator will report the error. If the specified apply user does not have an explicit privilege
to execute the DML handler procedure or the transformation function, you will receive errors
similar to the following in the apply trace files:
PLS-00201: identifier 'STRMADMIN.USER_FUNCTION_NAME' must be declared
In this example, the apply user does not have execute privilege on the
"USER_FUNCTION_NAME" function in the STRMADMIN schema.
p.s. Check "APPLY_USER" column from DBA_APPLY view to see what schema is being used
to Apply the changes.
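A minimal check-and-fix sketch, using the function name from the example above; the grantee is a placeholder to be replaced with the actual APPLY_USER:

```sql
-- Which user applies the changes?
SELECT apply_name, apply_user FROM dba_apply;

-- Grant execute explicitly (a grant through a role is not sufficient here).
GRANT EXECUTE ON strmadmin.user_function_name TO apply_user_name;  -- placeholder grantee
```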
When calling SYS.LCR$ member functions, this error may be raised if the values of the
parameters do not match the LCR: for example, adding an old column value to an insert LCR, or
setting the value of a LOB column to a number. This error can occur if an incorrect value is passed
for a Streams parameter, if an INSERT LCR contains 'OLD' values, or if a DELETE LCR contains
'NEW' values. Verify that the correct parameter type ('OLD','NEW') is specified for the LCR
type (INSERT/DELETE/UPDATE).
This error is raised by SYS.LCR$* member functions when the value of the column_name
parameter does not match the name of any of the columns in the LCR. Check the column names
in the LCR. This error is encountered if:
- You attempt to delete a column from an LCR and the LCR does not have the column
(typically occurs on UPDATEs);
- You attempt to rename a column that does not exist in the LCR.
This error can occur when a 'NULL' value is passed to an LCR method instead of an ANYDATA.
Wrong:
new_lcr.add_column('OLD','LANGUAGE',NULL);
Correct:
new_lcr.add_column('OLD','LANGUAGE',sys.AnyData.ConvertVarchar2(NULL));
"Source type" not equal to "target type". Confirm that the conversion utility data type
matches the column data type in the handler / transformation. For example, if the column is
specified as VARCHAR2, then use sys.anydata.convertvarchar2 to convert the data from type
ANY to VARCHAR2. Confirm that the datatype of the column name matches between the
LCR and the target table.
Confirm that all of the columns in the LCR are defined at the destination site. If the destination
table does not have all of the columns specified in the LCR, eliminate from the LCR any
columns that should not be applied at the destination table. Check that column name casing
matches the database; generally, column names are upper case.
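Inside a DML handler or transformation, an unwanted column can be dropped from the row LCR before it is executed; a PL/SQL fragment sketch, where 'EXTRA_COL' is a hypothetical column present in the LCR but not in the destination table:

```sql
-- lcr is a SYS.LCR$_ROW_RECORD passed to the handler.
lcr.DELETE_COLUMN('EXTRA_COL', '*');   -- '*' removes both old and new values
lcr.EXECUTE(TRUE);                     -- apply the modified LCR
```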
3. For DML Handler samples, please refer to the following Metalink Notes:
Note.265867.1 Example Streams Apply DML Handler Showing the Adding of Columns to the
Apply LCR
Note.302018.1 Example Streams Apply DML Handler Showing LCR Column Rename
Note.265481.1 Example Streams Apply DML Handler Showing Rows and Columns Filter from
the Apply Process
Note.234094.1 Usage and Restrictions of Streams Apply Handlers
4. For Error Handler samples, please refer to the following Metalink Notes:
<to be completed>
<to be completed>
APPLY IN HETEROGENEOUS ENVIRONMENTS
1. Configuration checking:
a. If you use substitute key columns for any of the tables at the non-Oracle database,
then make sure to specify the database link to the non-Oracle database when you run the
SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package.
b. If you use a DML handler to process row LCRs for any of the tables at the non-Oracle
database, then specify the database link to the non-Oracle database when you run the
SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package.
c. If you want to use a message handler to process user-enqueued messages for a non-
Oracle database, then, when you run the CREATE_APPLY procedure in the DBMS_APPLY_ADM
package, specify the database link to the non-Oracle database using the apply_database_link
parameter, and specify the message handler procedure using the message_handler parameter.
d. You must set the parallelism apply process parameter to 1, the default setting, when
an apply process is applying changes to a non-Oracle database. Currently, parallel apply to
non-Oracle databases is not supported. However, you can use multiple apply processes to
apply changes to a non-Oracle database.
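Points (a) and (d) above can be sketched as follows; the object, column, database link, and apply names are placeholders:

```sql
BEGIN
  -- (a) substitute key columns for a table applied at the non-Oracle site
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name         => 'hr.employees',   -- placeholder table
    column_list         => 'employee_id',    -- placeholder key column
    apply_database_link => 'sybase_link');   -- placeholder db link

  -- (d) parallel apply to non-Oracle databases is not supported
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'het_apply',               -- placeholder apply name
    parameter  => 'parallelism',
    value      => '1');
END;
/
```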
2. You can refer to most of the Apply sections in this article to troubleshoot a
heterogeneous Streams Apply process. All the steps apply, except for Error and Conflict
Handlers, which currently are not supported. If an apply error occurs, then the transaction
containing the LCR that caused the error is moved into the error queue in the Oracle database.
Also please refer to the following Metalink notes for additional information:
Note.377857.1 Apply process aborts with ORA-28584 setting up streams replication to MySQL
Note.436112.1 'ORA-28550 : Pass-Through SQL: Cursor Not Found' Error When Using Oracle
Streams Heterogenous Apply to Sybase
Note.466882.1 Streams Apply Process Aborts On Decimal Values Using Tg4sybase - Error ORA-
28500
Static Views
ALL_APPLY
ALL_APPLY_CONFLICT_COLUMNS
ALL_APPLY_DML_HANDLERS
ALL_APPLY_ENQUEUE
ALL_APPLY_ERROR
ALL_APPLY_EXECUTE
ALL_APPLY_KEY_COLUMNS
ALL_APPLY_PARAMETERS
ALL_APPLY_PROGRESS
ALL_APPLY_TABLE_COLUMNS
DBA_APPLY_TABLE_COLUMNS
DBA_APPLY_PROGRESS
DBA_APPLY_PARAMETERS
DBA_APPLY_KEY_COLUMNS
DBA_APPLY_EXECUTE
DBA_APPLY_ERROR
DBA_APPLY_ENQUEUE
DBA_APPLY_CONFLICT_COLUMNS
DBA_APPLY_DML_HANDLERS
DBA_APPLY_INSTANTIATED_GLOBAL
DBA_APPLY_INSTANTIATED_OBJECTS
DBA_APPLY_INSTANTIATED_SCHEMAS
DBA_APPLY_OBJECT_DEPENDENCIES
DBA_APPLY_SPILL_TXN
DBA_APPLY_VALUE_DEPENDENCIES
DBA_HIST_STREAMS_APPLY_SUM
Dynamic Views
V$STREAMS_APPLY_COORDINATOR
V$STREAMS_APPLY_READER
V$STREAMS_APPLY_SERVER
GV$STREAMS_APPLY_COORDINATOR
GV$STREAMS_APPLY_READER
GV$STREAMS_APPLY_SERVER
Spelling Counts!
Rules can be thought of as a SQL WHERE clause against which each message is evaluated. If
the message does not meet the rule condition specification, the rule evaluation return is set to
FALSE and the message is excluded from further handling by the particular streams process.
For example, if you configure Streams to capture changes to the 'SOCTT.EMP' table, changes
made to the actual table 'SCOTT.EMP' will not be captured. Each expression included in the
rule_condition must evaluate to TRUE in order for the rule to evaluate to TRUE.
When using SCHEMA or GLOBAL rules, be sure to modify the rules so that no objects with
unsupported data types are included for Streams.
Avoid eliminating tables by pattern (e.g. :dml.get_object_name like 'DR%' ) or using a NOT
operator as this will force a full rule evaluation for the rule. It is frequently much faster to
explicitly name the desired table, even if it results in multiple rules.
If you are configuring a propagation that takes ALL changes from the source queue to the
destination queue (i.e. no selectivity requirements), you can remove the rule set from the
propagation definition. This eliminates the need to do ANY rule evaluation and
results in higher propagation throughput.
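Removing the rule set can be sketched with ALTER_PROPAGATION; the propagation name is a placeholder:

```sql
BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name => 'my_propagation',   -- placeholder
    remove_rule_set  => TRUE);              -- propagate everything, no rule evaluation
END;
/
```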
The DBA_STREAMS_TABLE_RULES view shows the original configuration of the rule and
ruleset. Manual modifications can be performed using the DBMS_RULE_ADM package. Be
sure to use the DBA_RULE_SET_RULES view to obtain the full set of rules participating in a
ruleset. To get the rule condition of each rule, use the DBA_RULES view.
select rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET,
       rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
       r.rule_condition CONDITION
  from dba_rule_set_rules rsr, dba_rules r
 where rsr.rule_name = r.rule_name
   and rsr.rule_owner = r.rule_owner
 order by rsr.rule_set_owner, rsr.rule_set_name;
If that query returns any such rules, then the rules returned might be causing the capture
process to discard changes to the table. If that query returns no rules, then make sure there
are one or more table rules in the positive rule set for the capture process that evaluate to
TRUE for the table. "Displaying the Rules in the Positive Rule Set for a Streams Client" contains
an example of a query that shows rules in a positive rule set.
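One such query against DBA_STREAMS_RULES might look like the sketch below; the capture name is a placeholder:

```sql
-- Sketch: rules in the positive rule set of one capture process.
SELECT rule_owner, rule_name, streams_rule_type, schema_name, object_name
  FROM dba_streams_rules
 WHERE streams_type  = 'CAPTURE'
   AND streams_name  = 'MY_CAPTURE'   -- placeholder capture name
   AND rule_set_type = 'POSITIVE';
```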
It is possible that the Streams capture process, propagation, apply process, or messaging client
is not behaving as expected because one or more rules should be altered or removed from a
rule set.
If you have the correct rules, and the relevant messages are still filtered out by a Streams
capture process, propagation, or apply process, then check your trace files and alert log for a
warning about a missing "multi-version data dictionary", which is a Streams data dictionary. If
you find such messages, and you are using custom capture process rules or reusing existing
capture process rules for a new destination database, then make sure you run the appropriate
procedure to prepare for instantiation:
PREPARE_TABLE_INSTANTIATION
PREPARE_SCHEMA_INSTANTIATION
PREPARE_GLOBAL_INSTANTIATION
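For example, a schema-level call run at the source database; the schema name is a placeholder:

```sql
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name => 'hr');   -- placeholder schema
END;
/
```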
If a Streams capture process, propagation, apply process, or messaging client is not behaving
as expected, then check the custom rule-based transformation functions specified for the rules
used by the Streams client and correct any flaws. You can find the names of these functions by
querying the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view. You might need to
modify a transformation function or remove a custom rule-based transformation to correct
the problem. Also, make sure the name of the function is spelled correctly when you specify
the transformation for a rule.
In some cases, incorrectly transformed LCRs might have been moved to the error queue by an
apply process. When this occurs, you should examine the transaction in the error queue to
analyze the feasibility of re-executing the transaction successfully. If an abnormality is found in
the transaction, then you might be able to configure a DML handler to correct the problem.
The DML handler will run when you re-execute the error transaction. When a DML handler is
used to correct a problem in an error transaction, the apply process that uses the DML handler
should be stopped to prevent the DML handler from acting on LCRs that are not involved with
the error transaction. After successful re-execution, if the DML handler is no longer needed,
then remove it. Also, correct the rule-based transformation to avoid future errors.
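The stop / re-execute / restart sequence described above can be sketched as follows; the apply name and transaction ID are placeholders:

```sql
exec dbms_apply_adm.stop_apply('my_apply');       -- placeholder apply name
-- create or adjust the DML handler here, then re-execute the failed transaction:
exec dbms_apply_adm.execute_error(local_transaction_id => '5.4.312');  -- placeholder id
-- remove the handler if it is no longer needed, then:
exec dbms_apply_adm.start_apply('my_apply');
```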
The rule sets used by all Streams clients, including capture processes and propagations,
determine the behavior of these Streams clients. Therefore, make sure the rule sets for any
capture processes or propagations on which an apply process depends contain the correct
rules. If the rules for these Streams clients are not configured properly, then the apply process
queue might never receive the appropriate messages. Also, a message traveling through a
stream is the composition of all of the transformations done along the path. For example, if a
capture process uses subset rules and performs row migration during capture of a message,
and a propagation uses a rule-based transformation on the message to change the table name,
then, when the message reaches an apply process, the apply process rules must account for
these transformations.
V$RULE
GV$RULE
V$RULE_SET
GV$RULE_SET
V$RULE_SET_AGGREGATE_STATS
GV$RULE_SET_AGGREGATE_STATS
DBA_STREAMS_GLOBAL_RULES
DBA_STREAMS_MESSAGE_RULES
DBA_STREAMS_RULES
DBA_STREAMS_SCHEMA_RULES
DBA_STREAMS_TABLE_RULES
DBA_RULE_SET_RULES
DBA_RULE_SETS
DBA_RULES
DBA_HIST_RULE_SET
Streams Troubleshooting Guide (Doc ID 883372.1)
Type: TROUBLESHOOTING
Status: PUBLISHED
Last Major Update: 23-Sep-2013
Last Update: 23-Sep-2013
SCOPE & APPLICATION
To be used by DBAs as a reference when troubleshooting the Streams associated processes.
dbms_capture_adm.start_capture(<CAPTURE_NAME>)
ORA-1347 Supplemental log data no longer found
This error indicates that minimum supplemental logging is not enabled for the instance. This
occurs most commonly on 9iR2 RAC instances. When configuring supplemental logging for RAC
in 9iR2, it is necessary to issue the ALTER DATABASE command at each instance in the cluster
BEFORE creating the capture process. In 10g, supplemental logging can be initiated from a
single instance, so it is no longer necessary to issue the ALTER DATABASE ADD SUPPLEMENTAL
LOG DATA command at multiple instances. After issuing ALTER DATABASE ADD SUPPLEMENTAL
LOG DATA, be sure to issue an ALTER SYSTEM ARCHIVE LOG CURRENT or ALTER SYSTEM
SWITCH LOGFILE.
If this is the cause, you may need to disable the plan or set the parallelism value high enough
for the system_plan.
SELECT SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       c.CAPTURE_NAME,
       c.STARTUP_TIME,
       c.SID,
       c.SERIAL#,
       c.STATE
  FROM gV$STREAMS_CAPTURE c, gV$SESSION s
 WHERE c.SID = s.SID AND c.SERIAL# = s.SERIAL#;
SELECT SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       c.CAPTURE_NAME,
       c.STARTUP_TIME,
       c.SID,
       c.SERIAL#,
       c.STATE,
       c.STATE_CHANGED_TIME,
       c.TOTAL_MESSAGES_CAPTURED,
       c.TOTAL_MESSAGES_ENQUEUED,
       c.TOTAL_MESSAGES_CREATED
  FROM gV$STREAMS_CAPTURE c, gV$SESSION s
 WHERE c.SID = s.SID AND c.SERIAL# = s.SERIAL#;
select streams_name NAME,
       schema_name||'.'||object_name OBJECT,
       rule_type || ' TABLE RULE' TYPE,
       rule_owner||'.'||rule_name RULE,
       dml_condition, subsetting_operation
  from dba_streams_rules
 where streams_type = 'CAPTURE'
   and (dml_condition is not null or subsetting_operation is not null);
prompt
prompt ++ CAPTURE RULES BY RULE SET ++
col capture_name format a25 wrap heading 'Capture|Name'
col RULE_SET format a25 wrap heading 'Rule Set|Name'
col RULE_NAME format a25 wrap heading 'Rule|Name'
col condition format a50 wrap heading 'Rule|Condition'
set long 4000
REM break on rule_set
select c.capture_name,
       rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET,
       rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
       r.rule_condition CONDITION
  from dba_rule_set_rules rsr, dba_rules r, dba_capture c
 where rsr.rule_name = r.rule_name
   and rsr.rule_owner = r.rule_owner
   and rsr.rule_set_owner = c.rule_set_owner
   and rsr.rule_set_name = c.rule_set_name
   and rsr.rule_set_name in (select rule_set_name from dba_capture)
 order by rsr.rule_set_owner, rsr.rule_set_name;
set serveroutput on
declare
  overlap_rules boolean := FALSE;
  verbose boolean := TRUE;
  cursor overlapping_rules is
    select a.streams_name sname, a.streams_type stype,
           a.rule_set_owner rule_set_owner, a.rule_set_name rule_set_name,
           a.rule_owner owner1, a.rule_name name1, a.streams_rule_type type1,
           b.rule_owner owner2, b.rule_name name2, b.streams_rule_type type2
      from dba_streams_rules a, dba_streams_rules b
     where a.rule_set_owner = b.rule_set_owner
       and a.rule_set_name = b.rule_set_name
       and a.streams_name = b.streams_name
       and a.streams_type = b.streams_type
       and a.rule_type = b.rule_type
       and (a.subsetting_operation is null or b.subsetting_operation is null)
       and (a.rule_owner != b.rule_owner or a.rule_name != b.rule_name)
       and ((a.streams_rule_type = 'GLOBAL' and b.streams_rule_type in ('SCHEMA', 'TABLE')
             and a.schema_name = b.schema_name)
            or (a.streams_rule_type = 'SCHEMA' and b.streams_rule_type = 'TABLE'
                and a.schema_name = b.schema_name)
            or (a.streams_rule_type = 'TABLE' and b.streams_rule_type = 'TABLE'
                and a.schema_name = b.schema_name and a.object_name = b.object_name
                and a.rule_name < b.rule_name)
            or (a.streams_rule_type = 'SCHEMA' and b.streams_rule_type = 'SCHEMA'
                and a.schema_name = b.schema_name and a.rule_name < b.rule_name)
            or (a.streams_rule_type = 'GLOBAL' and b.streams_rule_type = 'GLOBAL'
                and a.rule_name < b.rule_name))
     order by a.rule_name;
begin
  for rec in overlapping_rules loop
    overlap_rules := TRUE;
    dbms_output.put_line('+ WARNING: The rule ''' || rec.owner1 || '''.''' || rec.name1 ||
                         ''' and ''' || rec.owner2 || '''.''' || rec.name2 ||
                         ''' from rule set ''' || rec.rule_set_owner || '''.''' ||
                         rec.rule_set_name || ''' overlap.');
  end loop;
  if overlap_rules and verbose then
    dbms_output.put_line('+Overlapping rules are a problem especially when rule-based transformations exist.');
    dbms_output.put_line('+Streams makes no guarantees of which rule in a rule set will evaluate to TRUE,');
    dbms_output.put_line('+thus overlapping rules will cause inconsistent behavior, and should be avoided.');
  end if;
  dbms_output.put_line('+');
end;
/
declare
  mycheck number;
begin
  select 1 into mycheck from v$parameter
   where name = 'aq_tm_processes' and value = '0'
     and (ismodified <> 'FALSE' or isdefault = 'FALSE');
  if mycheck = 1 then
    dbms_output.put_line('+ERROR: The parameter ''aq_tm_processes'' should not be explicitly set to 0!');
    dbms_output.put_line('+Queue monitoring is disabled for all queues.');
    dbms_output.put_line('+To resolve this problem, set the value to 1 using: ALTER SYSTEM SET AQ_TM_PROCESSES=1;');
  end if;
exception when no_data_found then null;
end;
/
prompt
prompt ++ Current Long Running Transactions ++
prompt Current transactions open for more than 20 minutes
prompt
col runlength HEAD 'Txn Open|Minutes' format 9999.99
col sid HEAD 'Session' format a13
col xid HEAD 'Transaction|ID' format a18
col terminal HEAD 'Terminal' format a10
col program HEAD 'Program' format a27 wrap
select t.inst_id,
       sid||','||serial# sid,
       xidusn||'.'||xidslot||'.'||xidsqn xid,
       (sysdate - start_date) * 1440 runlength,
       terminal, program
  from gv$transaction t, gv$session s
 where t.addr = s.taddr
   and (sysdate - start_date) * 1440 > 20;
Check :
prompt
COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A17 WRAP
COLUMN START_DATE HEADING 'Start Date'
COLUMN PROPAGATION_WINDOW HEADING 'Duration|in Seconds' FORMAT 9999999999999999
COLUMN NEXT_TIME HEADING 'Next|Time' FORMAT A8
COLUMN LATENCY HEADING 'Latency|in Seconds' FORMAT 9999999999999999
COLUMN SCHEDULE_DISABLED HEADING 'Status' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 99
COLUMN LAST_ERROR_MSG HEADING 'Error Message' FORMAT A50
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999999
COLUMN CURRENT_START_DATE HEADING 'Current|Start' FORMAT A17
COLUMN LAST_RUN_DATE HEADING 'Last|Run' FORMAT A17
COLUMN NEXT_RUN_DATE HEADING 'Next|Run' FORMAT A17
COLUMN LAST_ERROR_DATE HEADING 'Last|Error' FORMAT A17
COLUMN MESSAGE_DELIVERY_MODE HEADING 'Message|Delivery|Mode'
COLUMN QUEUE_TO_QUEUE HEADING 'Q-2-Q'
SELECT p.propagation_name,
       TO_CHAR(s.START_DATE, 'HH24:MI:SS MM/DD/YY') START_DATE,
       s.PROPAGATION_WINDOW,
       s.NEXT_TIME,
       s.LATENCY,
       DECODE(s.SCHEDULE_DISABLED, 'Y', 'Disabled', 'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME, s.TOTAL_BYTES,
       s.FAILURES,
       s.MESSAGE_DELIVERY_MODE,
       p.QUEUE_TO_QUEUE,
       s.LAST_ERROR_MSG
  FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
 WHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(s.destination, '[^@]+', 1, 2), s.destination)
   AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
   AND s.QNAME = p.SOURCE_QUEUE_NAME
 ORDER BY message_delivery_mode, propagation_name;
next_date := sys.dbms_aqadm.aq$_propaq(job);
prompt
set recsep each
set recsepchar =
select * from dba_jobs;
In 11.1, AQ propagation uses Oracle Scheduler, enabling AQ propagation to take advantage of
Scheduler features. Job queue processes parameters need not be set in Oracle Database 11g
for propagation to work. Oracle Scheduler automatically starts the required number of slaves
for the existing propagation schedules.
prompt
select * from dba_scheduler_jobs;
SELECT PROPAGATION_NAME,
       RULE_SET_OWNER||'.'||RULE_SET_NAME POSITIVE,
       NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME NEGATIVE
  FROM DBA_PROPAGATION;
select streams_name NAME,
       schema_name||'.'||object_name OBJECT,
       rule_set_type,
       source_database,
       streams_rule_type ||' '|| rule_type TYPE,
       include_tagged_lcr,
       rule_owner||'.'||rule_name RULE
  from dba_streams_rules
 where streams_type = 'PROPAGATION'
 order by name, object, source_database, rule_set_type, rule;
++ STREAMS TABLE SUBSETTING RULES ++
select streams_name NAME,
       schema_name||'.'||object_name OBJECT,
       rule_type || ' TABLE RULE' TYPE,
       rule_owner||'.'||rule_name RULE,
       dml_condition, subsetting_operation
  from dba_streams_rules
 where streams_type = 'PROPAGATION'
   and (dml_condition is not null or subsetting_operation is not null);
select c.propagation_name,
       rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET,
       rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
       r.rule_condition CONDITION
  from dba_rule_set_rules rsr, dba_rules r, dba_propagation c
 where rsr.rule_name = r.rule_name
   and rsr.rule_owner = r.rule_owner
   and rsr.rule_set_owner = c.negative_rule_set_owner
   and rsr.rule_set_name = c.negative_rule_set_name
   and rsr.rule_set_name in
       (select negative_rule_set_name from dba_propagation)
 order by rsr.rule_set_owner, rsr.rule_set_name;
select rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET, r.*
  from dba_rule_set_rules rsr, dba_streams_transformations r
 where r.rule_name = rsr.rule_name
   and r.rule_owner = rsr.rule_owner
   and rule_set_name in (select rule_set_name from dba_propagation)
 order by rsr.rule_set_owner, rsr.rule_set_name,
          r.rule_owner, r.rule_name, transform_type desc,
          step_number, precedence;
SELECT p.propagation_name, q.message_delivery_mode,
       DECODE(p.STATUS, 'DISABLED', 'Disabled', 'ENABLED', 'Enabled') SCHEDULE_STATUS,
       q.instance,
       q.total_number TOTAL_NUMBER, q.TOTAL_BYTES,
       q.elapsed_dequeue_time/100 elapsed_dequeue_time,
       q.elapsed_pickle_time/100 elapsed_pickle_time,
       q.total_time/100 total_time
  FROM DBA_PROPAGATION p, dba_queue_schedules q
 WHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2), q.destination)
   AND q.SCHEMA = p.SOURCE_QUEUE_OWNER
   AND q.QNAME = p.SOURCE_QUEUE_NAME
 ORDER BY q.message_delivery_mode, p.propagation_name;
++ BUFFERED SUBSCRIBERS ++
10.2
exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE', true);
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');
The statistics for the propagation are cleared when the
force parameter is set to TRUE.
select apply_name,
       source_database, source_commit_scn, message_number,
       message_count,
       local_transaction_id, error_message,
       error_creation_time, source_transaction_id
  from dba_apply_error
 order by apply_name, source_commit_scn;
break on apply_name
exec dbms_apply_adm.stop_apply('STRMADMIN_SITE1_US_ORACLE', force=>true)
exec dbms_apply_adm.start_apply('STRMADMIN_SITE1_US_ORACLE');
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');
SET_TABLE_INSTANTIATION_SCN
SET_SCHEMA_INSTANTIATION_SCN
SET_GLOBAL_INSTANTIATION_SCN
- When applying DDL changes, and you have not set the instantiation SCN at the SCHEMA or
GLOBAL level.
Note:783815.1 DBA_APPLY_INSTANTIATED_OBJECTS
and ORA-26687
The user designated as the apply user does not have the
necessary privileges to perform SQL operations on the
replicated objects. The apply user privileges must be
granted by an explicit grant of each privilege. Granting
these privileges through a role is not sufficient for the
Streams apply user.
PLS-00201: identifier 'STRMADMIN.TO_AWARDFCT_RULEDML' must be declared
SELECT ap.APPLY_NAME,
       DECODE(ap.APPLY_CAPTURED, 'YES', 'Captured LCRS', 'NO', 'User-Enqueued', 'UNKNOWN') APPLY_CAPTURED,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_MESSAGES_DEQUEUED,
       r.TOTAL_MESSAGES_SPILLED,
       r.SGA_USED,
       oldest_scn_num,
       oldest_xidusn||'.'||oldest_xidslt||'.'||oldest_xidsqn oldest_transaction_id
  FROM gV$STREAMS_APPLY_READER r, gV$SESSION s, DBA_APPLY ap
 WHERE r.SID = s.SID
   AND r.SERIAL# = s.SERIAL#
   AND r.APPLY_NAME = ap.APPLY_NAME;
SELECT APPLY_NAME,
       (HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
       TO_CHAR(HWM_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') "Message Creation",
       TO_CHAR(HWM_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
       HWM_MESSAGE_NUMBER
  FROM V$STREAMS_APPLY_COORDINATOR;
SELECT APPLY_NAME,
       (APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
       TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') "Message Creation",
       TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
       APPLIED_MESSAGE_NUMBER
  FROM DBA_APPLY_PROGRESS;
4. Also check Apply Servers:
SELECT r.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_ASSIGNED,
       r.TOTAL_MESSAGES_APPLIED
  FROM gV$STREAMS_APPLY_SERVER r, gV$SESSION s
 WHERE r.SID = s.SID
   AND r.SERIAL# = s.SERIAL#;
RECORD LOW-WATERMARK
ADD PARTITION
DROP PARTITION
EXECUTE TRANSACTION
WAIT COMMIT
WAIT DEPENDENCY
conn / as sysdba
break on apply_name
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'strm01_apply',
END;
SELECT ap.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS,
       c.STATE,
       c.TOTAL_RECEIVED RECEIVED,
       c.TOTAL_ASSIGNED ASSIGNED,
       c.TOTAL_APPLIED APPLIED,
       c.TOTAL_ERRORS ERRORS,
       c.total_ignored,
       c.total_rollbacks,
       c.TOTAL_WAIT_DEPS WAIT_DEPS,
       c.TOTAL_WAIT_COMMITS WAIT_COMMITS
  FROM gV$STREAMS_APPLY_COORDINATOR c, gV$SESSION s, DBA_APPLY ap
 WHERE c.SID = s.SID
   AND c.SERIAL# = s.SERIAL#
   AND c.APPLY_NAME = ap.APPLY_NAME;
Related Products: Oracle Database Products > Oracle Database Suite > Oracle Database >
Oracle Database - Enterprise Edition > Streams (Replication and Messaging)
Auto Correction Example for Streams using Error Handlers (Doc ID 387829.1)
Type: SAMPLE CODE
Status: PUBLISHED
Purpose
Requirements
Applies to: see below
Caution
This sample code is provided for educational purposes only, and is not supported by Oracle
Support. It has been tested internally, however, we do not guarantee that it will work for you.
Ensure that you run it in your test environment before using.
Sample Code
SQL SCRIPT
===========
Rem
Rem Copyright (c) 2002, 2004, Oracle. All rights reserved.
Rem
Rem NAME
Rem Best Practices Examples - Auto Correction
Rem
Rem DESCRIPTION
Rem Auto-correction with control at source;
Rem control table is created and replicated to all sites.
Rem Setting auto_correct to 'Y' handles the case of an error during apply.
Rem
Rem NOTES
Rem We recommend that PKs are not modified when autocorrection is
Rem being used.
Rem
SET ECHO ON
SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100
------------------------------------------------------
-- - Create Streams Queues
-- - Verify if they are secure Queues.
------------------------------------------------------
connect stradm/stradm@dbs2.net;
exec dbms_streams_adm.set_up_queue( ) ;
select owner , queue_table , secure from dba_queue_tables
where queue_table = 'STREAMS_QUEUE_TABLE' order by
owner , queue_table;
connect stradm/stradm@dbs1.net;
exec dbms_streams_adm.set_up_queue( ) ;
select owner , queue_table , secure from dba_queue_tables
where queue_table = 'STREAMS_QUEUE_TABLE' order by
owner , queue_table;
------------------------------------------------------
-- Create propagation rules to dbs2.net
------------------------------------------------------
connect stradm/stradm@dbs1.net;
begin
dbms_streams_adm.add_schema_propagation_rules(
schema_name => 'hr',
streams_name => 'dbs1net_to_dbs2net',
source_queue_name => 'stradm.streams_queue',
destination_queue_name => 'stradm.streams_queue@'||:site2,
include_dml => true,
include_ddl => true,
source_database => :site1);
end;
/
connect hr/hr@dbs1.net
-------------------------------------------------------------------------
-- Create control table for control at source
-- auto_correct is set to 'Y' if error needs to
-- be handled
-------------------------------------------------------------------------
connect hr/hr@dbs2.net
-------------------------------------------------------------------------
-- Create control table for control at source, at the target database
-- auto_correct is set to 'Y' if error needs to
-- be handled
-------------------------------------------------------------------------
CREATE TABLE control_table (sname varchar2(30),
oname varchar2(30) ,
auto_correct varchar2(2) );
connect stradm/stradm@dbs1.net
begin
dbms_streams_adm.add_table_rules(
table_name => 'hr.regions',
streams_type => 'capture',
streams_name => 'capture_hr',
queue_name => 'stradm.streams_queue',
include_dml => true,
include_ddl => true);
end;
/
begin
dbms_streams_adm.add_table_rules(
table_name => 'hr.control_table',
streams_type => 'capture',
streams_name => 'capture_hr',
queue_name => 'stradm.streams_queue',
include_dml => true,
include_ddl => true);
end;
/
select capture_name, queue_name, queue_owner, status
from dba_capture@dbs1 order by 1,2;
connect stradm/stradm@dbs1.net
exec :scn:= dbms_flashback.get_system_change_number;
------------------------------------------------------
-- - Create apply @ dbs2.net
------------------------------------------------------
connect stradm/stradm@dbs2.net
begin
dbms_streams_adm.add_table_rules(
table_name => 'hr.regions',
streams_type => 'apply',
streams_name => 'apply_from_dbs1net',
queue_name => 'stradm.streams_queue',
include_dml => true,
include_ddl => true,
source_database => :site1);
end;
/
begin
dbms_streams_adm.add_table_rules(
table_name => 'hr.control_table',
streams_type => 'apply',
streams_name => 'apply_from_dbs1net',
queue_name => 'stradm.streams_queue',
include_dml => true,
include_ddl => true,
source_database => :site1);
end;
/
------------------------------------------------------
-- - Start apply at dbs2.net and capture at dbs1.net
------------------------------------------------------
connect stradm/stradm@dbs2.net
begin
dbms_apply_adm.set_parameter(
apply_name => 'apply_from_dbs1net',
parameter => 'disable_on_error',
value => 'N');
end;
/
begin
dbms_apply_adm.start_apply(
apply_name => 'apply_from_dbs1net');
end;
/
connect stradm/stradm@dbs1.net
begin
dbms_capture_adm.start_capture(
capture_name => 'capture_hr');
end;
/
connect stradm/stradm@dbs2.net
create sequence reg_exception_s start with 9000;
-------------------------------------------------------------------------
-- Create error handler package. When an error is raised
-- and control is set to 'Y' at source, the error is handled
-------------------------------------------------------------------------
ret := ov2(i).DATA.GetNumber(r_id) ;
END IF;
ELSIF lcr.get_object_name() = 'REGIONS' and
auto_correct_mode = 'N' THEN
ret := ov2(i).DATA.GetVarchar2(vc) ;
vc := vc || '_A'||r_id;
ov2(i).DATA := Sys.AnyData.ConvertVarchar2( vc ) ;
END IF;
END IF;
END LOOP;
-- set NEW values in the LCR.
lcr.set_values ( value_type => 'NEW' , value_list => ov2 );
ret := tmp.getvarchar2(vc);
vc := vc || '_A'||r_id;
lcr.set_value('NEW','REGION_NAME',sys.anydata.convertvarchar2(vc) );
lcr.execute ( true );
END IF;
END IF;
-- if the delete is failing because of a foreign key constraint,
-- handle this if auto_correct is TRUE
ELSIF cmd_type = 'DELETE' and auto_correct_mode = 'Y'
THEN
-- Delete the row referencing region_id and delete from regions
IF lcr.get_object_name() = 'REGIONS' THEN
IF ( lcr.get_value ( 'OLD','REGION_ID' ) is not null ) THEN
END IF;
END IF;
END IF;
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
tmp := lcr.get_value ( 'NEW','REGION_ID' );
ret := tmp.getnumber ( r_id );
Sample Output
---------------------------------------------------------------------
-- Case 1 : Auto-correction for a hr.regions
-- Auto_correct is set to TRUE. The value is replicated and
-- the value is checked inside an error handler and error is
-- handled. Supplemental logging is enabled in hr.regions
---------------------------------------------------------------------
connect hr/hr@dbs1.net;
alter table regions add SUPPLEMENTAL LOG GROUP regions_log_group (region_id, region_name);
connect hr/hr@dbs1.net
connect system/manager@dbs1.net
set serveroutput on
-- sleep to allow for replication; this time can be adjusted (default = 5 min)
exec dbms_lock.sleep(300);
connect stradm/stradm@dbs2.net
-- no error message expected
select error_message from dba_apply_error;
-------------------------------------------------------------------------
-- Auto_correct is set to FALSE. The value is replicated and
-- the value is checked inside an error handler
-- Error is handled and a new value is inserted instead of the failing value
-- Disable supplemental logging for the hr.regions
-------------------------------------------------------------------------
connect hr/hr@dbs1.net;
alter table regions drop SUPPLEMENTAL LOG GROUP regions_log_group;
connect system/manager@dbs1.net
set serveroutput on
-- wait for replication; time can be adjusted (default = 5 min)
exec dbms_lock.sleep(300);
connect hr/hr@dbs2.net
-- one more row is inserted
select * from regions
where region_id = 1001 or region_id >= 9000
order by region_id, region_name;
connect stradm/stradm@dbs2.net
-- No errors reported
select error_message from dba_apply_error;
------------------------------------------------------
-- Test for update
------------------------------------------------------
connect hr/hr@dbs1.net;
delete from control_table;
-- Set auto correct to true
insert into control_table values('HR','REGIONS','Y');
commit;
connect system/manager@dbs1.net
set serveroutput on
-- sleep for 5 min
exec dbms_lock.sleep(300);
connect hr/hr@dbs2.net
select * from regions
where region_id = 1001 or region_id = 1002 or region_id >= 9000
order by region_id, region_name;
connect stradm/stradm@dbs2.net
-- no error message expected
select error_message from dba_apply_error;
------------------------------------------------------
-- - Test for Delete. Violate foreign key constraint
-- - After bug fix 2271626, set an apply parameter.
------------------------------------------------------
connect stradm/stradm@dbs2.net
begin
dbms_apply_adm.set_parameter(
apply_name => 'apply_from_dbs1net',
parameter => '_restrict_all_ref_cons',
value => 'N');
end;
/
connect hr/hr@dbs2.net
insert into countries values ( 'N1','N1land',1003 );
commit;
-------------------------------------------------------------------------
-- Delete region_id = 1003 at dbs1.net. This will raise an error at
-- dbs2.net, since the row is referenced by countries. The error is
-- handled by the error handler because auto-correct is set to true
-------------------------------------------------------------------------
connect hr/hr@dbs1.net
delete from regions where region_id = 1003;
commit;
connect system/manager@dbs1.net
set serveroutput on
-- sleep for 5 min
exec dbms_lock.sleep(300);
connect stradm/stradm@dbs2.net
select error_message from dba_apply_error;
References
Related Products: Oracle Database Products > Oracle Database Suite >
Oracle Database > Oracle Database - Enterprise Edition > Streams
(Replication and Messaging)
Keywords
Errors: ORA-01403
Master Note for Streams Recommended Configuration (Doc ID 418755.1)
Type: BULLETIN
Status: PUBLISHED
Purpose
Scope
Details
2.1 Significance of AQ_TM_PROCESSES with respect to Oracle Streams
3.0 Database Storage
3.2 Separate queues for capture and apply
4.0 Privileges
5.5 Flow Control
5.6 Perform periodic maintenance
Database Version 9iR2 and 10gR1
Database Version 10gR2 and above
5.7 Capture Process Configuration
5.8 Propagation Configuration
5.9 Additional Configuration for RAC Environments for a Source Database
6.2 Instantiation
6.3 Conflict Resolution
6.4 Apply Process Configuration
6.5 Additional Configuration for RAC Environments for an Apply Database
OPERATION
Global Name
Certification/compatibility/interoperability between different database versions
Apply Error Management
Backup Considerations
NLS and Characterset considerations
Batch Processing
Source Queue Growth
Streams Cleanup/Removal
Automatic Optimizer Statistics Collection
MONITORING
Alert Log
Monitoring Utility STRMMON
References
Applies to:
Purpose
Oracle Streams enables the sharing of data and events in a data
stream either within a database or from one database to another.
This Note describes best practices for Oracle Streams configurations
for both downstream capture and upstream (local) capture in version
9.2 and above.
Scope
Details
Configuration
Software Version
Configure separate queues for changes that are captured locally and
for receiving captured changes from each remote site. This is
especially important when configuring bi-directional replication
between multiple databases. For example, consider the situation
where database db1.net replicates its changes to database db2.net,
and db2.net replicates to db1.net. Each database maintains two
queues: one for capturing the changes made locally, and another for
receiving changes from the other database.
exec dbms_streams_adm.set_up_queue(queue_table_name=>'QT_CAP_SITE_A', queue_name=>'CAP_SITEA');
exec dbms_streams_adm.set_up_queue(queue_table_name=>'QT_APP_FROM_SITEB', queue_name=>'APP_FROM_SITEB');
4.0 Privileges
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
In Oracle 10g and above, all the above (except DBA) can be granted
using the procedure:
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
Oracle redo log files contain the redo information needed for instance
and media recovery. However, some redo-based applications such as
Streams, Logical Standby, and ad hoc LogMiner need additional
information logged into the redo log files. The process of logging
this additional information into the redo files is called Supplemental
Logging. Confirm that supplemental logging is enabled at each source
site.
Identification Key Logging places the before and after images of the
specified type of columns in the redo log files. This type of logging
can be specified for ALL, PRIMARY KEY, UNIQUE, and FOREIGN KEY
columns, and is enabled with the ALTER DATABASE ADD SUPPLEMENTAL LOG
DATA command.
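A typical 10g-style form of this command is shown below; the clause
list here (PRIMARY KEY, UNIQUE) is illustrative and should be adjusted
to the key types you actually need:

```sql
-- Illustrative: enable database-level identification key logging for
-- primary key and unique columns; add FOREIGN KEY or ALL as needed
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
```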
You can check the database level supplemental logging using the
following query
select SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI,SUPPLEMENTAL_LOG_DATA_FK,
SUPPLEMENTAL_LOG_DATA_all from v$database;
select owner, log_group_name, table_name, column_name,
logging_property from DBA_LOG_GROUP_COLUMNS;
1. All columns that are used in primary keys at the source site, for
which changes are applied on the target, must be unconditionally
logged at the table level or at the database level.
2. All columns that are used as substitute columns at the apply site
must be unconditionally logged.
3. All columns that are used in DML handlers, error handlers, rules,
rule-based transformations, virtual dependency definitions, or subset
rules must be unconditionally logged.
4. All columns that are used in a column list for conflict resolution
methods must be conditionally logged if more than one column from the
source is part of the column list.
In Oracle 9iR2, when the threshold for memory of the buffer queue is
exceeded, Streams will write the messages to disk. This is sometimes
referred to as "spillover". When spillover occurs, Streams can no
longer take advantage of the in-memory queue optimization. One
technique to minimize this spillover is to implement a form of flow
control. See the following note for the scripts and pre-requisites:
A. Configuring Capture
Minimize the number of rules added into the process rule set. A
good rule of thumb is to keep the number of rules in the rule set to
less than 100. If more objects need to be included in the rule set,
consider constructing rules using the IN clause; for example, a
single rule condition could cover the six TB_M21* tables in the
MYACCT schema.
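As a sketch of such a rule, one condition using IN could be created
with DBMS_RULE_ADM; the rule name and the specific table names below
are illustrative, not taken from the note:

```sql
BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'strmadmin.myacct_tb_m21_dml',  -- hypothetical name
    -- one condition matching all six hypothetical TB_M21* tables
    condition => ':dml.get_object_owner() = ''MYACCT'' AND ' ||
                 ':dml.get_object_name() IN ' ||
                 '(''TB_M211'',''TB_M212'',''TB_M213'',' ||
                 '''TB_M214'',''TB_M215'',''TB_M216'')');
END;
/
```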
The Streams capture process requires a rule set with rules. The
ADD_GLOBAL_RULES procedure can be used to capture DML changes for the
entire database, as long as a negative rule set is created for the
capture process that includes rules for objects with unsupported
datatypes. ADD_GLOBAL_RULES can also be used to capture all DDL
changes for the database.
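A minimal sketch of this combination, assuming a capture process and
queue named as below (all names are placeholders): the first call
builds the positive global rule, the second adds a table rule to the
negative rule set via inclusion_rule => FALSE.

```sql
BEGIN
  -- Positive rule set: capture DML for the entire database
  DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    streams_type   => 'capture',
    streams_name   => 'capture_all',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => TRUE);
  -- Negative rule set: exclude a table with an unsupported datatype
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'app.tab_with_unsupported_type',  -- hypothetical
    streams_type   => 'capture',
    streams_name   => 'capture_all',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => TRUE,
    inclusion_rule => FALSE);
END;
/
```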
B. Capture Parameters

PARALLELISM=1 (default: 1). Number of parallel execution servers used
to configure one or more preparer processes that prefilter changes for
the capture process. The recommended value is 1.

_CHECKPOINT_FREQUENCY. A logminer checkpoint is requested by default
every 10 MB of redo mined. If the value is set to 500, a logminer
checkpoint is requested after every 500 MB of redo mined. Increasing
the value of this parameter is recommended for active databases with
significant redo generated per hour. It should not be necessary to
configure _CHECKPOINT_FREQUENCY in 10.2.0.4 or higher.

exec dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');
A. Configuring Propagation
The rules in the rule set for propagation can differ from the rules
specified for the capture process. For example, to configure that all
captured changes be propagated to a target site, a single
ADD_GLOBAL_PROPAGATION_RULES procedure can be specified for
the propagation even though multiple ADD_TABLE_RULES might have
been configured for the capture process.
B. Propagation mode
For new propagation processes configured in 10.2 and above, set the
queue_to_queue propagation parameter to TRUE. If the database is
RAC enabled, an additional service is created typically named in the
format: sys$schema.queue_name.global_name when the Streams
subscribers are initially created. A streams subscriber is a defined
propagation between two Streams queues or an apply process with
the apply_captured parameter set to TRUE. This service automatically
follows the ownership of the queue on queue ownership switches (ie,
instance startup, shutdown, etc). The service name can be found in
the network name column of DBA_SERVICES view.
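A hedged sketch of creating such a queue-to-queue propagation; the
propagation name, queue names, and database link below are
placeholders:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
    streams_name           => 'prop_src_to_dest',              -- hypothetical
    source_queue_name      => 'strmadmin.cap_queue',           -- hypothetical
    destination_queue_name => 'strmadmin.app_queue@dest_db',   -- hypothetical dblink
    include_dml            => TRUE,
    include_ddl            => TRUE,
    queue_to_queue         => TRUE);  -- 10.2+ queue-to-queue propagation
END;
/
```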
exec dbms_aqadm.alter_propagation_schedule('strmadmin.streams_queue','DEST_DB',destination_queue=>'Q1',latency=>5);
D. Network Connectivity
When using Streams propagation across a Wide Area Network
(WAN), increase the session data unit (SDU) to improve the
propagation performance. The maximum value for SDU is 32K
(32767). The SDU value for network transmission is negotiated
between the sender and receiver sides of the connection: the
minimum SDU value of the two endpoints is used for any individual
connection. In order to take advantage of an increased SDU for
Streams propagation, the receiving side sqlnet.ora file must include
the default_sdu_size parameter. The receiving side listener.ora must
indicate the SDU change for the SID. The sending side tnsnames.ora
connect string must also include the SDU modification for the
particular service.
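As an illustration of the three files mentioned above (host names,
SIDs, and service names are placeholders; verify the syntax against
your Net Services version):

```
# sqlnet.ora on the receiving side
DEFAULT_SDU_SIZE=32767

# listener.ora on the receiving side (SDU set for the SID)
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=(SDU=32767)(SID_NAME=db)(ORACLE_HOME=/u01/app/oracle)))

# tnsnames.ora connect string on the sending side
DEST_DB=
  (DESCRIPTION=
    (SDU=32767)
    (ADDRESS=(PROTOCOL=tcp)(HOST=dest-host)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=dest.mycompany.com)))
```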
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits # min, default, and max
# number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
Archive Logs
The archive log threads from all instances must be available to any
instance running a capture process. This is true for both local and
downstream capture.
Queue Ownership
For queues created with Oracle Database 10g Release 2, a service will
be created with the service name= schema.queue and the network
name SYS$schema.queue.global_name for that queue. If the
global_name of the database does not match the
db_name.db_domain name of the database, be sure to include the
global_name as a service name in the init.ora.
For example, consider the tnsnames.ora file for a database with the
global name db.mycompany.com. Assume that the alias name for the
first instance is db1 and that the alias for the second instance is db2.
The tnsnames.ora file for this database might include the following
entries:
db.mycompany.com=
(description=
(load_balance=on)
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)))
db1.mycompany.com=
(description=
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db1)))
db2.mycompany.com=
(description=
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db2)))
Propagation Restart
Example:
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation');
or
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation', force=>true);
exec DBMS_PROPAGATION_ADM.START_PROPAGATION('name_of_propagation');
6.1. Privileges
Examples:
Privileges for table level DDL: CREATE (ANY) TABLE , CREATE (ANY)
INDEX, CREATE (ANY) PROCEDURE
6.2. Instantiation
For DDL Set Instantiation SCN at next higher level (ie, SCHEMA or
GLOBAL level).
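For instance, a schema-level instantiation SCN might be set as below;
the schema, source database name, and the :scn bind variable are
placeholders for values captured at the source:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'HR',        -- hypothetical schema
    source_database_name => 'DBS1.NET',  -- hypothetical source
    instantiation_scn    => :scn);       -- SCN obtained at the source
END;
/
```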
A. Rules
B. Parameters
The DBA_APPLY_SPILL_TXN and V$STREAMS_APPLY_READER views enable you
to monitor the number of transactions and messages spilled by an
apply process. Refer to Document 365648.1, Explain
TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams.
exec
dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');
Queue Ownership
The apply process runs at the owning instance of the target queue.
OPERATION
Global Name
Local capture:
Downstream capture:
Backup Considerations
1. Ensure that any manual backup procedures that include any of the
following statements include a non-null Streams tag:
The tag should be chosen such that these DDL commands will be ignored
by the capture rule set.
3. Ensure that all archive logs (from all threads) are available.
Database recovery depends on the availability of these logs, and a
missing log will result in incomplete recovery.
Batch Processing
For best performance, the commit point for batch processing should
be kept low. It is preferable that excessively large batch processing be
run independently at each site. If this technique is utilized, be sure to
implement DBMS_STREAMS.SET_TAG to skip the capture of the batch
processing session. Setting this tag is valid only in the connected
session issuing the set_tag command and will not impact the capture
of changes from any other database sessions.
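A sketch of wrapping a batch job with a session tag; the tag value
'11' is arbitrary (it only needs to be non-null and matched by the
capture rule sets):

```sql
-- Set a non-null tag so changes from this session are not captured
exec DBMS_STREAMS.SET_TAG(tag => HEXTORAW('11'));
-- ... run the batch processing in this session ...
-- Restore the default (null) tag so capture resumes for this session
exec DBMS_STREAMS.SET_TAG(tag => NULL);
```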
DDL Replication
When replicating DDL, keep in mind the effect the DDL statement will
have on the replicated sites. In particular, do not allow system
generated naming for constraints or indexes, as modifications to
these will most likely fail at the replicated site. Also, storage clauses
may cause some issues if the target sites are not identical.
Propagation
At times, the propagation job may become "broken" or fail to start
after an error has been encountered or after a database restart. The
typical solution is to disable the propagation and then re-enable it.
exec dbms_propagation_adm.stop_propagation('propagation_name');
exec dbms_propagation_adm.start_propagation('propagation_name');
If the above does not fix the problem, perform a stop of propagation
with the force parameter and then start propagation again.
exec dbms_propagation_adm.stop_propagation('propagation_name', force=>true);
exec dbms_propagation_adm.start_propagation('propagation_name');
Source queue may grow if one of the target sites is down for an
extended period, or propagation is unable to deliver the messages to
a particular target site (subscriber) due to network problems for an
extended period.
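One way to watch for this growth is to query the buffered queue views
(10g view and column names assumed here):

```sql
-- Messages currently in memory vs. spilled to disk, per buffered queue
SELECT queue_schema, queue_name, num_msgs, spill_msgs
FROM   gv$buffered_queues;
```

A steadily rising spill_msgs for a queue is a sign that a subscriber
(propagation or apply) is not keeping up.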
Streams Cleanup/Removal
aq$_{qtable_name}_i
aq$_{qtable_name}_h
aq$_{qtable_name}_t
aq$_{qtable_name}_p
aq$_{qtable_name}_d
aq$_{qtable_name}_c
Oracle has the ability to restore old statistics on tables, including
data dictionary tables, using the dbms_stats.restore* APIs. This
feature can be used for short-term resolution, but the real solution
is the first one: lock the optimizer statistics of volatile tables.
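Illustrative calls for both approaches; the owner and table names are
hypothetical:

```sql
-- Lock optimizer stats on a volatile Streams queue table
exec DBMS_STATS.LOCK_TABLE_STATS('STRMADMIN','STREAMS_QUEUE_TABLE');

-- Or restore the statistics the table had 24 hours ago
exec DBMS_STATS.RESTORE_TABLE_STATS('STRMADMIN','STREAMS_QUEUE_TABLE', SYSTIMESTAMP - INTERVAL '1' DAY);
```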
MONITORING
Alert Log
This is a customer overview of the "Bug Description" documents which can be seen in My
Oracle.
Each article is a placeholder note for a summary description of a bug (defect) in an Oracle
product. The articles give the following summary information on an individual bug:
Related To - Key features / product areas / parameters / views etc.. which the issue is related
to
Many of the articles will show very little information against the description of the problem
but may still be useful as they show which versions are likely to be affected and can give some
idea of the impact of the bug.
IMPORTANT:
These short articles will not usually give enough information to identify that you are hitting, or
may hit, a specific issue. Always check with Oracle Support if a particular issue is of interest.
Also please note that some of the workarounds given may be very specific, or may have side
effects. Again contact Oracle Support for information about any particular bug.
Please note that not all bugs have a bug description present. The aim is to have a description
for all customer related bugs which appear in RDBMS Patch Sets. Other products do not have
these summaries present.
Affects
This section shows the product that the bug is reported against, the
range of versions believed to be affected, the versions which are
confirmed as being affected, and details of the platforms affected. If
the issue is believed to be a regression then that is indicated also.
Product (Component)
The product / component shows the code area that the bug is filed against. In some cases the
tools affected may differ from the actual code area where the fix occurs. Eg: An Oracle Server
(CORE) fix may only show up as an Export / Import issue. You are advised to see the "Related
To" section of the bug description for features / products believed to be affected.
Platforms affected
If the bug is specific to certain platforms this is indicated here. If the bug is "Generic" this is
shown. A generic bug is one which affects all or most platforms, although the exact symptoms
and chance of hitting the bug may differ between platforms. eg: Some bugs may affect only big
or little endian platforms but the bug itself is marked as generic if the issue is in code which is
common to all platforms.
Regressions
A regression is a problem which is introduced in a particular release but does not affect the
default behaviour in lower releases. Hence it is something to watch out for if upgrading.
Problems with new features are NOT considered as regressions unless the feature is enabled
by default in the new release and it is not obviously controlled by some compatibility related
parameter. The most serious regressions are those introduced by application of a Patch Set.
Changes in behaviour
Some bug fixes intentionally introduce a change in documented behaviour. Ideally such
changes should be protected by some event or parameter to allow them to be turned off but
this is not always the case.
Affects 12.1.0.1 (current version): Consider using any workaround if
given. Consider using the latest Patch Set if available on your
platform. It may be possible to get an interim patch.
Affects 10.2.0.2 or 10.2.0.1: Consider using the latest patch set.
Interim patches are no longer created for this release.
Affects 10.1.0.2, 10.1.0.3 or 10.1.0.4: Consider using the latest
patch set (10.1.0.5) or 10.2 or 11g. Interim patches are no longer
created for this release.
Fixed
This lists the releases where the bug has been fixed.
Interim / one-off patches are NOT listed in this section. You can search the "Patches" section of
My Oracle using the bug number to check for one-off / interim emergency patch availability.
Note that if an issue is listed as fixed in a particular patch set then the fix should also be
included in all subsequent Patch Sets for that release. Eg: If a bug is listed as fixed in 9.2.0.3
then the fix will also be included in the 9.2.0.4 Patch Set. It is advisable always to use the latest
Patch Set rather than a specific patch set version.
This bug fix is only available as an interim patch: This fix has not
been included in any full database release nor any Patch Set and is
only available as an interim (one-off) patch. Typically such issues
are addressed in some other manner in a later release, and so the fix
is not applicable to versions other than those detailed in the bug
description.
This issue is reported as a bug but currently has no fix coded: The
issue may be fixed at some point in the future, but the fix may be
done under some other bug number or even under some separate project.
Expected to be Fixed in Oracle 12c Release 2: This is just a marker
version for a future version of Oracle. There is currently no such
version. A fix version of 12.2.0.0 just means that the issue has been
fixed in 12c Release 2, but this is not guaranteed.
Fixed in the listed 10.2.0.4 Patch Set Update (PSU) Overlay patch: See
Note:854428.1 for details of Patch Set Updates (PSU). See
Note:1340024.1 for details of fixes in each 10.2.0.4 PSU. Note that
10.2.0.4.5 onwards PSUs are supplied as an overlay patch which must be
applied on top of the 10.2.0.4.4 Patch Set Update.
Fixed in one of 9.0.1.0, 9.0.1.2, 9.0.1.3, 9.0.1.4, 9.0.1.5: These are
old 9i releases. It is advisable to use a newer release such as 10g or
11g. Summary 9.0 information can be seen in Note:149018.1.
Note: If a release is shown as "does not exist yet" then this indicates a planned future base
release or Patch Set. A patch set or base release with the indicated version is expected to be
released at some time in the future but this is not a guarantee that there will be such a release
- plans are subject to change.
Symptom
Code Improvement
The code fix in this bug is considered as an enhancement. This may be as simple as enhanced
diagnostics or may be a small enhancement to functionality.
Corrupt/Bad Backup
The problem can result in a bad backup being produced, such that restore from that backup
may be impossible or not restore data to be the same as it was at the time of the backup.
Corruption
A corruption issue which does not fit into one of the corruption issues below.
Corruption (Dictionary)
The Oracle data dictionary can end up containing incorrect or inconsistent data. Sometimes
such issues can be repaired with the help of Oracle Support, and sometimes it requires the
database to be rebuilt or restored. Never attempt to repair dictionary corruptions by yourself.
Doing so can make your database unsupportable.
Either a false corruption error is reported or code which checks for corrupt blocks / data is
incorrect and may not notice that bad data really is corrupt.
Corruption (Index)
This issue can cause an index / table data mismatch where the problem can typically be
resolved by rebuilding the index.
Note: When an index is built / rebuilt, it may be built based on data
in the corrupt index itself. Hence it is very important to drop ALL
corrupt indexes before recreating them.
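A sketch of the repair path described above; the table and index names
are hypothetical:

```sql
-- Detect a table/index mismatch
ANALYZE TABLE hr.employees VALIDATE STRUCTURE CASCADE;
-- Drop the corrupt index first, then recreate it from the table data
DROP INDEX hr.emp_name_ix;
CREATE INDEX hr.emp_name_ix ON hr.employees (last_name);
```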
Corruption (Logical)
This issue can cause logical corruption to data. This category is typically used where data itself
can get corrupted but the underlying table structure is intact.
Eg: "DBVERIFY" or "ANALYZE VALIDATE" may not report any problems for any table affected.
This category is closely related to Wrong Results in that wrong results issues can cause logical
corruption if used to update data or take decisions. The "Corruption" category is typically used
where the corrupt data is persistent (stored).
Corruption (Physical)
This issue can cause physical corruption. Physical corruption is typically taken to mean that the
underlying structure of datablocks or files can be corrupted.
Eg: "DBVERIFY" or "ANALYZE VALIDATE" is likely to report a problem.
Deadlock
This issue can either lead to a deadlock scenario, or a deadlock situation may go unnoticed. In
some cases a false deadlock may be reported.
Diagnostic output relates to output in Oracle trace files etc.. used by Oracle Support to
diagnose a problem. The diagnostic information may not be sufficiently detailed, or may
contain incorrect, misleading or incomplete output. This sort of issue does not normally affect
normal day to day running of a database but can hamper diagnosis when things go wrong.
The issue causes an unexpected or incorrect error to be reported. Details of the actual error/s
appear in the description of the bug.
Excessive CPU Usage
This issue can cause high CPU usage but is not typically a spin scenario - just excessive CPU use.
For CPU spinning issues see Hang / Spin.
Feature Unusable
A process may hold a shared resource a lot longer than normally expected leading to many
other processes having to wait for that resource. Such a resource could be a lock, a library
cache pin, a latch etc.. The overall symptom is that typically numerous processes all appear to
be stuck, although some processes may continue unhindered if they do not need the blocked
resource.
A process may hang, typically in a wait state. Note that this is different to a process which is
spinning and consuming CPU.
A process enters a tight CPU loop so appears to hang but is actually consuming CPU.
An internal error (ORA-600) may occur. Details of the actual error/s appear in the description
of the bug.
Latch Contention
Memory is continually consumed appearing as a memory leak. Some memory leaks are not
"true" leaks in that the memory may be freed up when a long running operation completes but
the issue is still marked as a leak if the operation should run without the memory growth.
The issue leaks some form of resource (other than memory or CPU).
Examples are cursor leaks , file handle leaks etc...
Memory Corruption
This issue can result in memory corruption. Memory corruptions can have side effects of
signalling unexpected errors, unexpected internal errors or can even produce incorrect results
depending on how the memory gets corrupted. The text description of the bug usually
indicates if the issue affects private or shared memory and whether the issue is a client or
server side memory corruption issue.
Mutex Contention
Mutex contention may be seen. Typically mutex contention is focused around specific objects
or SQL statements . As mutexes can use very tight loops with only a yield of the CPU between
iterations then mutex contention may often be accompanied by increased CPU usage.
The optimizer estimates a bad cardinality when evaluating the best execution plan. This can
typically lead to a suboptimal execution plan being chosen as the cost computations are based
on bad estimates.
This issue can cause parsing of a SQL statement to take excessive resource and time. In this
context "parsing" includes time taken in the optimizer to choose the best plan for a statement.
If a SQL statement takes a long time or a lot of resource during the parse / optimize operations
this can cause waits in other sessions wanting to execute the same SQL statement.
One can typically help the optimizer portion of parse time by use of hints or outlines to reduce
the number of options that the optimizer has to consider.
Query performance is affected. This may be due to a poor execution plan or due to unnecessary operations to execute the query.
The process may die unexpectedly. More details on the likely functions or error at the time of
the dump should appear in the bug description.
This issue can show as errors during relink operations or as undefined symbols.
This issue is either a security loophole or a vulnerability to denial of service attacks. Such issues
are typically either alerted and/or are included in Oracle Critical Patch Updates .
The amount of space used for the storage of database objects is affected by this issue.
Typically more space than expected is used. Note that this relates to the "on disk" storage
space in the database and not to memory space used.
This issue can cause an execution plan for a statement to change suddenly. ie: The plan is not
stable and the SQL may execute quickly sometimes and poorly at other times.
This issue can cause unwanted tracefiles , tracefile content, alert log entries or other
extraneous output which might be considered a nuisance. Such files / entries can typically be
deleted at regular intervals but be careful not to also remove useful trace / output.
Wrong Results
This bug can cause wrong results to be returned. If the source of the wrong results is used in
any form of data update or decision this issue could lead to permanent logical corruption.
A wrong version of a shared cursor may be used. eg: A wrong child cursor may be used. This
sort of problem can show as strange errors (such as ORA-942) or can lead to logically incorrect
behaviour, such as accessing data from tables in the wrong schema.
This issue can cause incorrect permissions to be set on files /
directories at OS level. Such permissions may prevent access when
expected OR may allow access to the files/directories by users that
should not normally have permission to read/write the file.
"hcheck.sql" is a custom script available in Note:136697.1 which can be executed to help check
for potential DB data dictionary inconsistencies. The script specifically includes checks for
dictionary inconsistencies that could be caused by this bug and reports any found with an
HCKE-nnnn or HCKW-nnnn message.
Task Related
Instance Startup
This issue can occur when attempting to start a database or ASM instance.
This issue can affect upgrade, downgrade or migration of a database between releases.
Performance Monitoring
Performance monitoring may be affected.
eg: performance monitoring views may not show correct information to identify a problem.
Recovery
Adaptive Cursor Sharing
This issue is related to the Adaptive Cursor Sharing functionality introduced in 11g. See
Note:836256.1 for details of this feature.
Relates to the use of analytic SQL constructs such as the windowing functions.
ANSI Joins
Relates to the use of ANSI joins. Often a workaround for problems with ANSI SQL is to recode
the SQL to Oracle conventional format.
Application Context
Connect By
This issue affects SQL which uses the "CONNECT BY" SQL clause.
Constraint Related
This issue is related to the use of constraints. The description should clarify which kinds of
constraint the problem may relate to. eg: CHECK constraints, use of Foreign key constraints etc..
Datatypes (AnyData)
Datatypes (LOBs/CLOB/BLOB/BFILE)
Relates to one of the large object (LOB) datatypes such as CLOB, BLOB or BFILE.
Datatypes (Objects/Types/Collections)
Datatypes (TIMESTAMP)
Relates to use of the TIMESTAMP datatype, or timezone data as used for timestamp datatypes.
Direct path operations may be affected. Direct path operations can occur in various places. eg:
INSERT /*+APPEND*/ type operations use direct path access at SQL level, whilst direct path
SQL Load and direct path OCI APIs can also use this form of data access. For issues affecting
direct path operations a workaround can often be to use the equivalent non-direct path
option.
This issue relates to the use of rules / expression filters. eg: As created by DBMS_RLMGR
Hash Join
A HASH Join is a specific form of join of row sources. This issue relates to this specific join
method. Hash joins can typically be disabled and other join methods used.
This issue relates to the use of literal replacement, which is used when the parameter
CURSOR_SHARING is set to either SIMILAR or FORCE. A workaround for such issues is to
disable literal replacement by setting CURSOR_SHARING=EXACT for any problem statement,
although this can then result in increased load on the shared pool in systems with high
concurrency.
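As a sketch of the workaround described above (the table and predicate are illustrative, not from the original note), literal replacement can be disabled session-wide, or a single problem statement can be exempted with a hint:

```sql
-- Disable literal replacement for the whole session
ALTER SESSION SET CURSOR_SHARING = EXACT;

-- Or exempt only the problem statement with the CURSOR_SHARING_EXACT hint
SELECT /*+ CURSOR_SHARING_EXACT */ *
FROM   orders
WHERE  order_id = 100;
```

The per-statement hint avoids the shared-pool pressure that a blanket CURSOR_SHARING=EXACT can cause in high-concurrency systems.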
Online DDL
Relates to the use of ONLINE DDL operations. Often a workaround for such issues is to use the
non-online equivalent, although that may then need a short outage to allow the operation to
run.
Optimizer
This problem is related to the SQL optimizer which determines the execution plan for a given
SQL statement.
Relates specifically to the use of bind peeking by the Cost Based Optimizer.
This issue affects, or is related to, the use of the SQL Plan Management (SPM) feature within
Oracle which was introduced in Oracle 11g. SQL Plan Management (SPM) is intended to allow
controlled plan evolution by only using a new plan after it has been verified to perform
better than the current plan. The feature is typically controlled via:
<Parameter:optimizer_capture_sql_plan_baselines>
<Parameter:optimizer_use_sql_plan_baselines>
<Package:DBMS_SPM>
See the "Performance Tuning Guide" for more details of this feature.
Relates to the use of WITH clauses within SQL statements. Most statements which use a WITH
clause can be recoded into equivalent SQL which does not use the WITH clause.
Parallel query automatic degree of parallelism (Auto DOP) is a Parallel Query feature
introduced in 11.2 which is enabled when the parameter PARALLEL_DEGREE_POLICY=AUTO .
If you encounter problems with this feature it can be disabled by setting
PARALLEL_DEGREE_POLICY to MANUAL .
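A minimal sketch of the Auto DOP workaround described above:

```sql
-- Disable automatic degree of parallelism instance-wide
ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = MANUAL;

-- Verify the current setting (SQL*Plus)
SHOW PARAMETER parallel_degree_policy
```

The parameter can also be set at session level with ALTER SESSION if only one workload is affected.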
Relates to the use of Parallel Query or Parallel DML. A workaround for such issues may be to
run the statement serially.
Regular Expressions
Result Cache
This issue affects, or is related to, the use of the Result Cache feature within Oracle which was
introduced in Oracle 11g. When the result cache is enabled then a query execution plan may
include a RESULT CACHE node in the plan. When such a query executes the database looks in
the cache memory to determine whether the result exists in the cache. If the result exists, then
the database retrieves the result from memory instead of executing the query. If the result is
not cached, then the database executes the query, returns the result as output, and stores the
result in the result cache.
<Parameter:RESULT_CACHE_MODE>
See the "Performance Tuning Guide" for more details of this feature.
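To illustrate the Result Cache behaviour described above (the query uses hypothetical table and column names; the hint and package are standard):

```sql
-- With RESULT_CACHE_MODE = MANUAL (the default), caching is requested per query
SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM   employees
GROUP  BY department_id;

-- If a cached result is suspected of being stale or corrupt, the cache can be flushed
EXEC DBMS_RESULT_CACHE.FLUSH;
```

Flushing the cache, or setting RESULT_CACHE_MODE = MANUAL and removing the hint, is a common way to take the feature out of the picture when diagnosing a suspected result-cache bug.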
Securefiles
Relates to the use of Secure File LOBS. Often a workaround may be to use the equivalent
BASICFILE lob.
Star Transformation
Relates to using STAR transformation in SQL statements. Sometimes such issues can be
avoided by disabling star transformation for the problem SQL.
eg: Set STAR_TRANSFORMATION_ENABLED=FALSE
Relates to using STAR temporary table transformations. Often such issues can be avoided by
using STAR transformation but without the temp table transformation.
eg: Set STAR_TRANSFORMATION_ENABLED=TEMP_DISABLE
Triggers
Truncate
Virtual Columns
This issue is related to the use of Virtual Columns. A virtual column is a column that is not
stored on disk but has a queryable value which is the result of some expression.
Virtual columns may be defined explicitly in the table definition or may be used implicitly by
some SQL operations (such as when a function based index is defined or used).
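A short sketch of both cases described above (table and index names are illustrative):

```sql
-- An explicit virtual column: TOTAL is computed on query, not stored on disk
CREATE TABLE order_lines (
  price NUMBER,
  qty   NUMBER,
  total NUMBER GENERATED ALWAYS AS (price * qty) VIRTUAL
);

-- A function-based index implicitly creates a hidden virtual column
CREATE INDEX emp_upper_name_ix ON employees (UPPER(last_name));
```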
Relates to the use of ASSM segments. eg: SEGMENT SPACE MANAGEMENT AUTO
BIGFILE Tablespaces
Bitmap Indexes
This issue is related to chained or migrated rows. A "chained" row is one where the row is split
into two or more pieces - the pieces may be stored in the same block or in different blocks.
Any row with more than 255 columns is internally stored in a chained manner. Row chaining
can typically occur if a row is updated with longer data than currently in place - the row then
has to be split and chained in order to fit in the new data. A migrated row is similar but occurs
where the entire row piece is moved leaving just the head piece in the original location in
order to retain the same "rowid" value. Delete and reinsert of chained / migrated rows can
often remove the chaining (unless the row is very long or has more than 255 columns).
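Chained and migrated rows can be listed with the standard ANALYZE command; this is a minimal sketch (the table name is illustrative, and the CHAINED_ROWS output table is created by the utlchain.sql script shipped with the database):

```sql
-- Create the CHAINED_ROWS output table
@?/rdbms/admin/utlchain.sql

-- List chained / migrated rows for a suspect table
ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows;

-- The rowids found can then be used to delete and reinsert the affected rows
SELECT head_rowid FROM chained_rows WHERE table_name = 'ORDERS';
```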
Domain Indexes
External Tables
Relates to the use of Index Only Tables. ie: Created with ORGANIZATION INDEX
Partitioned Tables
Relates to the use of read only tablespaces. eg: ALTER TABLESPACE READ ONLY
Space Management
Relates to space management within locally managed tablespaces. ie: Tablespaces created
with "EXTENT MANAGEMENT LOCAL"
SYSAUX Tablespace
This issue relates to the use of the SYSAUX tablespace which is a special auxiliary tablespace
that is a standard part of the database.
System Managed Undo (SMU)
Relates to the use of System Managed Undo (SMU). ie: As is used when UNDO_MANAGEMENT
is set to AUTO.
This issue is related to Advanced or Secure Networking. Such issues may relate to a specific
encryption or advanced Net feature.
eg: The use of a specific secure network protocol such as Kerberos or SSL etc..
The issue is related to the use of a database link between Oracle databases. Note that this
does NOT include database links to Heterogeneous Services / Gateways but only those
between Oracle instances.
Related to the use of Oracle Gateways / Heterogeneous Services generally (as opposed to a
specific gateway / HS service)
Gateways / ODBC
Oracle Names
This issue is related to Transparent Application Failover (TAF) . This is a feature of the client
that enables an application to automatically reconnect to a database if the database instance
to which the connection is made fails.
See Note:453293.1 for more details of TAF.
XA / Distributed Transactions
Advanced Queuing
This issue relates to use of the Advanced Queue features in the Oracle database.
This issue relates to use of the Automatic Memory Management feature of Oracle.
eg: As used when the MEMORY_TARGET parameter is used.
Problems in this area can often be avoided by manually configuring memory for the database.
"Block change tracking" is a feature that allows the database to keep track of blocks that have
been modified (changed). This might be enabled on a primary or standby database. If enabled
then RMAN uses a block change tracking file to identify changed blocks for incremental
backups. By reading this small bitmap file to determine which blocks changed, RMAN avoids
having to scan every block in the data file that it is backing up.
If a bug issue affects the Block Change Tracking feature then one can often work around /
avoid the bug issue by disabling this feature. Disabling BCT may incur long backup times.
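A sketch of checking and toggling the feature described above (the tracking file path is illustrative only):

```sql
-- Check whether block change tracking is enabled
SELECT status, filename FROM v$block_change_tracking;

-- Work around a BCT-related issue by disabling the feature
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

-- Re-enable later; incremental backups will scan full datafiles
-- until a new tracking baseline has been established
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/change_tracking.f';
```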
This issue relates to use of Database File System (DBFS), including the DBFS client.
This issue relates to use of Database Resident Connection Pooling (DRCP), which is a
connection pool that can be configured in the Database Server. DRCP can be used to achieve
scalability by sharing connections among multiple-processes. For details of this feature and
how to enable / disable it see Note:1501987.1
This issue relates to the Direct NFS feature of Oracle introduced in 11g. You can configure the
Oracle Database to access NFS version 3 servers directly using Direct NFS. This enables the
storage of data files on a supported NFS system. See the "Database Installation Guide" for your
platform and Oracle version for further details of this feature.
Editions
This issue relates to the use of Oracle Editions which allows editioned objects to be used. See
the documentation for details of Oracle Editions.
Flashback
This issue relates to the use of stored Java in the database. Note that this is separate from
general JDBC and Java issues outside of the database.
Job Queues
This issue relates to the use of job queues or the Database job scheduler options in the
database, including the DBMS_JOB and DBMS_SCHEDULER packages.
LogMiner
This issue relates to the use of LogMiner, particularly ad hoc LogMiner for manually mining
redo. Note that issues in LogMiner can also affect other options which use the LogMiner code,
such as Streams and Logical Standby.
This issue relates to the use of the Oracle GoldenGate (OGG) Integrated Extract product.
This issue is related to the use of Oracle Label Security in the database.
Oracle OLAP
This issue relates to use of Oracle Text which allows text indexing of table content within the
database.
See Note:1087143.1 for details of Oracle Text.
This issue relates specifically to use of the Text filters within Oracle Text. The filters allow
non-ASCII documents to be text indexed.
See Note:1087143.1 for details of Oracle Text, including links to supported document formats
and filters.
This issue is related to Standby databases, either manually configured or part of a Data Guard
configuration.
See Note:1101938.1 for information about Data Guard.
This issue is related to the use of RAC, or for older releases is related to the use of Parallel
Server.
Recycle Bin
Replication
Resource Manager
This issue is related to the use of Row Level Security (RLS) or Fine Grain Auditing (FGA) against
tables / views in the database.
This issue relates to the use of shared servers in the database. Often one can avoid problems
with shared servers by using dedicated connections instead.
Spatial Data
Spatial RTREE
This issue is related to the use of Streams and/or Logical Standby. Such issues can affect any
product / feature built on top of the Streams architecture.
Supplemental Logging
This issue affects, or is related to, the use of the Supplemental Logging. When enabled
Supplemental Logging logs additional data into the redo stream - such data is typically needed for
redo-based applications such as LogMiner, Streams, Logical Standby etc.. An issue which
affects supplemental logging can have downstream effects on these other features.
Transportable Tablespaces
This issue is related to the use of transportable tablespaces. It may be directly related to the
transport operation, or may be some effect seen later in time which relates to transported
data.
Trusted Oracle
This issue relates to use of the workload repository or related reporting features.
XDB
Programming Languages
JDBC
NCOMP
This issue relates to the use of the NCOMP Native Compilation option for PLSQL. Such issues
can typically be avoided by not "ncomp"ing the code.
OCCI
OCI
This issue is related to the use of the Oracle Call Interface on the client side. Such issues can
affect various clients which use OCI to interface to the database. eg: OCI issues can affect pre-
compiler based clients, client SQLPLUS etc..
ODBC Driver
PL/SQL
This issue affects the use of certain packages in the database. The actual packages affected will
typically be listed in the bug description note.
This issue affects the use of PLSQL External procedures as can be executed by the "extproc"
process from PLSQL.
Pro* Precompiler
SQLJ
XML
Product Related
This issue relates to Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
This is a multi-platform file system and storage management technology that extends Oracle
Automatic Storage Management (Oracle ASM) functionality to support customer files
maintained outside of the Oracle Database. See the "Automatic Storage Management
Administrator's Guide" for more information.
This issue relates to the Automatic Diagnostic Repository introduced in Oracle 11g. The
Automatic Diagnostic Repository (ADR) is a file-based repository that stores database
diagnostic data such as trace files, the alert log, and Health Monitor reports.
This issue relates to the Cluster Ready Services / Parallel Server Management / Grid
Infrastructure elements of Oracle as used in a RAC environment.
This issue relates to use of the Oracle Database Configuration Assistant (DBCA). Often one can
use manual steps to perform the same task as DBCA.
Database Replay
This issue relates to use of the Oracle Database Upgrade Assistant (DBUA). Often one can use
manual steps to perform the same task as DBUA.
Datapatch Utility
This issue relates to the Datapatch utility, which is used for automated apply and rollback of
SQL steps of patches. See Note:1585822.1 for information about Datapatch.
Datapump Export/Import
This issue relates to the use of the Datapump Export / Import utilities of the Oracle Database.
ie: The "expdp" and/or "impdp" executables.
Note that this is different to conventional Export / Import which it replaces.
In some cases a workaround to Datapump export / import issues can be to use conventional
export / import.
DBVerify Utility
This issue relates to the DBVERIFY (DBV) utility used to check the consistency of database
files.
See Note:35512.1 for details of the DBVERIFY utility.
Exadata
Export/Import
This issue relates to the use of the conventional Export / Import utilities of the Oracle
Database. ie: The "exp" and/or "imp" executables.
Note that this is different to Datapump Export / Import which replaces conventional export /
import .
In some cases a workaround to conventional export / import issues can be to use Datapump
export / import.
FailSafe
Intelligent Agent
J-Publisher
This issue is related to the use of LDAP features in the database, including the use of Oracle
Internet Directory from the database. This includes the DBMS_LDAP package and the use of
Enterprise Users configured in OID.
OLEDB
This issue relates to the Oracle Counters for Windows Performance Monitor product.
Oracle Lite
Oracle ONS
This issue relates to use of Oracle Universal Installer product. Note that this is different to an
issue where an install / patch operation is not performed correctly.
Portal (MOD_PLSQL)
SQL*Loader
SQL*Plus
Ultra Search
Wallet Manager
This issue relates to use of Wallets in Oracle, and especially to issues with Oracle Wallet
Manager itself.
Workspace Manager
Miscellaneous
This bug# is a special marker bug for a molecule in a Critical Patch Update. Refer to the
relevant Critical Patch Update documentation for details of what security issues the molecule
addresses.
See Note:467881.1 for details of the latest Critical Patch Update.
From 11g the Oracle database records more detailed information about dependencies
between objects. For example if table T has columns C1, C2 and C3 but view V selects only
columns C1 and C2 then adding a fourth column C4 to T or changing the definition of C3 has no
need to invalidate the view. Prior to 11g any change to T would cause V to be invalidated.
From 11g, only changes that affect the parts of T that V depends on will cause V to be
invalidated. Similar fine grained dependency checking applies to other types of objects too,
especially PL/SQL library units, and is known as fast validation.
If you have problems with Fine Grained Dependencies see help notes in Note:1061696.1 under
"11g and Fine Grained Dependency Checking"
This issue relates to number boundary issues such as 2Gb limits for filesize, memory size etc..
NUMA Related
This issue is related to the use of NUMA features within the Oracle database.
Miscellaneous
Description
This is a brief description of the bug itself, including any workaround if known. Sometimes this
will be very short such as "A dump can occur in XXXXX" and sometimes this will include a good
description with an example. Any workaround should be treated with caution. Any hidden
parameters or events mentioned should not be used unless clarified with Oracle Support.
Streams Complete Reference FAQ (Doc ID 752871.1)

Purpose
Architecture / Components of Streams
What are Rules and Rulesets?
Why do we need Streams?
What are the advantages of Streams over Advanced Replication?
What is Spilling?
Can Oracle Streams be used between different hardware platforms and OS versions?
New Streams features in different versions of Oracle
What is the significance of AQ_TM_PROCESSES with respect to Oracle Streams?
Why do we need instantiation in a Streams environment?
What is Streams conflict resolution?
What are Streams tags?
What is Streams flow control?
Streams Heterogeneous Services
What are the different SCNs with respect to Streams?
STREAMS CONFIGURATION / ADMINISTRATION
How to convert an existing Advanced Replication setup to Streams?
How are LOBs queued and propagated? What happens when there is an update/insert operation involving a 1GB sized LOB?
When using Streams, how can I tell which archived logs can be removed from disk? How can I tell which archived logs are needed by Streams capture?
Streams Idle Wait events
STREAMS TROUBLESHOOTING
Streams Healthcheck
Streams Monitor
References
Applies to:
Purpose
What is Streams?
Capture
Implicit capture mines redo log, either by hot mining the online
redo log or, if necessary, by mining archived log files.
After retrieving the data, the capture process formats it into a
Logical Change Record (LCR) and places it in a staging area for
further processing.
The capture process can intelligently filter LCRs based upon
defined rules. Thus, only changes to desired objects are
captured.
Staging
Propagation
Consumption
Default Apply
The default apply engine applies DML changes and DDL changes
represented by implicitly or explicitly captured LCRs. The default
apply engine will detect conflicts where the destination row has
been changed and does not contain the expected values. If a
conflict is detected, then a resolution routine may be invoked.
The apply engine can pass the LCR or a user message to a user-
defined function. This provides the greatest amount of flexibility
in processing an event. A typical application of a user-defined
function would be to reformat the data represented by the LCR
before applying it to a local table, for example, field format,
object name and column name mapping transformations. A
user-defined function could also be used to perform column
subsetting, or to update other objects that may not be present
in the source database.
Explicit Dequeue
User applications can explicitly dequeue LCRs or user messages
from the receiving staging area. This allows a user application to
efficiently access the data in a Streams staging area. Streams can
send notifications to registered PL/SQL or OCI functions, giving
the applications an alternative to polling for new messages. Of
course, applications can still poll, or even wait, for new
subscribed messages in the staging area to become available.
Unidirectional Streams
Bi-directional Streams
Hub-spoke Configuration
Downstream Capture
Real time Downstream Capture
What is Spilling?
The logical change records are staged in a memory buffer
associated with the queue, they are not ordinarily written to
disk.
If messages/LCRs stay in the buffer for a period of time
without being dequeued, or if there is not enough space in
memory to hold all of the captured events, then they are spilled
to disk.
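Spilling can be observed from the buffered-queue statistics; a minimal sketch, assuming the standard V$BUFFERED_QUEUES view is populated on the capture database:

```sql
-- Compare in-memory message counts with messages spilled to disk
SELECT queue_schema, queue_name, num_msgs, spill_msgs
FROM   v$buffered_queues;
```

A consistently high SPILL_MSGS value relative to NUM_MSGS suggests the buffer is undersized or apply/dequeue is lagging; see Note:259609.1 for guidance on preventing excessive spill.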
Every redo entry in the redo log has a tag associated with it. The
datatype of the tag is RAW. By default, when a user or
application generates redo entries, the value of the tag is NULL
for each redo entry, and a NULL tag consumes no space in the
redo entry. The size limit for a tag value is 2000 bytes.
You can control the value of the tags generated in the redo log in
the following ways:
10g
http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/rep_tags.htm#STREP00
11g
http://docs.oracle.com/cd/E11882_01/server.112/e10705/rep_tags.htm#i1007387
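One common way to control tags, sketched here with an illustrative tag value, is to set a session-level tag with the DBMS_STREAMS package so that rules can include or exclude the redo a session generates:

```sql
-- Set a non-NULL tag for redo generated by the current session
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/

-- Read back the session's current tag
SELECT DBMS_STREAMS.GET_TAG() FROM dual;
```

This is typically used to stop an apply process's own changes from being re-captured in bi-directional configurations.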
First SCN
The first SCN is the lowest SCN in the redo log from which a
capture process can capture changes. If you specify a first SCN
during capture process creation, then the database must be able
to access redo data from the SCN specified and higher.
Start SCN
The start SCN is the SCN from which a capture process begins to
capture changes. You can specify a start SCN that is different
than the first SCN during capture process creation, or you can
alter a capture process to set its start SCN. The start SCN does
not need to be modified for normal operation of a capture
process. Typically, you reset the start SCN for a capture process
if point-in-time recovery must be performed on one of the
destination databases that receive changes from the capture
process. In these cases, the capture process can be used to
capture the changes made at the source database after the
point-in-time of the recovery.
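The reset described above can be sketched as follows; the capture name matches the style used earlier in this note and the SCN value is purely illustrative:

```sql
-- Reset the start SCN after point-in-time recovery of a destination database
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'yourcapturename',
    start_scn    => 750338);  -- illustrative SCN of the recovery point
END;
/
```

The start SCN chosen must be greater than or equal to the capture process's first SCN.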
captured SCN
applied SCN
Instantiation SCN
The system change number (SCN) for a table which specifies that
only changes that were committed after the SCN at the source
database are applied by an apply process.
When does Streams read the Oracle on-line Redo Logs? Does the
presence of Stream replication affect log-switching/archiving
mechanism?
Streams Capture reads the changes after they are written to the
redo log. Streams is independent of the log-switching and
archiving mechanism. Streams can seamlessly switch between
reading the archived logs to online logs, and back again, if
necessary.
When using Streams, how can I tell which archived logs can be
removed from disk? How can I tell which archivelogs are needed
by Streams capture?
Note:290143.1 Minimum Archived Log Necessary to Restart 10g
and 11g Streams Capture Process.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14229/ap_strup.htm#i642623
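As a minimal sketch of the check (confirm against Note:290143.1 before actually deleting anything), the required checkpoint SCN of each capture process tells you which logs must be kept:

```sql
-- Logs containing redo at or above this SCN must be retained
SELECT capture_name, required_checkpoint_scn FROM dba_capture;

-- Archived logs whose redo ends below every capture's requirement
-- are no longer needed to restart capture
SELECT name
FROM   v$archived_log
WHERE  next_change# <= (SELECT MIN(required_checkpoint_scn)
                        FROM   dba_capture);
```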
SQL> execute DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

Note that the procedure does not remove the STRMADMIN user,
and it needs to be run separately, once at each database where a
Streams environment resides.
DBMS_STREAMS
DBMS_STREAMS_ADM
DBMS_STREAMS_AUTH
DBMS_STREAMS_MESSAGING
DBMS_STREAMS_TABLESPACE_ADM
DBMS_CAPTURE_ADM
DBMS_APPLY_ADM
DBMS_PROPAGATION_ADM
STREAMS TROUBLESHOOTING
Streams Healthcheck
Streams Monitor
For EBS
It is true that Streams can replicate tables that are part of EBS
11.5, but it is also true that EBS 11.5 cannot be used at the
target on these tables. In other words, EBS tables can be
replicated to a target database for the use of a 3rd party
application, but not for use by EBS 11.5.
References
NOTE:259609.1 - Script to Prevent Excessive Spill of Message
From the Streams Buffer Queue To Disk
NOTE:421176.1 - Usage of RMAN in Streams Environment
NOTE:455797.1 - Streams Transformation
NOTE:422252.1 - How to Skip a Transaction at Apply Site
NOTE:428441.1 - "Warning: Aq_tm_processes Is Set To 0"
Message in Alert Log After Upgrade to 10.2.0.3 or Higher
NOTE:429543.1 - Purpose of Instantiation in Streams
Environment
NOTE:437838.1 - Streams Recommended Patches
NOTE:418755.1 - Master Note for Streams Recommended
Configuration
NOTE:230901.1 - What are Streams Queue Buffers?
NOTE:550955.1 - Instantiating Objects Using Original
Export/Import and Data Pump Export/Import - Example
NOTE:336266.1 - 10gR1 Streams New Features
NOTE:382826.1 - Understanding Rules and Rulesets
NOTE:392809.1 - 11g R1 Streams New Features
NOTE:471845.1 - Streams Bi-Directional Setup
NOTE:301431.1 - How To Setup One-Way SCHEMA Level
Streams Replication
NOTE:304268.1 - 9i Best Practices For Streams RAC Setup
NOTE:305662.1 - Master Note for AQ Queue Monitor Process
(QMON)
NOTE:335516.1 - Master Note for Streams Performance
Recommendations
NOTE:336265.1 - Best Practices For Managing Backups In A
Streams Environment
NOTE:1264598.1 - Master Note for Streams Downstream
Capture - 10g and 11g [Video]
NOTE:224255.1 - 9i: How To Setup Oracle streams replication.
NOTE:249443.1 - Migrate 9i Advanced Replication to 10g
Streams
NOTE:459922.1 - How to setup Database Level Streams
Replication
NOTE:746247.1 - Troubleshooting Streams Capture when status
is Paused For Flow Control
NOTE:753158.1 - How To Configure an Oracle Streams Real-Time
Downstream Capture Environment
NOTE:551106.1 - Instantiating Objects in Streams Using
Transportable Tablespace or RMAN
NOTE:472440.1 - How to Purge Apply Spilled Transactions in
Streams Environment.
NOTE:550593.1 - Minimize Performance Impact of Batch
Processing in Streams
NOTE:733853.1 - Database Upgrade From 9.2 To 10.2 Very Slow
For Streams Enabled Database
NOTE:735976.1 - All Replication Configuration Views For
Streams, AQ, CDC and Advanced Replication
NOTE:230049.1 - Streams Conflict Resolution
NOTE:789445.1 - Master Note for Streams Setup and
Administration
NOTE:855964.1 - How to do SQL Trace for the Streams Processes
NOTE:733691.1 - How To Setup Schema Level Streams
Replication with a Downstream Capture Process with Implicit
Log Assignment
NOTE:265201.1 - Master Note for Troubleshooting Streams
Apply Errors ORA-1403, ORA-26787 or ORA-26786,Conflict
Resolution
NOTE:729860.1 - Troubleshooting Queries in Streams
NOTE:268994.1 - How to Start and Stop Apply Process in
Streams
NOTE:273674.1 - Streams Configuration Report and Health
Check Script
NOTE:274456.1 - Downstream Capture
NOTE:275323.1 - Minimum Archive Log Necessary To Restart
Capture Process - 9iR2
NOTE:276648.1 - Remove Streams Procedure for 9.2.0.X
NOTE:461279.1 - Streams Idle Wait Events in 10g
NOTE:471695.1 - Required Steps to Recreate a Capture Process.
NOTE:401275.1 - Handling Apply Insert And Delete Conflicts In A
Streams Environment - Error Handlers
NOTE:290143.1 - Minimum Archived Log Necessary to Restart
10g and 11g Streams Capture Process
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:297273.1 - 9i Streams Recommended Configuration
NOTE:471713.1 - Different States of Capture & Apply Process
NOTE:313279.1 - Master Note for Troubleshooting Streams
capture 'WAITING For REDO' or INITIALIZING
NOTE:413353.1 - 10.2 Best Practices For Streams in RAC
environment
Related Products: Oracle Database Products > Oracle Database Suite > Oracle Database > Oracle Database - Enterprise Edition > Streams (Replication and Messaging)