ASSUMPTIONS:
Operating System : Oracle Enterprise Linux 6
This configuration option guides other configuration options, such as how redo will be transported or what the primary database will do when a standby or the network fails.
You can choose any one of the protection modes below.
1) Maximum Performance
So, if your standby database is disconnected from the primary database (because of a network, system, or standby database failure), the primary database is not affected at all, which means redo generation at the primary will not be paused.
Also, in this mode it doesn't matter how far apart your primary and standby databases are.
The Maximum Performance protection mode is useful for applications that can tolerate some data loss in the event of a loss of the primary database.
SRL files must be the same size as your online redo log (ORL) files.
You need one more SRL group than the number of ORL groups.
SRL files need to be created on your standby as well as on your primary, in preparation for switchover.
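To see the ORL group count and size that the SRLs must match, a quick sanity query against the standard v$log view can be used (a sketch; run it on the primary):

```sql
-- Number and size of current ORL groups; create one more SRL group
-- than this count, each with the same size.
SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$log;
```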
2) Maximum Availability
Here Data Guard must NOT allow any data loss, without impacting the availability of the primary database. So the standby will always remain in sync with the primary database, and in a failover situation a synchronized standby database will result in zero data loss.
So if our applications cannot tolerate data loss in the event of a loss of the primary database, and we also don't want any downtime due to standby and/or network failures, then the Maximum Availability protection mode is useful.
--------------------------------------------------------------------------------
CAVEAT:
There are situations where you can lose data in Maximum Availability mode:
If the network or the standby went down first and the standby did not have a chance to resynchronize before the failover, then the redo generated at the primary database after that point will be lost when you fail over, which means data loss.
--------------------------------------------------------------------------------
Requirements for Maximum Availability mode:
The difference between the SYNC and ASYNC redo transport methods is that in SYNC mode the LGWR process at the primary database will wait (maximum wait = NET_TIMEOUT value, default 30 seconds) for an acknowledgement from the standby database that the redo sent to the standby was received and recorded.
Given the way the SYNC redo transport method works, our success in configuring Maximum Availability mode depends on some important factors:
A) network bandwidth
B) whether the network is tuned
C) distance between primary and standby
D) redo generation rate at the primary
3) Maximum Protection
Here Data Guard must NOT allow any data loss, without the caveat discussed above for Maximum Availability mode, and if required the primary database can be taken down to protect it from any data-loss situation.
So you see the requirements for Maximum Protection and Maximum Availability mode are the same.
Remember that a minimum of one synchronized standby database destination is required for the primary database to run in Maximum Protection mode, while in both Maximum Availability and Maximum Performance modes the primary database can run even if no standby database is present.
In the SYNC method in Maximum Protection mode, the LGWR process at the primary database will wait (maximum wait = NET_TIMEOUT value, default 30 seconds) for an acknowledgement from the standby database that the redo sent to the standby was received and recorded. If no response is received within the NET_TIMEOUT value, the standby database is marked as failed and LGWR continues committing transactions, ignoring the failed standby database, as long as at least one synchronized standby database still meets the requirements of Maximum Protection. If the unreachable standby is the last remaining synchronized standby database, the primary database will abort. Before aborting, LGWR will stall the primary database and try to reconnect to the standby about 20 times; if all attempts fail, it will abort the primary database.
In Maximum Availability mode, the Data Guard processes will never abort the primary database.
So basically, the best way to use Maximum Protection mode is to create at least two standby databases that meet its requirements, so that when one of them becomes unreachable, the primary database will continue generating redo without a pause of more than NET_TIMEOUT seconds. In the meantime you can correct the errors on the failed standby database.
In our setup we will first configure Data Guard to run in Maximum Performance mode and later convert it to Maximum Availability and Maximum Protection for testing purposes.
Now that you have chosen the protection mode for your Data Guard setup, you next need to choose how the redo generated at the primary database will be transferred to the standby database for applying.
For Maximum Performance mode (ASYNC and NOAFFIRM), the LOG_ARCHIVE_DEST_n parameter will look like below:
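The original example is missing here; for our prod/stby setup, an ASYNC destination would look something like this (a sketch; attribute values are illustrative):

```sql
LOG_ARCHIVE_DEST_2='SERVICE=stby ASYNC NOAFFIRM
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
```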
And for Maximum Availability or Maximum Protection mode (SYNC and AFFIRM), the parameter will look like:
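Again as a sketch with illustrative values, the same destination in SYNC mode would look something like:

```sql
LOG_ARCHIVE_DEST_2='SERVICE=stby SYNC AFFIRM NET_TIMEOUT=30
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
```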
Where:
It is very important to configure and tune the network so that Data Guard can work successfully without long waits for redo to get transferred. Check "PART 2 : CONFIGURING AND TUNING THE NETWORK".
The apply method is the type of standby database you choose to set up: a physical standby database using Redo Apply or a logical standby database using SQL Apply.
2) hardware resources
Switchover and failover are key terms that you should keep in mind when creating the standby database.
2.1) BANDWIDTH
2.2) ORACLE NET SERVICES SESSION DATA UNIT (SDU) SIZE
2.3) TCP TUNING
2.4) NETWORK DEVICE QUEUE SIZES
2.5) SRL FILES' I/O TUNING
2.1) BANDWIDTH
Bandwidth is the "capacity" of our network: the maximum amount of data bits it can carry at the same time. It is NOT the speed of our network, so a very high bandwidth network does not necessarily mean the fastest network.
So to know how much bandwidth we will require we need to know the redo generation
rate of our database.
i) Check the AWR reports generated during steady-state and peak times.
ii) Look at the primary database alert log and calculate the time between log switches during steady-state and peak periods. Then add up how many MB of archive logs were generated for those log switches and divide this total MB by the total time to get the average megabytes of redo generated per second.
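As a worked example with hypothetical numbers (suppose the alert log shows 4 log switches during a 10-minute peak window, and the corresponding archive logs total 4 x 512 MB):

```shell
# Hypothetical numbers: 4 log switches, 512 MB archive log each,
# over a 600-second peak period.
total_mb=$((4 * 512))
peak_secs=600
redo_rate=$(awk -v mb="$total_mb" -v s="$peak_secs" 'BEGIN { printf "%.1f", mb/s }')
echo "Peak redo rate: ${redo_rate} MB/s"   # -> Peak redo rate: 3.4 MB/s
```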
NOTE: For a RAC system you need to add up the MB from all instances to arrive at the final value.
Remember that you need to size the network for peak load, and you will always need "more" bandwidth than your peak redo rate. That extra bandwidth can be 20% to 50%, depending on the situation.
The bandwidth required is always "more" than the redo generation rate because we also have to take into account the latency, or round-trip time (RTT), of the network. Latency can be caused by various hardware devices at each end and by the size of the data packets going through.
traceroute is a good tool to track network latency, but it is not a true measure, as there can be various other factors that affect our ability to ship redo.
2.2) ORACLE NET SERVICES SESSION DATA UNIT (SDU) SIZE
Oracle Net sends data to the network layer in units called session data units (SDUs). The default SDU size is 8192 bytes, which is not very efficient for a Data Guard setup. Since large amounts of data are usually being transmitted to the standby, increasing the size of the SDU buffer can improve performance and network utilization.
You can configure a high SDU size globally within the sqlnet.ora file:
DEFAULT_SDU_SIZE=32767
If you want this high SDU value only for specific Data Guard related connections, you can set it in the tnsnames.ora and listener.ora files on the primary database.
tnsnames.ora
stby=
(DESCRIPTION=
(SDU=32767)
(ADDRESS=(PROTOCOL=tcp)(HOST=localhost.domain)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME= stby))
)
This will cause Data Guard to request 32,767 bytes for the session data unit
whenever it makes a connection to the standby.
listener.ora
SID_LIST_prod=
(SID_LIST=
(SID_DESC=
(SDU=32767)
(GLOBAL_DBNAME=prod)
(SID_NAME=prod)
(ORACLE_HOME=/u01/oracle/DB11G/product/11.2.0/dbhome_1)))
Here prod is our primary database.
Now the incoming connections from the standby database will also get the maximum
SDU size.
Also, we will make similar changes in our standby database's tnsnames.ora and listener.ora so that the standby system uses the same SDU size.
2.3) TCP TUNING
We need to prepare our TCP network layer to handle the large amounts of redo we will be sending during Data Guard processing.
These parameters set the maximum amount of memory on the system that a single TCP connection can use:
net.core.wmem_max = 1048576
net.core.rmem_max = 4194304
These maximums may prove sufficient, but if necessary you can increase these limits.
These are the values that a TCP connection will use for its send and receive buffers.
sysctl -a can show you the current values.
It is generally seen that you need about 3 times the bandwidth-delay product (BDP) as your TCP socket buffer size.
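For example, for a hypothetical 100 Mbit/s link with a 10 ms RTT, the BDP and the suggested socket buffer size work out as:

```shell
# Hypothetical link: 100 Mbit/s bandwidth with a 10 ms round-trip time.
# BDP (bytes) = bandwidth (bits/s) * RTT (seconds) / 8
bdp=$(awk 'BEGIN { printf "%d", (100 * 1000 * 1000) * 0.010 / 8 }')
sock_buf=$((bdp * 3))   # rule of thumb from the text: 3 x BDP
echo "BDP = ${bdp} bytes; suggested socket buffer = ${sock_buf} bytes"
# -> BDP = 125000 bytes; suggested socket buffer = 375000 bytes
```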
So, as we did for the SDU setup, we need to make these changes at the Oracle Net Services level. Put these values in the tnsnames.ora and listener.ora files:
(SEND_BUF_SIZE=<value of BDP*3>)
(RECV_BUF_SIZE=<value of BDP*3>)
TCP local queues limit the number of buffers or packets that may be queued for transmit, and the number of receive buffers that are available for receiving packets.
When you have a high rate of small packets, you should tune both the transmit and receive queues.
The transmit queue size is configured with the network interface option txqueuelen,
and the network receive queue size is configured with the kernel parameter
netdev_max_backlog.
transmit queue:
receive queue:
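The exact commands are not shown in the original; on Linux they would look something like this (hypothetical values; both require root, and the interface name eth0 is an assumption):

```
# transmit queue: raise txqueuelen on the interface carrying redo traffic
ifconfig eth0 txqueuelen 10000
# receive queue: raise the kernel backlog for incoming packets
sysctl -w net.core.netdev_max_backlog=20000
```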
If you change any of the queue values, you should change them in both primary and
standby database servers.
2.5) SRL FILES' I/O TUNING
The RFS process at the standby database writes the incoming redo into the SRL files so that it is persistent on disk for recovery.
Do not multiplex the SRLs, as there is really no need and it will only increase the time lag.
In Maximum Performance mode, the Oracle database is configured for asynchronous I/O. We must also properly configure the operating system, host bus adapter (HBA) driver, and storage array.
--------------------------------------------------------------------------------
In Database 11g, the LNS process reads redo directly from the log buffer; if the redo to be sent is not found in the log buffer, the LNS process will go to the ORL to retrieve it.
And on a bandwidth-strapped network, LNS sometimes has to go all the way down to the archived logs.
Reading from the log buffer is much faster than reading from disk (the ORL), so the log buffer should be sized such that LNS is always able to find the redo it needs to send within the log buffer. The log buffer hit ratio is tracked in the view X$LOGBUF_READHIST. A low hit ratio indicates that LNS is frequently reading from the ORL instead of the log buffer.
The default value for the log buffer is generally not that high (512KB). Increasing the log buffer improves the chances that LNS can read the redo it needs from memory.
Another important 11g feature is redo transport compression, which is very beneficial when you have low bandwidth. It can reduce both bandwidth consumption and redo transfer time.
There are some points to consider before you actually use the compression feature:
> Compression takes CPU, so you should have sufficient CPU resources.
> Some kinds of data, like images, cannot be compressed much, as they are already compressed.
--------------------------------------------------------------------------------
Grid Control uses the Data Guard Broker to set up and manage the configuration.
In this guide we will use the conventional, tried and tested SQL*Plus method.
$ ORACLE_HOME=/backups/oracle/stby/product/11.2.0/dbhome_1
$ ORACLE_SID=stby
Use Oracle Database 11.2.0.4 runInstaller to start Oracle Universal Installer and install the Oracle database software ONLY.
a) DB_UNIQUE_NAME
This parameter defines the unique name for a database. If the parameter is not defined, it defaults to DB_NAME.
b) LOG_ARCHIVE_CONFIG
This defines the list of valid DB_UNIQUE_NAME values for your Data Guard configuration.
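For our prod/stby pair this could be set as follows (a sketch; adjust the DG_CONFIG list to your own unique names):

```sql
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(prod,stby)' SCOPE=BOTH;
```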
c) LOG_ARCHIVE_MAX_PROCESSES
On the primary, one archive process is dedicated to servicing only the online redo log files, while the others are all allowed to perform both functions.
Current settings in our system:
d) DB_CREATE_FILE_DEST
you will need to define it at the standby database if you are using ASM
e) LOG_ARCHIVE_DEST_n
This is the main parameter for Data Guard redo transport and we will understand it
well before we proceed further with our installation.
Remember that local archiving defaults to the flash recovery area, so you no longer need to define a local destination. Below are the settings for log_archive_dest_1.
1) SERVICE
2) SYNC or ASYNC
For Maximum Availability or Maximum Protection you must use SYNC, and for Maximum Performance you use ASYNC. ASYNC is the default mode.
In our setup we will use Maximum Availability mode, so we will be using SYNC.
3) NET_TIMEOUT
This is the number of seconds that the LGWR process will wait for an LNS process to respond before abandoning the standby as failed. The default is 30 seconds.
4) REOPEN
It controls the wait time before Data Guard will allow the primary database to attempt a reconnection to a failed standby database. Its default value is 300 seconds, but you can reduce it to 30 seconds depending on your requirements.
5) DB_UNIQUE_NAME
This attribute specifies the unique name of the destination database; if it is not defined, it defaults to the DB_NAME.
You must also set the LOG_ARCHIVE_CONFIG parameter before using this attribute in your LOG_ARCHIVE_DEST_n parameter.
6)VALID_FOR
> PRIMARY_ROLE Valid only when the database is running in the primary role
> STANDBY_ROLE Valid only when the database is running in the standby role
> ALL_ROLES Valid regardless of database role
The LOG_ARCHIVE_DEST_n destination will be used only if the answer to both of its VALID_FOR arguments is TRUE.
Also, LOG_ARCHIVE_DEST_n has 30 available destinations, so you can have up to thirty standby databases.
So for our system we will set the value below in our primary database:
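The original value is not shown here; given the SYNC/Maximum Availability choice discussed above, it would look something like this (illustrative attribute values):

```sql
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stby SYNC AFFIRM NET_TIMEOUT=30
  REOPEN=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
  SCOPE=BOTH;
```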
1) AFFIRM/NOAFFIRM
2) COMPRESSION
You can turn on compression using the Advanced Compression option for a standby destination. It can compress archived logs sent for gap resolution, the current redo stream, or both.
For the redo stream, compression applies only during transport, not on disk.
3) LOCATION
Earlier this attribute was required to specify a location where the archive
processes could store the archive log files. With the flash recovery area and local
archiving defaults, you no longer need to define a destination with this attribute.
4) MAX_FAILURE
It defines how many times, at log switch time, the LGWR will attempt to reconnect to a failed standby database; if it is still unsuccessful after that many attempts, it will stop trying.
f) DB_FILE_NAME_CONVERT
This parameter is necessary if your directory structures differ between the primary and standby databases. Until the standby database becomes a primary database, this translation occurs only at run time, but once a switchover or failover to the standby has occurred, these values are hard-coded into the control file and the data file headers.
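As an illustration only (the primary's datafile location is not shown in this guide, so the first path here is hypothetical):

```
db_file_name_convert='/u01/oracle/DB11G/oradata/prod','/backups/oracle/stby/oradata/stby'
```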
g) LOG_FILE_NAME_CONVERT
log_file_name_convert='/u01/oracle/DB11G/fast_recovery_area/prod','/backups/oracle/stby/fast_recovery_area/stby'
We will use this parameter when we run the RMAN duplicate command.
h) FAL_SERVER
FAL means Fetch Archive Log. This is also known as reactive gap resolution: the process by which a physical standby can fetch a missing archive log file from one of the databases (primary or standby) in the Data Guard configuration when it detects a gap.
You define the FAL_SERVER parameter as a list of TNS names that exist on the standby server and point to the primary and any of the standby databases.
fal_server='prod'
We will update this parameter immediately after we are done with the standby database creation.
i) FAL_CLIENT
It is the TNS name of the gap-requesting standby database, which the receiver of the gap request (the FAL_SERVER) needs so that the archive process on the FAL server database can connect back to the requestor.
fal_client='stby'
'stby' must be defined in the FAL server's tnsnames.ora file so that Data Guard can make a connection back to the standby database. Since we already put the 'stby' entry in the primary's tnsnames.ora during the redo transport setup, we are good.
We will update this parameter immediately after we are done with the standby database creation.
j) STANDBY_FILE_MANAGEMENT
If you set this parameter to 'AUTO', then whenever data files are added to or dropped from the primary database, the corresponding changes are automatically made on the standby database. Data Guard will execute the data definition language (DDL) on the standby to create the data file.
By default, this parameter is set to 'MANUAL', but we will use 'AUTO' mode.
For our standby system it will be:
standby_file_management='AUTO'
We will update this parameter immediately after we are done with the standby database creation.
5.2.1 Setup Listener and Create a static listener entry for the standby
$ cd $ORACLE_HOME/bin
$ netca
SID_LIST_stby =
(SID_LIST =
(SID_DESC =
(SDU=32767)
(GLOBAL_DBNAME = stby)
(ORACLE_HOME = /backups/oracle/stby/product/11.2.0/dbhome_2)
(SID_NAME = stby)
)
)
Make sure you reload the listener after you put this in the listener file:
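The reload command itself is not shown; assuming the standby listener is named stby (matching the SID_LIST_stby entry above), it would be:

```
$ lsnrctl reload stby
```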
5.2.2 Create tnsnames.ora file and add both standby and target entries
prod =
(DESCRIPTION =
(SDU=32767)
(ADDRESS = (PROTOCOL = TCP)(HOST = oraclelinux6.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prod)
)
)
stby =
(DESCRIPTION =
(SDU=32767)
(ADDRESS = (PROTOCOL = TCP)(HOST = oraclelinux6.localdomain)(PORT = 1529))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = stby)
)
)
As of now just put only basic parameters in it. During the standby database
creation, RMAN will replace this file.
$ cd $ORACLE_HOME/dbs
$ cat initstby.ora
db_name='stby'
local_listener='stby'
compatible='11.2.0.4.0'
sga_max_size=800m
sga_target=800m
You could put only db_name in this file and that would be sufficient.
$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwstby password=prodesh
You must start the standby database in NOMOUNT mode so RMAN process can attach to
the instance.
$ echo $ORACLE_SID
stby
$ echo $ORACLE_HOME
/backups/oracle/stby/product/11.2.0/dbhome_2
$ sqlplus / as sysdba
SQL> STARTUP NOMOUNT;
5.2.6 Check the Listener services
At this point, check whether the standby listener is registering correctly with the standby database.
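The command used is not shown; something like the following (assuming the standby listener is named stby) shows whether the stby service has registered:

```
$ lsnrctl services stby
```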
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oraclelinux6.localdomain)
(PORT=1529)))
By using the new RMAN functionality of Oracle Database 11g, we save some time, as we will not be taking an RMAN backup of the primary database manually. This method suits us because our database is small. If you have a really big database with TBs of data files and a heavy network load, then the conventional RMAN method of backing up at the primary and restoring at the standby is best for you.
As we will be using SRL files, if we create them on the primary database before we create the standby, RMAN will create them for us on the standby database during standby database creation.
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/u01/oracle/DB11G/fast_recovery_area/prod/onlinelog/SRL1.log' SIZE 100M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/u01/oracle/DB11G/fast_recovery_area/prod/onlinelog/SRL2.log' SIZE 100M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/u01/oracle/DB11G/fast_recovery_area/prod/onlinelog/SRL3.log' SIZE 100M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/u01/oracle/DB11G/fast_recovery_area/prod/onlinelog/SRL4.log' SIZE 100M;
5.3.2 Add standby database entry to the tnsnames.ora file
Our tnsnames.ora file at the Primary database looks like below with both primary &
standby entries in it.
prod =
(DESCRIPTION =
(SDU=32767)
(ADDRESS = (PROTOCOL = TCP)(HOST = oraclelinux6.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prod)
)
)
stby =
(DESCRIPTION =
(SDU=32767)
(ADDRESS = (PROTOCOL = TCP)(HOST = oraclelinux6.localdomain)(PORT = 1529))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = stby)
)
)
5.3.3 Enable logging
This is the recommended Data Guard setting, as it ensures that all transactions are logged and can be recovered through media recovery or Redo Apply:
SQL> ALTER DATABASE FORCE LOGGING;
Database altered.
We are using the Active Duplicate feature of the 11g database. Oracle 11g introduced active database duplication, with which we can create a clone of the target database without taking any manual backups. Active database duplication copies the target database over the network to the destination and then creates the duplicate database. The only difference is that you don't need pre-existing RMAN backups and copies. The duplication work is performed by an auxiliary channel.
$ rman
RMAN> CONNECT TARGET sys/prodesh@prod;
RMAN> CONNECT AUXILIARY sys/prodesh@stby;
RMAN> run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate auxiliary channel stby1 type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'prod','stby'
set 'db_unique_name'='stby'
set control_files='/backups/oracle/stby/oradata/control.ctl'
set audit_file_dest='/backups/oracle/stby/admin/stby/adump'
set db_create_file_dest='/backups/oracle/stby/oradata'
set db_create_online_log_dest_1='/backups/oracle/stby/fast_recovery_area/stby/onlinelog'
set db_create_online_log_dest_2='/backups/oracle/stby/fast_recovery_area/stby/onlinelog'
set db_recovery_file_dest='/backups/oracle/stby/fast_recovery_area/stby'
set DB_RECOVERY_FILE_DEST_SIZE='4G'
nofilenamecheck;
}
$ rman
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
RMAN> @connect.rman
RMAN> **end-of-file**
RMAN> @stby.rman
RMAN> run {
2> allocate channel c1 type disk;
3> allocate channel c2 type disk;
4> allocate channel c3 type disk;
5> allocate auxiliary channel stby1 type disk;
6> duplicate target database for standby from active database
7> spfile
8> parameter_value_convert 'prod','stby'
9> set 'db_unique_name'='stby'
10> set control_files='/backups/oracle/stby/oradata/control.ctl'
11> set db_create_file_dest='/backups/oracle/stby/oradata'
12> set audit_file_dest='/backups/oracle/stby/admin/stby/adump'
13> set db_create_online_log_dest_1='/backups/oracle/stby/fast_recovery_area/stby/onlinelog'
14> set db_create_online_log_dest_2='/backups/oracle/stby/fast_recovery_area/stby/onlinelog'
15> set db_recovery_file_dest='/backups/oracle/stby/fast_recovery_area/stby'
16> set DB_RECOVERY_FILE_DEST_SIZE='4G'
17> nofilenamecheck;
18> }
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=19 device type=DISK
allocated channel: c2
channel c2: SID=133 device type=DISK
allocated channel: c3
channel c3: SID=20 device type=DISK
IN STANDBY DATABASE
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE
DISCONNECT;
Database altered.
IN PRIMARY DATABASE
5.6 VERIFY
NAME
------------------------------------------------------------------------------------
/backups/oracle/stby/oradata/stby/datafile/o1_mf_system_4op30rp8_.dbf
/backups/oracle/stby/oradata/stby/datafile/o1_mf_sysaux_4pp30rp8_.dbf
/backups/oracle/stby/oradata/stby/datafile/o1_mf_undotbs1_4tp30rv8_.dbf
/backups/oracle/stby/oradata/stby/datafile/o1_mf_users_4vp30s0e_.dbf
/backups/oracle/stby/oradata/stby/datafile/o1_mf_example_4rp30rtp_.dbf
/backups/oracle/stby/oradata/stby/datafile/o1_mf_users_4up30s06_.dbf
/backups/oracle/stby/oradata/stby/datafile/o1_mf_system_4qp30rp8_.dbf
/backups/oracle/stby/oradata/stby/datafile/o1_mf_undotbs1_4sp30rue_.dbf
8 rows selected.
TYPE    MEMBER
------- ----------------------------------------------------------------------------------------------
ONLINE  /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_4_9l235hby_.log
ONLINE  /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_5_9l235xp6_.log
ONLINE  /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_5_9l235y1z_.log
ONLINE  /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_4_9l235hhv_.log
ONLINE  /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_6_9l236cm5_.log
ONLINE  /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_6_9l236ctk_.log
STANDBY /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_1_9l234x3d_.log
STANDBY /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_2_9l2352p4_.log
STANDBY /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_3_9l23594v_.log
STANDBY /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_7_9l236s5b_.log
STANDBY /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_1_9l234x8q_.log
STANDBY /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_2_9l2352rr_.log
STANDBY /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_3_9l235980_.log
STANDBY /backups/oracle/stby/fast_recovery_area/stby/onlinelog/stby/onlinelog/o1_mf_7_9l236s9w_.log
14 rows selected.
Also, to test the log shipment, execute the following command in the primary database:
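The command itself is cut off in the original; the usual way to test redo shipment is to force a log switch on the primary and then check that the new sequence arrived on the standby, for example:

```sql
-- On the primary: force a log switch so a new archived log is shipped
ALTER SYSTEM SWITCH LOGFILE;
-- On the standby: confirm the new sequence was received and applied
SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;
```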