TimeFinder from EMC

EMC has three instant copy products: TimeFinder/Mirror, TimeFinder/Clone and TimeFinder/Snap. The key differences between the products are summarised in the table below.
Copy methodology
  TimeFinder/Mirror: Full mirror. The mirror is always in synch with the source until the split command is issued.
  TimeFinder/Clone:  Point-in-time copy, based on pointers until the copy process is complete.
  TimeFinder/Snap:   Point-in-time copy, always based on pointers.

Copy space required
  TimeFinder/Mirror: Full disk.
  TimeFinder/Clone:  Full disk.
  TimeFinder/Snap:   Partial space, depending on the amount of data updates. Typically 30% of the source.

Availability of copy data
  TimeFinder/Mirror: BCV cannot be split off until the copy process is complete.
  TimeFinder/Clone:  Copy is available as soon as the PiT pointers are established.
  TimeFinder/Snap:   Copy is available as soon as the PiT pointers are established.

Performance impact
  TimeFinder/Mirror: Hardly any.
  TimeFinder/Clone:  Initial copy is a background process. Some performance impact if data is updated while the copy is in progress.
  TimeFinder/Snap:   All initial updates require extra processing to move data.

DR capability
  TimeFinder/Mirror: Full DR once the copy is complete.
  TimeFinder/Clone:  Full DR once the copy is complete, but not if the Copy-on-access setting is enabled.
  TimeFinder/Snap:   Minimal DR, as the full disk is reliant on pointers to the original disk.

Accessibility of copy
  TimeFinder/Mirror: Not accessible until the copy is complete and the BCV is split from the standard volume.
  TimeFinder/Clone:  Immediately accessible.
  TimeFinder/Snap:   Immediately accessible.

Protection
  TimeFinder/Mirror: The BCV cannot be RAID5.
  TimeFinder/Clone:  The clone copy can be RAID5.
  TimeFinder/Snap:   The snap copy can be RAID5.
All three products can be managed by EMC Replication Manager if used for Open Systems data. All three can also use Copy Assist, a product which ensures a consistent point-in-time copy over multiple disks by temporarily freezing I/Os until the copy is complete.
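
At the command line, this kind of consistency is requested with the -consistent flag on the activate command. The two examples below are taken verbatim from the TimeFinder/Snap and TimeFinder/Clone sections later in these notes, using the device groups defined there:

symsnap -g SNAPDB activate -consistent
symclone -g CLONEDB -tgt activate -consistent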

TimeFinder/Mirror
TimeFinder/Mirror uses dynamic mirror volumes called Business Continuity Volumes, or BCVs. The TimeFinder
terminology is Standard Volume (SV) for the primary disk, and BCV for the copy disk(s). A BCV is a mirrored
copy of an SV, and has its own host address. You can have up to 16 copies, 4 of which can be actively copying
data in the background. A BCV cannot be accessed while it is in association with a standard volume, but if it is
split from the SV, then it can be accessed for backup, testing or whatever.
The main TimeFinder/Mirror commands are establish, split, restore and query, described below.

To set up a BCV you must first create a device group, add an SV to it, then associate a BCV device with the SV. The BCV must be offline, and effectively becomes another mirror of the SV, so the BCV data is synchronised with the Standard Volume. The commands below create a default type group called group1, add Standard Volume 01f to it, associate BCV device 110 with it, then start to create the BCV data. As this is the first time the BCV has been created, a full establish is required.

symdg create group1
symld -g group1 add dev 01f
symbcv -g group1 associate dev 110
symmir -g group1 -full establish

To remove an association between the SV and the BCV you issue the split command. The point-in-time of the copy
is the time the Split is issued. The SV is unaffected by a split. TimeFinder keeps a record of changed tracks after a
split, to speed up a refresh of the BCV. The command below will split off a BCV once the copy operation is
complete.

symmir -g group1 split

The establish command is used to re-synchronise a BCV which was formerly established, then split. It copies over
tracks which have been changed on the Standard Volume, and also replaces tracks which were changed on the
BCV with tracks from the SV, to get the BCV synchronised again.

symmir -g group1 establish

It is also possible to restore the SV from data on a split BCV. This will restore the Standard Volume back to the state it was in at the time the split command was issued, provided the BCV has not been updated since. The first command below restores only the changed tracks; the second does a full restore of all tracks.

symmir -g group1 restore
symmir -g group1 -full restore

If you want to report on the status of your BCV devices, use the following command.

symmir -g group1 query

For TimeFinder/Mirror, the point-in-time happens when the mirror split command is issued.

TimeFinder/Snap
TimeFinder/Snap works by creating a new image of a LUN that consists of pointers referencing the data on the original LUN. If any updates are made to the source, the original data is copied to the snap before the source is overwritten. However, the snap does not reserve a full disk's worth of space to cater for updates. Instead, you allocate a 'Save Device', a common pool that holds original data which needs to be copied when updates are made to the primary.
Unlike the other TimeFinder products, TimeFinder/Snap is designed for applications that only need temporary access to production data, maybe for reporting or testing. It is not designed to be, nor is it suitable for, disaster recovery, as it is completely dependent on the existence of the source data.
The Snap utility can normally create up to 16 independent copies of a LUN, where each copy appears to be frozen at the time its Snap command was issued. You can increase this to 128 copies by issuing the command
SET SYMCLI_MULTI_VIRTUAL_SNAPSIZE=ENABLED
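
SET is the Windows shell syntax; on a Unix or Linux management host the same SYMCLI environment variable (name as given above) would be set with export:

export SYMCLI_MULTI_VIRTUAL_SNAPSIZE=ENABLED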

The starting point for defining a snap copy is to set up a device group that contains all the volumes you want snapped. The examples below refer to a device group called SNAPDB. Once you have your device group, you need to start the session between a standard volume and a snap copy with a create command. The device numbers are for illustration only; use your own. addall means 'add all the ungrouped devices in the specified range', and -vdev means the command applies only to virtual devices.

symdg create SNAPDB
symld -g SNAPDB addall -range 00:09
symld -g SNAPDB addall dev -range 37:3E -vdev
symsnap -g SNAPDB create

Activate starts the copy-on-write process that preserves the snap copy.

symsnap -g SNAPDB activate -consistent

If you want to 'refresh' your snap copy to make it look like a current copy of the source group, you need to
terminate the existing session, then re-establish the snap. This starts a new point in time copy using a differential
update.

symsnap -g SNAPDB terminate


symsnap -g SNAPDB create
symsnap -g SNAPDB activate -consistent

Restore is used to recover a volume back to its point-in-time state. This can be to the original volume or to a new volume.

symsnap -g SNAPDB restore

TimeFinder/Clone
TimeFinder/Clone volumes are called clone copies, and can be BCVs. Clone copies can be in RAID5 format and do not require that a previous mirror has been established. You can have up to 8 concurrent clone copies. Clone data is immediately accessible from a host, unlike standard BCVs, where you need to wait for the copy to complete.
TimeFinder/Clone has two activate modes: -copy and -nocopy. With the -copy mode you will eventually have a complete copy of the original disk at the clone, as it was at the point-in-time the activate command was issued. With the -nocopy mode, only updated tracks are copied, and uncopied data is maintained at the clone with pointers. Either option requires that the clone be the same size as the source. In open systems, -nocopy is the default, and as not all the data is copied, the clone cannot be used as a DR position. The create command has a -precopy option that starts the full copy process before the activate, speeding up the creation of a full copy. In a mainframe setup, the SNAP command automatically starts a full copy process.
The main TimeFinder/Clone commands are create, activate, recreate, restore, split and terminate, described below.

Create initiates a session between a standard volume and a clone copy. You can initiate sessions for an entire
device group, between two devices in a group, or between two ungrouped devices. The first command below
assumes a device group called CLONEDB has already been defined and creates clone sessions to target devices
within the group. The second command will initiate a session between two specific devices. The third command uses the -precopy option, so the copy process begins as soon as the clone relationship is established, and -differential, which allows the clone to be refreshed at a later date.

symclone -g CLONEDB -tgt create
symclone -g CLONEDB create DEV001 sym ld DEV002
symclone -g CLONEDB create DEV001 sym ld DEV002 -precopy -differential

Activate makes the clone available for read/write and, with the -copy option, starts the data copy process from standard volume to clone. The default action is -nocopy, which means that only updated tracks are copied over from the source. You can query the status of a clone, including the status of the copy process, with the third command below. The copy status will be either 'CopyInProg' or 'Copied'.

symclone -g CLONEDB -tgt activate -consistent
symclone -g CLONEDB activate DEV001 sym ld DEV002
symclone -g CLONEDB query

If the clone was started with the -differential option, it is possible to refresh the clone copy to the current point in
time. To do this you need to issue the recreate then activate commands below.

symclone -g CLONEDB -tgt recreate


symclone -g CLONEDB -tgt activate -consistent

You use restore to recover a volume or group back to its point-in-time state. This can be to the original volume or to a new volume. You need the -force option if your source volume is in an active RDF session with remote R2 devices. The symclone query command will show the status as 'Restore in Progress' or 'Restored'. Once the restore completes you need to split the clone before you can re-establish cloning in the normal direction.

symclone -g CLONEDB -tgt restore -force
symclone -g CLONEDB restore DEV001 sym dev 0041
symclone -g CLONEDB query
symclone -g CLONEDB split

Use terminate to break a clone relationship into discrete volumes, but the clone must be in the 'Copied' state or the data on it will not be complete.

symclone -g CLONEDB query


symclone -g CLONEDB terminate DEV001 sym ld DEV002

Secondary IP   HostName          Source LUN   Tgt(BCV) LUN
10.3.48.28     DC1EnrDFSC1D2     5BA1         5C61
10.3.48.30     DC1EnrDFSC1D4     5BC1         5C81
10.3.48.32     DC1EnrDFSC1D6     5BE1         5CA1
10.3.48.33     DC1EnrDFSC1D7     3821         5C41
10.3.48.36     DC1EnrDFSC1D10    5C21         5D21
10.3.48.41     DC1EnrDFSC1D15    31C1         5CC1
10.3.48.43     DC1EnrDFSC1D17    3201         5CE1
10.3.48.44     DC1EnrDFSC1D18    3221         5D01
10.3.48.81     DC1EnrBIDFSCD1    36E1         5D41
10.3.48.82     DC1EnrBIDFSCD2    5AE1         5D61
10.3.48.83     DC1EnrBIDFSCD3    36C1         5D81
10.3.48.84     DC1EnrBIDFSCD4    5AC1         5DA1

Steps for adding new source LUNs & associating the target devices with them:

symdg create SHRED_Mongo_DG -type regular


symld -g SHRED_Mongo_DG add devs 37C1,5BA1,37E1,5BC1,3801,5BE1,3821,5C01 -sid 2901
symdg show SHRED_Mongo_DG
symbcv -g SHRED_Mongo_DG associate dev 5C41:5D21
symdg show SHRED_Mongo_DG
symclone -g SHRED_Mongo_DG create -copy -v -nop
symclone -g SHRED_Mongo_DG query

Device Group (DG) Name: SHRED_Mongo_DG
DG's Type             : REGULAR
DG's Symmetrix ID     : 000292602901

    Source Device                    Target Device                 State       Copy
--------------------------------- ---------------------------- ------------- ----
           Protected  Modified                   Modified
Logical  Sym  Tracks    Tracks    Logical  Sym   Tracks   CGDP  SRC <=> TGT   (%)
--------------------------------- ---------------------------- ------------- ----
DEV001   3821   292831        0   BCV001   5C41        0  XXX.  CopyInProg     99
DEV002   5BA1   417587        0   BCV002   5C61        0  XXX.  CopyInProg     98
DEV003   5BC1  1218025        0   BCV003   5C81        0  XXX.  CopyInProg     96
DEV004   5BE1  2192703        0   BCV004   5CA1        0  XXX.  CopyInProg     93
DEV005   31C1   740308        0   BCV005   5CC1        0  XXX.  CopyInProg     97
DEV006   3201   568366        0   BCV006   5CE1        0  XXX.  CopyInProg     98
DEV007   3221   532000        0   BCV007   5D01        0  XXX.  CopyInProg     98
DEV008   5C21   598911        0   BCV008   5D21        0  XXX.  CopyInProg     98
DEV009   3101   295228        0   BCV009   5DC1        0  XXX.  CopyInProg     99
DEV010   3841   212195        0   BCV010   5DE1        0  XXX.  CopyInProg     99

Total          --------  --------                 --------
  Track(s)      7068154         0                        0
  MB(s)          441760         0                        0

symclone -g SHRED_Mongo_DG activate

Steps for adding a single source LUN & associating the target device with it:

symld -g SHRED_Mongo_DG add dev 3821


symdg show SHRED_Mongo_DG
symbcv -g SHRED_Mongo_DG associate dev 5C41

Steps for creating & copying the data for a Device Group (SHRED_Mongo_DG):

symdg show SHRED_Mongo_DG


symclone -g SHRED_Mongo_DG create -copy -v -nop
symclone -g SHRED_Mongo_DG query
symclone -g SHRED_Mongo_DG activate

Steps for incremental copying the data for a single device pair for SHRED_Mongo_DG:
symclone -g SHRED_Mongo_DG recreate DEV001 sym ld BCV001
symclone -g SHRED_Mongo_DG query
symclone -g SHRED_Mongo_DG activate DEV001 sym ld BCV001

Steps for incremental copying the data for the whole device group SHRED_Mongo_DG:
symclone -g SHRED_Mongo_DG recreate -v -nop
symclone -g SHRED_Mongo_DG query
symclone -g SHRED_Mongo_DG activate
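
Before putting the clone copies to use, you may want to confirm that every pair has reached the 'Copied' state. A minimal check, assuming the same device group (symclone verify succeeds once all pairs are in the requested state):

symclone -g SHRED_Mongo_DG verify -copied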

VMAX Allocation Steps:

symaccess -sid 4072 list view


symaccess -sid 2702 list assignment -dev 27E4:27E5
symaccess -sid 2702 list -type initiator | grep BPMALQ40
check_logins.ksh 10000000C9C07EA9
check_logins.ksh 10000000C9C07EA8
symaccess -sid 2702 list -type port
symaccess -sid 2702 list -type initiator | grep BPMALQ40

Create Initiator Group:


symaccess -sid 4072 -name IG_PSYKLX04 -type initiator create
symaccess -sid 4072 -name IG_PSYKLX04 -type initiator -wwn 10000000C9BFC812 add
symaccess -sid 4072 -name IG_PSYKLX04 -type initiator -wwn 10000000C9BFC813 add

Create Storage Group:


symaccess -sid 2702 create -type storage -name SG_BPMALQ40 devs 27E4:27E5
symaccess -sid 2702 -type storage show SG_BPMALQ40
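
Create Port Group:

The masking view below references a port group called PG_05E0_12E0, which these notes do not show being created. A sketch of how such a group might be built, assuming directors 5E and 12E, port 0 (the -dirport values are illustrative):

symaccess -sid 2702 create -name PG_05E0_12E0 -type port -dirport 5E:0,12E:0
symaccess -sid 2702 -type port show PG_05E0_12E0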

Create Masking View:


symaccess -sid 2702 create view -name MV_BPMALQ40 -sg SG_BPMALQ40 -pg PG_05E0_12E0 -ig
IG_BPMALQ40

symaccess -sid 2702 show view MV_BPMALQ40

In case of adding an extra device:


symaccess -sid 4072 -name SG_BPMALQ40 -type storage add devs 05C5,05D5,06S5

Adding to Composite Group:


symcg create <NAME> -type REGULAR|RDF1|RDF2|RDF21|ANY

symcg list | grep ppmalq


symcg -sid 2702 -cg ppmalq add dev 27E4
symcg -sid 2702 -cg ppmalq add dev 27E5
symcg list | grep ppmalq
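
To confirm the devices were added to the composite group, display it with symcg show:

symcg show ppmalq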

SRDF
SRDF mirroring modes
SRDF has 4 modes. The choice basically depends on whether you want the best possible performance, or to be
absolutely sure that your data is consistent between sites.

Synchronous (SRDF/S)
In this mode, a copy of the data must be stored in cache in both local and remote machines before the calling application is signalled that the I/O is complete. This means that data consistency between sites is guaranteed. If the remote Symmetrix is more than 15 km away, this can significantly degrade performance.
When SRDF mirroring is running in SYNC mode, it is also possible to switch on the 'domino effect'. If you then
get a problem with a disk or the SRDF links so that mirroring cannot proceed, the Symm places the other disk
into 'not ready' mode, so it cannot be accessed by the host until the problem is fixed.
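Domino is switched on per device group; for example, using the group-name placeholder from the composite commands later in this section:

symrdf -g group-name set domino on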

Semi-synchronous (SRDF/A)
The data on a secondary logical volume can be one write I/O behind the primary, which may sound almost as good as Synchronous, but Semi-synch will not give you I/O consistency across volumes. The local Symmetrix will return Channel End/Device End once a write I/O is safely in the local cache, and then it sets the logical volume to busy status, so it will not accept any more writes. SRDF then passes the write I/O to the remote Symmetrix, and once it is safely stored in cache there, the busy flag is removed from the logical volume.
The advantage of Semi-synch is that the application does not have to wait for the remote I/O to complete, so
performance does not suffer.
The disadvantage is that in a disaster there is no guarantee that all the I/Os that an application thinks it
completed actually made it to the remote site. There could be several write I/Os queued up in the local controller
(one for each logical disk) and these are processed by a FIFO queue. If an application is sending I/Os to more
than one controller, there is no FIFO synchronisation between controllers so the remote data could be
inconsistent.

Adaptive copy - write pending (SRDF/AR)

Data is written asynchronously to the secondary device and can be up to 65,535 I/Os behind the primary. Data which has not yet been copied is called 'dirty tracks', and the number of dirty tracks permissible is set by a 'skew value' parameter. If the skew value is exceeded, the mode switches to Synchronous or Semi-synchronous until the remote Symm catches up; at that point, it switches back to adaptive copy - write pending mode. Adaptive copy is useful where sites are too far apart for synchronous operation and some data loss is acceptable.

Adaptive Copy - Disk (SRDF/DM)

This mode is intended for electronically moving data between sites. There is no I/O consistency across volumes; data is simply moved without any acknowledgement.
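
The mode for a device group is selected with the set mode command. A sketch using the same group-name placeholder; the sync keyword appears later in these notes, while the semi, acp_wp and acp_disk keywords are assumptions based on the mode names above:

symrdf -g group-name set mode sync
symrdf -g group-name set mode semi
symrdf -g group-name set mode acp_wp
symrdf -g group-name set mode acp_disk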

SRDF Volume terminology and Device Groups


R1, Source volume, the production volume that is accessed by the user, equivalent to a PPRC primary volume
R2, Target volume, the mirrored copy of a source volume, equivalent to a PPRC secondary volume
Local volume, simply a non-mirrored volume. (EMC often use the term 'mirroring' to describe RAID1 protection,
which can cause confusion, as a local volume can be RAID1 mirrored. In this context, mirroring means remote
mirroring between symms.)
SRDF volumes must be formed into device groups; a device group is just a set of volumes that need to all be handled in the same way. There are three different types of device group, corresponding to the three types of device above. The RDF group types are RDF1 and RDF2, and normal disk groups are type REGULAR. You define device groups with the commands below, where r1_devg_001 and r2_devg_001 are just names; you can call yours whatever you like. If you do not specify a -type parameter, the group type defaults to REGULAR.

symdg create r1_devg_001 -type RDF1


symdg create r2_devg_001 -type RDF2

You then add devices to the correct device groups. The devices must be the correct type, local, R1 or R2, and
must be in the same symm. The devices themselves can be standard, RAID or BCV, as long as they match the
group type.

symdg list
symld -g r1_devg_001 add dev 01c

Volumes can be in three possible states.

Not ready (NR) - can't be accessed by the host at all

Write Disabled (RO) - can be accessed by the host for read only

Write enabled (RW) - can be accessed by the host for read and write

The actual status of a volume depends on its SRDF state and its Channel Interface (CI) state. A source volume has six different possible combinations of states, and a target volume has nine.
The desirable state for a source volume is SRDF state=RW and CI state=RW, so volume state=RW. If a primary volume's CI state is RW but the SRDF state is NR, it may be possible to access the data from the target volume, if it is in the correct state.
The desirable state for a target volume is SRDF state=RO and CI state=RW, so volume state=RO.
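
To see the current SRDF and device states for every pair in a group, query it; a minimal example assuming the r1_devg_001 device group defined above:

symrdf -g r1_devg_001 query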

SRDF and Consistency Groups


An SRDF group is basically a set of SRDF director ports in a local symm that are configured to connect to
another set of SRDF director ports in a remote symm. SRDF groups can be static or dynamic. Static group
definitions are held in the bin file and are usually maintained by EMC staff. Dynamic RDF groups are maintained
using CLI commands. They are often called RA groups or RDF groups and the three terms more or less mean
the same thing. The command to create a new SRDF group looks like this, but many of the parameter values will
depend on your site.

symrdf addgrp -label your_name -rdfg r1_devg_001 -sid 1234 -dir 4C -remote_rdfg r2_devg_001 -remote_sid 4567 -remote_dir 2C

To add devices to your SRDF group, create a file that contains devices in pairs, where the first device is the R1 and the second device is the corresponding R2, like this. It is always best if you can arrange devices so there is a straight correspondence between R1 and R2. Then you run the command below to pick them up. The text file here is called rdf_list.txt.

0220 0320
0221 0321
0222 0322
0223 0323

symrdf createpair -sid 1234 -rdfg r1_devg_001 -type rdf1 -file rdf_list.txt -establish

Once you create an SRDF group, you can use composite SRDF commands to control all the disks in that group.
For example

symrdf -g group-name failover

You can use this command to fail an entire consistency group over to the DR site. It will write disable the source volumes, set the link to Not Ready, and write enable the target volumes.
To Failback, that is restore service to your primary site, use the command

symrdf -g group-name failback

This will write disable the target (remote) disks, suspend the RDF link, merge changed disk tracks, resume the
link then write enable the source disks.
While failback is in progress, you do not have a remote DR position. You can speed the failback operation up by
copying invalid tracks before write disabling any disks with the command

symrdf -g group-name update

If you want to split the SRDF managed disks, that is stop mirroring and allow the disks at both sites to be updated
independently, then you need the split command. This suspends the RDF link and write-enables the target disks.

symrdf -g group-name split

And once you do this, you will probably want to go back to an SRDF mirrored state again, so you need the
establish command

symrdf -g group-name -full establish

This will write disable the target disks, suspend the RDF link, copy data from source to target, then resume the RDF link.
The restore command does this the other way around. It will copy the data from the target disk back to the
source. The command is

symrdf -g group-name -full restore

This write disables both source and target disks, suspends the RDF link, merges the track tables, resumes the RDF link, then write enables R1.
Other useful commands, which should be self-explanatory, are:

symrdf -g group-name suspend
symrdf -g group-name resume
symrdf -g group-name set mode sync
symrdf -g group-name set domino on
symrdf -g group-name set acp_disk skew 1000

A Consistency Group is a collection of volumes, in one or more Symmetrix units, that need to be kept in a consistent state. If a write to a Symmetrix cannot be propagated to the remote site, the Symmetrix will hold the I/O for a fixed period of time. At the same time it presents a SIM (service information message) back to the host. The ConGroup started task will detect the SIM and issue the equivalent of a PPRC FREEZE to all the other Symmetrix units online to that host. All volumes in that Consistency Group will then be suspended. Once they are all suspended, the equivalent of a PPRC RUN is issued and I/O can complete, including the first I/O that triggered the SIM.
Consistency Group processing with SRDF does not lose data because it employs a FREEZE/RUN approach similar to PPRC FREEZE/RUN.
To create a consistency group, add devices to it and enable it, use the commands below.

symcg create r1_cg001 -type rdf1
symcg -cg r1_cg001 -sid 1234 add dev 0220
symcg -cg r1_cg001 -sid 0011 add dev 001C
symcg -cg r1_cg001 enable

SRDF data replication software from EMC probably has better functionality than PPRC, but it used to have one
major failing when used on an IBM mainframe, its command set was totally different. What? Well, SRDF
commands only work on EMC disks. Other vendors such as HDS took the IBM PPRC command set, and
interpreted it to run their own replication software, so the underlying code is different, but the command set is the
same. This meant that you could run a disk farm of IBM and HDS disks, and control all the mirroring using one
set of commands. EMC did have a half-way solution; you could run a mainframe started task that intercepted the
PPRC commands and converted them to SRDF commands before passing them down the channel. This was far
from ideal, and prone to error. However, EMC have now joined the fold; they will now accept native PPRC
commands at the Symmetrix, and convert them into SRDF commands in the microcode.
