Technical Notes
P/N 300-004-905
Rev A01
Introduction
The EMC RecoverPoint system provides full support for data
replication and disaster recovery with AIX-based host servers.
This document presents information and best practices relating to
deploying the RecoverPoint system with AIX hosts.
RecoverPoint can support AIX hosts with host-based, fabric-based, and array-based splitters. The outline of the installation tasks is similar for all splitter types, and similar to installation with any other host, but the specific procedures differ slightly. The work flow of the installation tasks for all splitter types is:
1. Creating and presenting volumes to their respective hosts at each location; LUN masking and zoning host initiators to storage targets (not covered in this document).
2. Creating the file system or setting raw access on the host.
3. Configuring RecoverPoint to replicate volumes (LUNs): configuring volumes, replication sets, and consistency group policies; attaching to splitters; and first-time initialization (full synchronization) of the volumes.
4. Validating failover and failback.
AIX and replicating boot-from-SAN volumes
RecoverPoint does not support replicating AIX boot-from-SAN volumes when using host-based splitters. When using fabric-based or array-based splitters, boot-from-SAN volumes are replicated the same way as data volumes.
Supported configurations
Consult the EMC Support Matrix for RecoverPoint for information about supported RecoverPoint configurations, operating systems, cluster software, Fibre Channel switches, storage arrays, and storage operating systems.
Environmental prerequisites
This document assumes you have already installed a RecoverPoint
system and are either replicating volumes or ready to replicate. In
other words, it is assumed that the RecoverPoint ISO image is
installed on each RecoverPoint appliance (RPA); that initial
configuration, zoning, and LUN masking are completed; and that the
license is activated. In addition, it is assumed that AIX hosts with all
necessary patches are installed both at the production side and the
replica side. In the advanced procedures (LPAR-VIO, HACMP, etc.),
it is assumed that the appropriate hardware and environment are set
up.
Ensuring fast_fail mode for AIX-based hosts
For AIX hosts using host-based splitters, the replicated device's FC SCSI I/O Controller Protocol Device attribute (fc_err_recov) must be set to fast_fail. Although this setting is not mandatory for fabric-based or storage-based splitters, it is required by some multipathing software; check your multipathing software's user manual.
To check the current setting of the attribute, run:
# lsattr -El <FC_SCSI_I/O_Controller_Protocol_Device>
For example:
# lsattr -El fscsi0
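As a sanity check, the fc_err_recov value can be parsed out of the lsattr output. The sketch below runs against a captured sample line rather than a live adapter; the adapter name fscsi0 is only an example, and on a real host you would pipe the live lsattr output in instead.

```shell
# Sketch: confirm fc_err_recov is fast_fail before relying on a host-based
# splitter. A captured sample of "lsattr -El fscsi0" output stands in for
# a live AIX host here.
sample='fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True'
policy=$(printf '%s\n' "$sample" | awk '$1 == "fc_err_recov" {print $2}')
if [ "$policy" = "fast_fail" ]; then
  echo "fscsi0: fast_fail already set"
else
  # chdev with -P defers the change until the device is next configured
  echo "fscsi0: policy is '$policy'; run: chdev -l fscsi0 -a fc_err_recov=fast_fail -P"
fi
```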
PowerPath device mapping
If you need to know the relationship between PowerPath logical devices (as used by the Logical Volume Manager) and the hdisks (as seen by the host), use the following PowerPath command:
# powermt display dev=<powerpath_device_#>
Example:
# powermt display dev=0
AIX and SCSI reservation
By default, AIX hosts use SCSI reservation. Whether RecoverPoint can support SCSI reservation depends on the SCSI reservation type (SCSI-2 or SCSI-3) and the code level of the storage array. If RecoverPoint cannot support the SCSI reservation, it must be disabled on the AIX host.
SCSI-3 reservation is supported if the consistency group's Reservation support is enabled in RecoverPoint.
SCSI-2 reservation is supported with host-based splitters according to Table 1 on page 6.
Table 1  SCSI-2 reservation support with host-based splitters

Storage                      Stand-alone host    Host in cluster
Symmetrix (5772 or later)    OK                  OK
Disabling SCSI reservations on AIX host
A RecoverPoint appliance (RPA) cannot access LUNs that have been reserved with SCSI-2 reservations. For the RPA to be able to use those LUNs during replication, AIX SCSI-2 reservation on those LUNs must be disabled; that is, the AIX disk attribute reserve_policy must be set to no_reserve. For more information on the reserve_policy attribute, search for reserve_policy at www.ibm.com. On some AIX systems, reserve_lock = no is used instead of reserve_policy = no_reserve.
• If the device is busy, use the same command with the addition of the -P flag, which defers the change until the device is next configured:
# chdev -l hdisk1 -a reserve_policy=no_reserve -P
Then run chdev again as before. Then reactivate the volume groups and mount the file systems as follows:
# varyonvg <volume_group>
# mount /dev/<logical_volume_name> /<mount_point>
Restart applications.
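Before replication starts, it is worth auditing every replicated hdisk for a leftover reservation policy. The sketch below parses captured per-disk output (the hdisk names and values are illustrative; on a live host the second column would come from "lsattr -El hdiskN -a reserve_policy") and flags any disk not yet set to no_reserve.

```shell
# Sketch: flag hdisks whose reserve_policy is not no_reserve. The sample
# text stands in for "lsattr -El hdiskN -a reserve_policy" run per disk
# on a live AIX host; disk names and values here are illustrative.
sample='hdisk1 single_path
hdisk2 no_reserve
hdisk3 no_reserve'
needs_change=$(printf '%s\n' "$sample" | awk '$2 != "no_reserve" {print $1}')
for disk in $needs_change; do
  echo "$disk: run chdev -l $disk -a reserve_policy=no_reserve"
done
```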
Configuring RecoverPoint
To use RecoverPoint with AIX hosts, the following tasks must be
completed:
◆ Installing RecoverPoint appliances and configuring the
RecoverPoint system
◆ Installing splitters on AIX servers
◆ Required device adjustments when using fabric-based splitters
◆ Configuring consistency groups for the AIX servers
◆ Performing first-time failover
Installing splitters on AIX servers
To install splitters on AIX servers, refer to the EMC RecoverPoint Installation Guide for your version of RecoverPoint.

Required adjustments when using fabric-based splitters with AIX servers
When using the AIX operating system, adjustments to the RecoverPoint system are required because of the way AIX uses FC IDs and Physical Volume Identifiers.
FC ID
The AIX operating system uses the Fibre Channel identifier (FC ID) assigned by the fabric switch as part of the device path to the storage target. Fabric-based splitters rely on changing the FC ID to reroute I/Os. When a volume is attached to a fabric-based splitter, its FC ID may change. As a result, the AIX operating system may not recognize a volume that it accessed previously.
When using Brocade switches, to allow hosts running AIX to identify the volume with the new FC ID, hosts must remove the storage devices (using rmdev -d) and rediscover them (using cfgmgr). When using SANTap switches, manually set the FC ID to be persistent.
This procedure is required for all production and all replica volumes. If the volumes are unbound, it will be necessary to repeat this procedure.
SANTap-based splitter:
a. When a volume is attached to a SANTap-based splitter, manually make the FC ID of storage volumes persistent. Refer to the section "Persistent FC ID" in EMC Deploying RecoverPoint with SANTap Technical Notes for instructions.
b. When the persistent FC ID is used, no additional special procedures are required, because the FC ID is not altered.
If persistent FC ID is not used, the following procedure must be carried out:
1. After moving the AIX initiators to the front-end VSAN, remove the volume from the host:
# rmdev -dl <volume>
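For a host with several affected volumes, the remove-and-rediscover cycle can be scripted. The sketch below only echoes the commands (a dry run), and the hdisk names are placeholders; drop the "echo" to execute on a live host.

```shell
# Sketch: dry run of the remove/rediscover cycle after an FC ID change.
# Replace the placeholder hdisk names with the affected volumes; cfgmgr
# runs once, after all removals, to rediscover the devices.
volumes="hdisk2 hdisk3"
for v in $volumes; do
  echo rmdev -dl "$v"
done
echo cfgmgr
```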
Physical Volume Identifier
RecoverPoint replicates at the block level. As soon as the replica storage is initialized for replication, the Physical Volume Identifier (PVID) of production storage devices is copied to replica storage devices. However, the Object Data Manager at the replica side knows the replica volume by the PVID it had before initialization, and will not recognize the volume with the replicated PVID. To make the replica host recognize the volumes, run the "First-time failover" procedure.
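The PVID copy can be observed with lspv on both sides. In this sketch, captured sample lines stand in for the two hosts; the disk and volume group names are hypothetical.

```shell
# Sketch: confirm the production PVID has been replicated to the replica
# device. Sample "lspv" lines stand in for live hosts; names are made up.
prod='hdisk2 00c8a12be3f40d11 datavg active'
replica='hdisk7 00c8a12be3f40d11 None'
prod_pvid=$(printf '%s' "$prod" | awk '{print $2}')
rep_pvid=$(printf '%s' "$replica" | awk '{print $2}')
if [ "$prod_pvid" = "$rep_pvid" ]; then
  echo "PVID $prod_pvid replicated; run first-time failover so the replica ODM learns it"
fi
```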
First-time failover
You must perform this procedure on a consistency group before you can access images or fail over the consistency group, because RecoverPoint will change the Physical Volume Identifier of the volumes. Carry out the following procedure after first-time initialization of a consistency group:
1. Ensure that the initialization of the consistency group has been completed.
2. At the production host, stop I/O to volumes being replicated, and unmount the production file systems:
# sync
# umount <mount_point>
# varyoffvg <volume_group_name>
Failing over and failing back
After performing first-time failover, subsequent failovers do not require disabling and enabling the storage devices.

Planned failover
A planned failover is used to make the remote side the production side, allowing planned maintenance of the local side. For more information about failovers, refer to the EMC RecoverPoint Administrator's Guide.

Failing back
For instructions to fail back to the production side, refer to the EMC RecoverPoint Administrator's Guide.
Virtual I/O

Overview
The Virtual I/O Server is part of the IBM System p Advanced Power Virtualization hardware feature. Virtual I/O Server allows sharing of physical resources between logical partitions (LPARs), including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.
For more information about the Virtual I/O Server, search at www.ibm.com for Virtual I/O Server Advanced Power Virtualization.
RecoverPoint and Virtual I/O
Deploying RecoverPoint in a system with Virtual I/O Server and the IBM System p Advanced Power Virtualization hardware feature requires detailed knowledge of volumes, and an understanding of the implications and special handling of Virtual I/O configuration. The following procedure illustrates the recommended practices for such deployment.
The steps for configuring Virtual I/O in several basic RecoverPoint scenarios are presented below.
Note: Due to the nature of the Virtual I/O implementation, you may replicate
a disk volume only between two Virtual I/O systems or between two
non-Virtual I/O systems. RecoverPoint does not support replicating between
a Virtual I/O system and a non-Virtual I/O system.
[Figure: Virtual I/O configuration. A storage array presents a LUN to the Virtual I/O server within an AIX LPAR; the Virtual I/O server configures a volume group on that LUN and maps vdisks from the volume group to the Virtual I/O clients.]
Storage—a storage array, which contains the user volume(s) and the
RecoverPoint volumes (optional).
LUN—the LUN from the storage which is to be replicated with
RecoverPoint and which contains the user data, from which the
virtual disks are created.
AIX LPAR—dynamic partitioning of server resources into logical
partitions (LPARs), each of which can support a virtual server and
multiple clients.
Virtual I/O Server—the instance in the LPAR which runs the server
portion of the Virtual I/O. This instance uses the physical HBA(s)
and sees the ‘real world’.
vdisk—the portion of the LUN presented to the VIO client by the
VIO server.
Virtual I/O Client—the instance in the LPAR which runs the client
portion of the Virtual I/O. The client does not have access to the
physical devices (HBA, LUN, etc.) but gets a virtual device via the
Virtual I/O mechanism instead. Its disk is of the type Virtual SCSI
Disk Drive.
Volume group—The Virtual I/O server has a volume group
configured on the storage LUN. One of the volumes in the volume
group is mapped to the Virtual I/O client.
First-time failover
The need for first-time failover is explained in "Required adjustments when using fabric-based splitters with AIX servers" on page 8. After first-time initialization of a consistency group:
1. Ensure that the initialization of the consistency group has been
completed.
2. At the production host, stop I/O to replicated volumes, and
unmount the file systems:
# sync
# umount <mount_point>
# varyoffvg <volume_group_name>
12. Wait (if needed) for the full resynchronization to finish. Then, at the replica-side virtual I/O client, test the data integrity as follows:
a. Enable image access.
b. At the replica virtual I/O client, import the volume group:
# importvg -y <volume_group_name> <virtual_disk#>
# varyoffvg <volume_group_name>
# exportvg <volume_group_name>
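Run as a dry run, the command sequence for the replica-side check and cleanup looks like this. The volume group and virtual disk names are placeholders, and echo prints each command rather than executing it; remove the "echo" on a live client.

```shell
# Sketch: dry run of the replica-side integrity check and cleanup.
# Names are placeholders for the replica volume group and virtual disk.
vg=datavg
disk=hdisk7
for cmd in "importvg -y $vg $disk" "varyoffvg $vg" "exportvg $vg"; do
  echo "$cmd"
done
```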
Failover
After the first failover has been completed ("First-time failover" on page 10), subsequent failovers do not require disabling and enabling the storage devices.

Planned failover
A planned failover is used to make the remote side the production side, allowing planned maintenance of the local side. For more information about failovers, refer to the EMC RecoverPoint Administrator's Guide.
1. Ensure that the first initialization for the consistency group has
been completed.
2. At the production-side virtual I/O client, stop the applications.
Then flush the local devices and unmount the file systems (stop
I/O to the mount point):
# sync
# umount <mount_point(s)>
# varyoffvg <volume_group_name>
8. Then, at the replica-side virtual I/O client, test the data integrity as follows:
a. Enable image access.
b. At the replica virtual I/O client, import the volume group:
# importvg -y <volume_group_name> <virtual_disk#>
Failing back
For instructions to fail back to the production side, refer to the EMC RecoverPoint Administrator's Guide.
3. On the replica-side virtual I/O server, map the disks to the virtual I/O client:
# ioscli mkvdev -vdev <device_name> -vhost <vhost_name> -dev <virtual_lun_name>
Failing back For instructions to fail back to the production side, refer to the EMC
RecoverPoint Administrator’s Guide.
The same command will also sync, umount, and varyoffvg the
volume group.
3. Access an image. For instructions, refer to “Accessing a Replica”
in the EMC RecoverPoint Administrator’s Guide.
4. When using host-based splitters on the replica host, detach all
volumes from the replica-side host-based splitters.
5. At the replica-side host, force AIX to reread volumes. Run the
following commands.
a. For each storage device, update the disk information:
# chdev -l <device name> -a pv=yes
# lspv
6. Disable image access.
7. When using host-based splitters on the replica side, you must
reattach volumes. Attaching volumes to splitters will trigger a full
synchronization. Since storage volumes are not mounted at this
point, no data has changed. As a result, if you need to avoid a full
resynchronization because of bandwidth limitations, you may
clear markers (attach as clean) and not resynchronize the
volumes. The best practice is to allow a full resynchronization at
this point and not to clear markers or attach as clean.
a. Reattach the volumes to those host-based splitters. For
instructions, refer to the EMC RecoverPoint Administrator’s
Guide.
b. Repeat this step for each volume that you detached in Step 4
on page 10.
8. Bring the resource group back on-line at the replica side. Use the
following command:
# clRGmove -s false -u -i -g <resource_group_name> -n <node_name>
The same command will also vary on the volume group and
mount the file systems.
9. At the replica-side host, test the data integrity. Verify that all
required data is on the volume.
10. After testing is completed, disable image access at the replica
side.
11. Bring the resource group back on-line at the production side. Use
the following command:
# clRGmove -s false -u -i -g <resource_group_name> -n <node_name>
The same command will also vary on the volume group and
mount the file systems.
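Step 5a above can be looped over all replica-side devices. The sketch below is a dry run with placeholder hdisk names, echoing the commands instead of executing them; drop the "echo" to execute, then verify the PVIDs with lspv.

```shell
# Sketch: dry run of refreshing the PVID on every replica-side storage
# device after image access is enabled. Disk names are placeholders.
disks="hdisk4 hdisk5"
for d in $disks; do
  echo chdev -l "$d" -a pv=yes
done
echo lspv
```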
Failing over and failing back
The procedures for failing over and failing back HACMP clusters are identical to the procedures for stand-alone hosts, except for the commands for taking resources off-line and bringing them on-line at the active side.
The same command will also sync, umount, and varyoffvg the
volume group.
Use the following command to bring resources on-line:
# clRGmove -s false -u -i -g <resource_group_name> -n <node_name>
The same command will also vary on the volume group and
mount the file systems.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN
THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.