
White Paper

Virtual Data Movers on EMC VNX

Abstract
This white paper describes the high availability and mobility capabilities of the Virtual Data Mover (VDM) technology delivered in the EMC VNX series of storage platforms. VDMs are delivered standard with the VNX Operating Environment.

July 2012



Copyright 2012 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as
of its publication date. The information is subject to change
without notice.

The information in this publication is provided "as is." EMC
Corporation makes no representations or warranties of any kind
with respect to the information in this publication, and
specifically disclaims implied warranties of merchantability or
fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in
this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC
Corporation Trademarks on EMC.com.

VMware is a registered trademark or trademark of VMware, Inc.
in the United States and/or other jurisdictions. All other
trademarks used herein are the property of their respective
owners.

Part Number h10741




Table of Contents
Executive summary
    Audience
Technology introduction
    VDMs on VNX
        Physical Data Mover versus VDM
Managing VDMs on VNX
    Rollout or feature activation
    User interface choices
    Performance considerations
    Troubleshooting
Planning considerations
    Planning or tools
        VDM Names
        VDM root file system size, layout, and inode density
        Internationalization modes
        Backing up VDMs
        Antivirus
        Name resolution when moving VDMs
        VDM Replication
    Replicating a VDM for NFS with ESX datastore
VDM states and functionalities
    Create a fully functioning and active VDM
        Restrict access to the VDM associated file systems
    Create a VDM as a replication destination
Use cases and test results
    Disaster Recovery with Replication, Checkpoints, and VDMs
Interoperability
Limitations
Conclusion
References



Executive summary
A Virtual Data Mover (VDM) is an EMC VNX software feature that enables the grouping of CIFS and/or NFS file systems and servers into virtual containers. Each VDM contains all the data necessary to support one or more CIFS or NFS servers (or both) and their file systems.
By using VDMs, it is possible to separate CIFS and/or NFS servers from each other and
from the associated environment. VDMs allow replicating or moving CIFS and/or NFS
environments to another local or remote Data Mover. They also support disaster
recovery by isolating and securing independent CIFS and/or NFS server configurations
on the same Data Mover. This means you do not have to fail over the entire physical
Data Mover; you can pick and choose which VDMs to fail over.
VNX provides a multinaming domain solution for the Data Mover in the UNIX
environment by enabling the implementation of NFS server(s) per VDM. The Data
Mover hosting several VDMs is able to serve the UNIX clients that are members of
different Lightweight Directory Access Protocol (LDAP) or Network Information Service
(NIS) domains, assuming that each VDM works in a unique domain namespace.
Similar to the CIFS service, several NFS domains are added to the physical Data Mover
in order to provide access to the VDM for different naming domains. Each of these
NFS servers is assigned to one or more Data Mover network interfaces.
Audience
This white paper is intended for storage and networking administrators, customers
and other users interested in getting a good understanding and overview of the
standard VDM functionality that is included in the VNX series of storage systems.

Technology introduction
VDMs contain the data needed to support one or more CIFS or NFS servers, or both,
and their file systems. Each VDM has access only to the file systems mounted to that
VDM, which provides logical isolation between multiple VDMs. When a VDM is created,
a root file system is created for the VDM. This is the file system that stores the CIFS or
NFS identity information, such as local groups, shares, security credentials, and audit
logs for the CIFS servers created within the VDM. Data file systems are mounted to
mount points that are created on the root file system, and user data is kept in user file
systems. Only one root file system is created per VDM. The root file system is 128 MB.
VDMs can be implemented for several reasons:
• VDMs enable replication of segregated CIFS environments.
• VDMs enable the user to separate or isolate CIFS and/or NFS servers to provide a higher level of security. This is particularly valuable in multitenant environments.
• VDMs enable the separation of CIFS configurations (home directories, usermapping, and so on), which provides the flexibility to serve multiple disparate CIFS customers.
• VDMs allow a physical system to appear as many virtual servers for simplified consolidation.
• VDMs can be moved from one Data Mover to another to help load balance Data Mover resources and enhance application performance.
• VDMs help to ensure that data is always available and accessible with minimal disruption to business in the event of a disaster.
The multinaming domain solution implements an NFS server per VDM, called an 'NFS
endpoint'. The VDM is used as a container that includes the file systems exported
by the NFS endpoint or the CIFS server, or both. These file systems on the VDM are
visible through a subset of the Data Mover network interfaces attached to the
VDM. The same network interface can be shared by both CIFS and NFS protocols
on that VDM.
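
As a hedged sketch of how an NFS endpoint is tied to specific interfaces, the following Control Station commands illustrate attaching an interface to a VDM; the VDM and interface names (vdm_eng, cge0_nfs1) are hypothetical, and the exact options are documented in Configuring Virtual Data Movers for VNX:

# Attach a Data Mover network interface to the VDM so that its
# NFS endpoint is served through that interface (names are examples)
nas_server -vdm vdm_eng -attach cge0_nfs1

# Verify which interfaces are attached to the VDM
nas_server -info -vdm vdm_eng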

VDMs on VNX
VDMs are containers that organize and segregate file systems to different CIFS or NFS
servers, are readily movable, and provide higher availability.
In a storage environment with limited resources and budgets, the commonly faced challenges are:
• Providing greater accessibility to users with remote access, without compromising security.
• Reducing complexity by redesigning the file server directory architecture. This is very labor intensive and can impact the clients' ability to access data.
• Ensuring high availability of data and managing catastrophic events, which are critical and time-sensitive. When production data becomes inaccessible, administrators need to find a solution to restore data availability. Migrating individual file systems to alternative storage can be a tedious process, and as the storage environment grows, the level of difficulty increases.
• Maximizing server resources and reducing the TCO of managing physical servers.
Storage administrators can use VDMs to:
• Reduce the demands on IT, simplify workload management, and increase productivity.
• Consolidate file systems and CIFS/NFS servers into a single group so that they can be migrated between Data Movers quickly and easily. For example, Internet Service Providers (ISPs) have data from various departments that must be hosted on the same physical Data Mover. This can all be added to a VDM, allowing it to be moved all at once.
• Separate VLANs and mount file systems for different home directories, thus isolating home directory databases and their associated users.
• Maintain high availability and reduce potential downtime and business disruption during a catastrophic event.
• Provide a multinaming domain solution for the Data Mover in a UNIX environment.

Physical Data Mover versus VDM

A Data Mover is a component, running its own operating system, that retrieves data from a storage device and makes it available to a network client. The Data Mover can use the NFS, CIFS, or pNFS protocols.

The configuration and control data is stored in the root file system of a physical Data
Mover, and includes:

• Databases (including localgroup.db and homedir, CIFS shares, and Registry directories)
• VDM root file systems (which store the VDM configuration files)
• Configuration files (passwd, group, viruschecker.conf, Group Policy Object (GPO) cache, and Kerberos file)
• Audit and event log information
• Data file system mount points (appear as directories)
• BranchCache
All VDM CIFS event logs are stored in the global event log of the physical Data Mover.
Each VDM stores its configuration information in its respective VDM root file system, which is the .etc directory within the root file system of the physical Data Mover. The VDM configuration file is a subset of the configuration files on the physical Data Mover.
In addition, there are operations that can only be performed at the physical Data
Mover level. Operations performed at the Data Mover level affect all VDMs.
These operations include:
• Stop, start, and delete services (CIFS, MPFS, viruschk)
• Data Mover failover
Figure 1 shows the VDM configuration within the physical Data Mover.


Figure 1: VDM root file systems within a physical Data Mover

Table 1 shows which configuration files are maintained at the physical Data Mover level and which are maintained at the VDM level. The files marked only under the physical Data Mover cannot be configured at the VDM level.
Table 1: Data Mover configuration files

Configuration files                                 Physical Data Mover   Virtual Data Mover
Default CIFS database                               X
Security                                            X
Networking                                          X
Internationalization mode                           X
Virus checker                                       X
Parameters                                          X
Standby Data Mover assignment and failover policy   X
Local group                                                               X
HomeDir                                                                   X
Share and repository directories                                          X
File systems                                                              X
Kerberos                                                                  X

A default CIFS server and those CIFS servers that are within a VDM cannot coexist on
the same Data Mover. A default CIFS server is a global CIFS server assigned to all
interfaces, and CIFS servers within a VDM require specified interfaces. If a VDM exists
on a Data Mover, a default CIFS server cannot be created.
The maximum number of VDMs per VNX array corresponds to the maximum number of file systems per Data Mover, which is 2048. Each VDM has a root file system, which reduces the total count by one, and any file system created on a VDM decreases the total further. Realistically, a VDM is never created without being populated, so there must be at least two file systems per VDM, which decreases the maximum number of VDMs per VNX to 1023 (1024 - 1). There are, of course, other factors contributing to the number of file systems on a Data Mover, such as checkpoints, storage pools, and other internal file systems. The total number of VDMs, file systems, and checkpoints cannot exceed 2048 per Data Mover.
Figure 2 shows a physical Data Mover with the VDM implementation. A VDM is an independent subset of a physical Data Mover's environment. For instance, GPO policies can be applied to either the physical Data Mover or the VDM.



Figure 2: Data Mover with VDM implementation

Managing VDMs on VNX
Rollout or feature activation
During rollout, the following rules apply to VDMs:
• During the planning and design phase, it is considered a best practice to implement VDMs prior to constructing your environment.
• Network configuration should also be considered, for example, NTP, DNS, and domain controllers.
• You will need to create the file systems, CIFS servers or shares, NFS endpoints, and home directories within the given VDMs.
Administrators should consider the following before creating, configuring, or managing VDMs:
• Establish a naming convention that indicates the function and easily identifies the VDM and its root file system.
• Allocate appropriate space for a VDM and its configuration file system. The default size is 128 MB.
• Consider file system groupings, as well as which CIFS servers and/or NFS endpoints reside on which physical Data Mover.
User interface choices
Administrators use the Unisphere GUI to create and manage VDMs. Alternatively, administrators can run the nas_server CLI command with associated options to create and configure VDMs.
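
For example, the following is a hedged sketch of listing and inspecting VDMs from the Control Station; the VDM name vdm_marketing is illustrative:

# List the VDMs configured on the system
nas_server -list -vdm

# Display the attributes (state, root file system, mounted
# file systems) of a specific VDM
nas_server -info -vdm vdm_marketing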
Performance considerations
Customers should expect no performance impact: VDMs perform in the same way as the physical Data Mover. A user's ability to access data from a VDM is no different from accessing data that resides on the physical Data Mover.

Troubleshooting
While working with VDMs, an administrator may have to troubleshoot the following issues:
• CIFS and/or NFS server accessibility errors: When no accessible CIFS or NFS servers are found, check the VDM and VNX system logs (just as you would when troubleshooting a physical Data Mover). Optionally, verify the state of the VDM. If the VDM is in the mounted state, try changing the VDM to the loaded state before verifying again. In addition, check that the CIFS server is still joined to its domain and that the IP interface is up.
Troubleshoot this error by performing these tasks:
o Display the CIFS protocol configuration
o Display the attributes of a VDM
o Change the VDM state
o Display the attributes of a VDM again to verify the change
• VDM unloading errors: Do not unload a VDM with mounted file systems. File systems must be unmounted prior to unloading a VDM.
Troubleshoot this error by performing these tasks (example commands follow this list):
o Display a list of mounted or temporarily unmounted file systems
o Display the attributes of a VDM
o Unload a VDM temporarily
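
A hedged sketch of the Control Station commands behind these tasks; the VDM name vdm_marketing is illustrative:

# Display the CIFS protocol configuration of the VDM
server_cifs vdm_marketing

# Display the attributes (including the current state) of the VDM
nas_server -info -vdm vdm_marketing

# Change the VDM state from mounted to loaded
nas_server -vdm vdm_marketing -setstate loaded

# List the file systems mounted on the VDM
server_mount vdm_marketing

# Temporarily unload the VDM after unmounting its file systems
nas_server -vdm vdm_marketing -setstate tempunloaded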


Planning considerations
Planning or tools
When planning a field deployment of VDMs, you must consider the following:
VDM Names
By naming a VDM, you can indicate its function and easily identify the VDM and its root file system. For example, you might group Marketing CIFS servers together into a VDM named Marketing, so that you can easily identify the VDM.
You must specify a unique name when you create a VDM. If a VDM name is not specified, a default name is assigned in the form vdm_<x>, where <x> is a unique integer. The VNX assigns the VDM root file system a name in the form root_fs_vdm_<vdm_x>, where <vdm_x> is the name of the VDM. For example, the root file system for a VDM named Marketing is named root_fs_vdm_Marketing. (If you name a VDM vdm_Marketing, the VNX does not duplicate the vdm part of the name; the root file system is still named root_fs_vdm_Marketing.)
If you rename a VDM, its root file system is renamed accordingly. For example, if you rename a VDM to HR, its root file system name is changed to root_fs_vdm_HR. You cannot manually change a VDM root file system name without first changing the VDM name.
VDM root file system size, layout, and inode density

A root file system is assigned to a VDM when it is created. The default size is 128 MB, the same as for a physical Data Mover. In an environment with a large number of users or shares, you might need to increase the size of the root file system; it is not extended automatically.
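
As a hedged illustration, the root file system can be extended manually with the nas_fs command; the file system name, size, and pool below are examples:

# Add 128 MB to the VDM root file system (for a 256 MB total)
nas_fs -xtend root_fs_vdm_Marketing size=128M pool=clar_r5_performance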
Internationalization modes
The VNX supports clients in environments that use multibyte character sets. Multibyte
character sets are supported by enabling universal character encoding standards
(Unicode).
When a VDM is created, its Internationalization mode is set to the same mode as the
Data Mover in which it resides. When the VDM is unloaded, its mode matches the last
physical Data Mover on which it was loaded.

Backing up VDMs
A full path is required to back up VDM-mounted file systems with NDMP backup and the server_archive command. An NDMP example is /root_vdm_Marketing/fs.
A server_archive example is:
server_archive <movername> -w -f /dev/clt4l0 -J /root_vdm_Marketing/fs
Antivirus
In addition to CIFS servers that are created within a VDM, a global CIFS server is
required for antivirus functionality. A global CIFS server is a CIFS server created at the
physical Data Mover level. This means you must have at least one CIFS server on the
physical Data Mover if you plan on using an antivirus solution.
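
As a hedged sketch, a global CIFS server for the antivirus infrastructure might be created on the physical Data Mover as follows; the computer name, domain, interface, and administrator account are all examples:

# Create a global CIFS server at the physical Data Mover level
server_cifs server_2 -add compname=dm2_global,domain=corp.example.com,interface=cge0_av

# Join the global CIFS server to the Windows domain
server_cifs server_2 -Join compname=dm2_global,domain=corp.example.com,admin=administrator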
Name resolution when moving VDMs
Consider name resolution when moving a VDM to another physical Data Mover and
using different IP addresses. The name resolution method depends on whether the
environment uses Domain Name System (DNS), the Windows Internet Naming Service
(WINS), or a static file.
DNS
DNS configurations on the server side must be considered when moving VDMs:
• The destination Data Mover DNS resolver must be correctly configured to satisfy the DNS resolution requirements for the VDM (see the server_dns command).
• The VDM load operation updates the DNS servers configured on the physical Data Mover.
• The administrator can force DNS database updates to all DNS servers to which the clients might be pointing.
• When you move the VDM to the destination Data Mover, the DNS server is automatically updated and displays the correct entry.
• The update time between DNS servers depends on how the DNS servers are configured.
• All the name resolutions for the VDM are confined to the VDM if the VDM name server domain configuration is enabled (see the server_nsdomains command).

DNS configurations on the client-side must be considered when moving VDMs.
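
As a hedged example of checking and configuring the destination-side resolver with server_dns (the Data Mover name, domain, and addresses are illustrative):

# Display the DNS resolver configuration on the destination Data Mover
server_dns server_3

# Configure the DNS domain and name servers the VDM expects
server_dns server_3 corp.example.com 10.10.10.10,10.10.10.11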

WINS
When moving VDMs in an environment using WINS, consider the following:
• The VDM load operation updates the WINS servers configured on the VDM.
• The update time between WINS servers depends on how the WINS servers are configured.
• Administrators can force WINS database updates to all WINS servers to which the clients might be pointing.
• To clear and update the WINS cache for clients, use one of the following methods:
o Restart the computer.
o Wait until the cache is cleared (TTL of 10 minutes).
o From the DOS prompt, run the command nbtstat -R (preferred method).
Static file (LMHOSTS/Host)
The static file (Host) used by clients must be updated with the IP address of each new
CIFS server.
Replication
The names of the network interfaces assigned to the VDM should be identical on the source and destination Data Movers.
To design a VDM solution, create the following components within the VDM:
• Mounted file system
• CIFS server
• CIFS share
When designing VDMs, a best practice is to create them with file access in mind. Administrators should also consider load balancing the VDMs on a given physical Data Mover.
VDM Replication
In order for a CIFS object to be fully functional and accessible on a remote VNX system, the administrator must copy the complete CIFS working environment, including local groups, user mapping information, Kerberos objects, shares, event logs, and Registry information, to the destination site. In addition to the environment information, the administrator must replicate the file systems associated with the CIFS server. Not all file systems need to be replicated; only those with a disaster recovery SLA defined must be, although it is common to see the entire set replicated.
VDM replication accommodates the CIFS working environment and replicates the information contained in the root file system of the VDM, which includes the CIFS working environment information mentioned above. This form of replication produces a point-in-time copy of a VDM that recreates the CIFS environment at a destination. However, VDM replication does not include the replication of the file systems mounted to the VDM; data file systems are replicated as separate replication sessions. The replication failover procedure verifies that the remote VNX Data Mover has one or more network interfaces named the same as those configured on the replicating VDM. This check is applied to all interfaces attached to the VDM, regardless of which protocol they are bound to. The procedure also verifies that the required NIS, LDAP, and DNS naming domains for the VDM are correctly configured in the NIS, LDAP, and DNS resolvers on the destination Data Mover.
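
A hedged sketch of creating the VDM and data file system sessions with nas_replicate; the session names, pool ID, interconnect name, and sync interval are all examples:

# Replicate the VDM itself (its root file system) to the remote VNX
nas_replicate -create rep_vdm_marketing -source -vdm vdm_marketing -destination -pool id=40 -interconnect NYs2_MIAs2 -max_time_out_of_sync 10

# Each data file system mounted to the VDM is a separate session
nas_replicate -create rep_fs_eng_user -source -fs Eng_User -destination -pool id=40 -interconnect NYs2_MIAs2 -max_time_out_of_sync 10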
In the case of crash recovery, the VDM loaded on the failover site sends STATD notification messages to the NFS clients that held locks before the crash of the production site. The clients are then able to restore their locks during the NLM grace
period window offered by the failover Data Mover.
A source VDM can replicate to up to four destination VDMs concurrently. A destination VDM must be in a read-only state (called the mounted state, as opposed to the loaded state, which is read-write). A destination VDM can, in turn, be a source for up to three other replications. Because a given VDM can act as both a source and a destination for multiple replications, one-to-many and cascade configurations are supported. Replication of VDMs requires that the destination side have interfaces with the same names for the VDM to attach to for network connectivity.
The Properties page of the VDM shows the replication on that VDM if the VDM is part of a replication session.
In Unisphere, navigate to Storage > Shared Folders > CIFS. Click the VDMs tab and open the Properties of the VDM you would like to see.
Figure 3 shows the Replication Storage tab for a VDM.



Figure 3: Replication Storage tab for VDM
Replicating a VDM for NFS with ESX datastore

When replicating a VDM for NFS file systems that are used for ESX datastores, you
must use the following procedure to ensure that the VMs do not go offline.

Note: The ESX server will lose access to the datastore if these steps are not followed
exactly in the given order.

This procedure assumes that your NFS datastore is connected to the VNX through an IP address.

1. Fail over the production file system that is mounted on the VDM.
2. Down the interface on the source.
3. Fail over the VDM.
4. Up the interface on the destination.
CAUTION: If the VDM is failed over before the user file system, the ESX server receives an NFS3ERR_STALE (Invalid File Handle) error, and the ESX client then considers the file system to be down.
Note: To avoid this error, when the production file system that is mounted on a VDM is failed over, the VDM interface is set down using the server_ifconfig command on the source. The VDM is then failed over. As the VDM restarts on the replication site, the VDM interface can be set up using the server_ifconfig command.
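
A hedged sketch of the sequence, assuming a single replicated file system, example session names, and an interface named cge0_nfs1:

# 1. Fail over the production file system mounted on the VDM
nas_replicate -failover rep_fs_datastore

# 2. Down the VDM interface on the source Data Mover
server_ifconfig server_2 cge0_nfs1 down

# 3. Fail over the VDM itself
nas_replicate -failover rep_vdm_esx

# 4. Up the interface on the destination Data Mover
server_ifconfig server_2 cge0_nfs1 up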
VDM states and functionalities
Configure a VDM from the loaded state to any of the following states:
• Mounted
• Temporarily unloaded (tempunloaded)
• Permanently unloaded (permunloaded)
Table 2: VDM states and functionalities

VDM state              CIFS active   User file systems   VDM reloaded at boot   Failover   Failback
Loaded                 Yes           Accessible          Yes                    No         Yes
Mounted                No            Inaccessible        No                     Yes        No
Temporarily unloaded   No            Inaccessible        Yes                    No         No
Permanently unloaded   No            Inaccessible        No                     No         No
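
The state transitions in Table 2 map onto nas_server -setstate operations; a hedged sketch with an example VDM name:

# Loaded -> mounted (read-only; CIFS/NFS service stops)
nas_server -vdm vdm_marketing -setstate mounted

# Loaded -> temporarily unloaded (file systems must be unmounted first)
nas_server -vdm vdm_marketing -setstate tempunloaded

# Return the VDM to the fully active state
nas_server -vdm vdm_marketing -setstate loaded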

Create a fully functioning and active VDM

The default state upon VDM creation is loaded. In the loaded state, the VDM is fully
functional and active. The CIFS servers are running and serving data. A VDM must be
in the loaded state to allow most configuration changes, such as the addition of CIFS
servers or NFS endpoints.

You can load a VDM on only one physical Data Mover at a time. If you need to move
the VDM, the network interfaces used by its CIFS servers must be available on the
destination Data Mover. VDMs within a VDM are not supported.

Before creating a VDM, consider the following (a hedged creation example follows this list):
• If you do not specify the state, the VDM is created in the loaded (default) state.
• If you do not specify a name for the VDM or its root file system, a default name is assigned. See Planning or tools for more information about VDM naming conventions.
• The VDM root file system is created from an existing storage pool. The selection of the storage pool is based on whether the physical Data Mover is in a mirrored configuration; in a mirrored configuration, a mirrored storage pool is selected.
• The default size of the root file system is 128 MB (as for a physical Data Mover).
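
A hedged creation example; the VDM name and storage pool are illustrative:

# Create a VDM on server_2 in the default loaded state, specifying
# the pool from which its 128 MB root file system is allocated
nas_server -name Marketing -type vdm -create server_2 -setstate loaded pool=clar_r5_performance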

Restrict access to the VDM associated file systems
Unlike a state change from loaded to mounted, a state change from loaded to
unloaded (tempunloaded or permunloaded) requires the VDM to have no mounted
file systems. The VDM root file system is not deleted and is available to be mounted.
This transition to the unloaded state is required when you want to stop all activity on
the VDM, but do not want to delete the VDM root file system with its configuration
data.
Changing the VDM state from loaded to mounted, tempunloaded, or permunloaded shuts down the CIFS servers and/or NFS endpoints in the VDM, making the file systems inaccessible to the clients through the VDM. When unloading a VDM from a
physical Data Mover, you must specify whether you intend for the unload operation to
be permanent or temporary.
When you move a VDM from one physical Data Mover to another, the Control Station performs the following operations:
• Unloads the VDM from the source Data Mover
• Unmounts all file systems on that VDM
• Loads the VDM and mounts all file systems on the target Data Mover
Before you move a VDM, consider the following (a hedged move example follows this list):
• For every CIFS server in a VDM, you must ensure that that CIFS server's interfaces are available and identically named on the physical Data Mover to which you are moving the VDM.
• Consider the amount of time that it will take to update the network information needed to make the VDM available after the VDM has been moved to a physical Data Mover with a different IP address. Network information might include, but is not limited to, static files, DNS, and WINS.
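
As a hedged sketch, a move can be driven from the Control Station; the VDM and target Data Mover names are examples, and the exact option is documented in Configuring Virtual Data Movers for VNX:

# Move the VDM and its mounted file systems to server_3; identically
# named interfaces must already exist on server_3
nas_server -vdm vdm_marketing -move server_3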
Create a VDM as a replication destination
VNX Replicator automatically creates a destination VDM in the correct state. However, there are scenarios that require creating a destination VDM manually. For such scenarios, create the target VDM in the mounted state. A mounted VDM is inactive, and any associated CIFS servers or NFS endpoints are unavailable. No data changes can be made except those replicated from the source.
You can replicate data from the VDM root file system on the source side to the VDM root file system on the destination side. In the event of failover when using VNX Replicator, the destination side might be called on to act as the source. VDM file systems on the destination side include the configuration data needed to change the VDM state from mounted to loaded.
Note: The state change is performed by the VNX Replicator. Do not attempt to switch it
manually.
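
When the destination VDM must be created by hand, as described above, it is created directly in the mounted state; a hedged example with illustrative names:

# Create the destination VDM in the read-only mounted state
nas_server -name vdm_marketing -type vdm -create server_2 -setstate mounted pool=clar_r5_performance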

Use cases and test results
Disaster Recovery with Replication, Checkpoints, and VDMs

Note: The tasks listed in the VDM states and functionalities section were carried out by the system administrators.
The most common use case scenario for VDMs is described below:
Company ABC experienced substantial growth in the demand for its services in a year and also projected growth of approximately 40 percent over the next few years. To support this growth, the company planned to expand the business to a location outside of Boston (ABC's primary site). ABC has established another data center in Miami (the secondary site) and designated it as the disaster recovery site. Management realizes the importance of having a data recovery plan for their CIFS environment. They need to ensure that data is always available by asynchronously replicating over IP to the data center in Miami.
Company ABC has the following requirements:
• Leverage the SnapSure and VDM technology for data recovery at the destination site.
• Monitor the status and proper function of this replication to the destination site.
• Give the destination site read/write permissions to temporarily service the orphaned source-site users in the event of a disaster.
• Resume using the source site when it becomes available after a failover.
Testing was done to ensure that data is always available by asynchronously
replicating over IP to the data center in Miami.
Boston
To support the disaster recovery plan, the administrator in Boston must first confirm
that VDMs are in the loaded state. Most often, VDMs are in the loaded state because
the CIFS shares or NFS exports on these VDMs are already active and data is
accessible by all employees. Each server in a CIFS or NFS environment has associated
file systems and supporting attributes.
In addition, checkpoints have been created by the active checkpoint schedules on
the source site to allow users and system administrators to perform file-level restores.
By using checkpoints, replication might also recover from a situation in which the
source and destination file systems fall out of synchronization.
Figure 4 shows the configuration of Boston (the primary site). The \\eng_ne server is contained within the Engineering VDM and can see only the file systems for that specific VDM. There is only one root file system (root_fs_vdm_Engineering) for each VDM. The names of the source user file systems are:
• Eng_User
• Eng_Shared
The share names for the source user file systems are:
• User_Files
• Shared_Files

Figure 4: Configuration of primary site
Miami
The VNX administrator needs to configure replication sessions for the Boston VDM and the two file systems. VNX Replicator creates correctly sized and appropriately mounted VDMs and data file systems on the Miami VNX. Additionally, the VNX administrator needs to create and start a checkpoint schedule that runs at the same intervals as in Boston.
This architecture meets the company's requirements, as follows:
• The administrator has set up disaster recovery replication to the destination site by replicating the VDM. This gives all users access to the CIFS servers and user file systems.
• The status and proper function of the replication pairs are monitored by monitoring the status of the VDM replication.
• In the event of a catastrophic disaster, the VNX administrator executes a VNX Replicator failover on all replication sessions from the Miami VNX. VNX Replicator then switches the VDMs to the loaded state and remounts the data file systems in read-write mode.
Note: The checkpoint schedule will continue to run without the knowledge of the VNX
Replicator failover.
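
A hedged sketch of the failover step, run from the Miami (destination) Control Station; the session names are examples:

# Fail over each replication session (the VDM session and the
# data file system sessions)
nas_replicate -failover rep_vdm_engineering
nas_replicate -failover rep_fs_eng_user
nas_replicate -failover rep_fs_eng_shared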


Figure 5: Disaster recovery replication


Interoperability

VDMs interoperate with other VNX features as follows:
• After a VDM is created, CIFS servers and shares can be loaded into the VDM. Separating CIFS servers allows more flexibility to replicate content between CIFS servers.
• The VNX Home Directory feature can leverage VDM capabilities to replicate home directory environments between VNX cabinets. If the source site is lost in a disaster scenario or taken down for maintenance, the destination site can be brought up and all of the home directory services function as they did before the event.

Limitations

The following limitations currently apply to VDMs:
• A VDM supports only the CIFS and NFSv3 protocols over TCP. Other file protocols, such as FTP, SFTP, FTPS, and NFSv4, are not supported. NFS clients must support NFSv3 over TCP to connect to an NFS endpoint.
• The VDM for NFS multidomain feature must be administered by using the CLI; the GUI is not supported.
• The VDM feature does not support iSCSI, FTP, or SFTP.
• IP replication failover support for Local Groups must include VDMs.
• The Replay cache (XID cache) is shared among all VDMs.


Conclusion
The Unisphere GUI enables the user to create and effortlessly manage VDMs. VDMs require initial configuration but do not need daily management, and they can greatly enhance storage system functionality. They help simplify workload management by consolidating file systems and CIFS and/or NFS servers into groupings that can be migrated between Data Movers quickly and easily. VDMs are also a strong addition to a disaster recovery strategy, with VDMs replicated to a remote VNX.

References
The following documents, located on Powerlink at the paths below, provide additional information:
• Configuring Virtual Data Movers for VNX, Product Documentation:
Home > Support > Technical Documentation and Advisories > Hardware/Platforms Documentation > VNX Series > General Reference
• Using VNX Replicator, Product Documentation:
Home > Support > Technical Documentation and Advisories > Hardware/Platforms Documentation > VNX Series > Maintenance/Administration
• Configuring NFS on VNX, Product Documentation:
Home > Support > Technical Documentation and Advisories > Hardware/Platforms Documentation > VNX Series
