
FalconStor CDP/NSS

ADMINISTRATION GUIDE



Version 7

FalconStor Software, Inc.
2 Huntington Quadrangle, Suite 2S01
Melville, NY 11747
Phone: 631-777-5188
Fax: 631-501-7633
Web site: www.falconstor.com

Copyright 2001-2012 FalconStor Software. All Rights Reserved. FalconStor Software, IPStor, DynaPath, HotZone, SafeCache, TimeMark, TimeView, and ZeroImpact are either registered trademarks or trademarks of FalconStor Software, Inc. in the United States and other countries. Linux is a registered trademark of Linus Torvalds. Windows is a registered trademark of Microsoft Corporation. All other brand and product names are trademarks or registered trademarks of their respective owners. FalconStor Software reserves the right to make changes in the information contained in this publication without prior notice. The reader should in all cases consult FalconStor Software to determine whether any such changes have been made. This product is protected by United States Patents Nos. 7,093,127 B2; 6,715,098; 7,058,788 B2; 7,330,960 B2; 7,165,145 B2; 7,155,585 B2; 7,231,502 B2; 7,469,337; 7,467,259; 7,418,416 B2; 7,406,575 B2, and additional patents pending.
72312.7.0


Contents
Introduction
Network Storage Server (NSS) . . . . . 14
Continuous Data Protector (CDP) . . . . . 16
Architecture . . . . . 16
Components . . . . . 17
Acronyms . . . . . 19
Terminology . . . . . 21
Web Setup . . . . . 26

FalconStor Management Console


Launch the console . . . . . 28
Connect to your storage server . . . . . 29
Configure your server using the configuration wizard . . . . . 30
Step 1: Enter license keys . . . . . 30
Step 2: Setup network . . . . . 30
Step 3: Set hostname . . . . . 31
FalconStor Management Console user interface . . . . . 32
Discover storage servers . . . . . 33
Protect your storage server's configuration . . . . . 33
Manage licenses . . . . . 34
Set server properties (Updated May 2012) . . . . . 36
Manage accounts . . . . . 42
Change the root user's password . . . . . 45
Check connectivity between the server and console . . . . . 46
Add an iSCSI User or Mutual CHAP User . . . . . 46
Apply software patch updates (updated April 2012) . . . . . 48
Server patches . . . . . 48
Console patches . . . . . 49
Perform system maintenance . . . . . 50
Physical Resources . . . . . 54
Physical resource icons . . . . . 55
Prepare devices to become logical resources . . . . . 55
Rename a physical device . . . . . 56
Use IDE drives with CDP/NSS . . . . . 57
Rescan adapters . . . . . 57
Import a disk . . . . . 59
Test physical device throughput . . . . . 60
Manage multiple paths to a device . . . . . 60
Repair paths to a device . . . . . 60
Logical Resources . . . . . 62
Logical resource icons . . . . . 63
Enable write caching . . . . . 64
Replication . . . . . 64
SAN Clients . . . . . 65
Add a client from the FalconStor Management Console . . . . . 65
Add a client for FalconStor host applications . . . . . 66
Change the ACSL . . . . . 67
Grant access to a SAN Client . . . . . 67
Console options . . . . . 68
Create a custom menu . . . . . 69

Storage Pools
Manage storage pools and the devices within storage pools . . . . . 70
Create storage pools . . . . . 71
Set properties for a storage pool . . . . . 72

Logical Resources
Types of SAN resources . . . . . 76
Virtual devices . . . . . 76
Thin devices . . . . . 77
Service-Enabled devices . . . . . 79
Create SAN resources - Procedures . . . . . 80
Prepare devices to become SAN resources . . . . . 80
Create a virtual device SAN resource . . . . . 80
Add virtual disks for data storage . . . . . 88
Create a SAN Client for VMware ESX server . . . . . 91
Create a Service-Enabled Device SAN resource . . . . . 92
Assign a SAN resource to one or more clients . . . . . 96
Discover devices from a client . . . . . 100
Windows clients . . . . . 100
Solaris clients . . . . . 100
Expand a virtual device . . . . . 102
Expand a Service-Enabled Device . . . . . 105
Grant access to a SAN resource . . . . . 105
Unassign a SAN resource from a client . . . . . 106
Delete a SAN resource . . . . . 106

CDP/NSS Server
Start the CDP/NSS appliance . . . . . 107
Stop the CDP/NSS appliance . . . . . 108
Log into the CDP/NSS appliance . . . . . 109
Use Telnet . . . . . 109
Check CDP/NSS processes . . . . . 110
Check physical resources . . . . . 113
Check activity statistics . . . . . 114
Remove a physical storage device from a storage server . . . . . 115
Configure iSCSI storage . . . . . 115
Configuring iSCSI software initiator . . . . . 115
Configuring iSCSI hardware HBA . . . . . 116
Uninstall a storage server . . . . . 117

iSCSI Clients
Requirements . . . . . 119
Configure iSCSI clients . . . . . 120
Enable iSCSI . . . . . 120
Configure your iSCSI initiator . . . . . 120
Add your iSCSI client in the FalconStor Management Console . . . . . 121
Create storage targets for the iSCSI client . . . . . 126
Restart the iSCSI initiator . . . . . 127
Windows iSCSI clients and failover . . . . . 127
Disable iSCSI . . . . . 127

Logs and Reports


Event Log . . . . . 128
Sort information in the Event Log . . . . . 129
Filter information stored in the Event Log . . . . . 129
Refresh the Event Log . . . . . 130
Print/Export Event Log . . . . . 130
Reports . . . . . 131
Set report properties . . . . . 131
Create an individual report . . . . . 132
View a report . . . . . 136
Export data from a report . . . . . 136
Schedule a report . . . . . 137
E-mail a scheduled report . . . . . 138
Report types . . . . . 138
Client Throughput Report . . . . . 138
Delta Replication Status Report . . . . . 139
Disk Space Usage Report . . . . . 141
Disk Usage History Report . . . . . 142
Fibre Channel Configuration Report . . . . . 145
Physical Resources Configuration Report . . . . . 146
Physical Resources Allocation Report . . . . . 147
Physical Resource Allocation Report . . . . . 148
Resource IO Activity Report . . . . . 148
SCSI Channel Throughput Report . . . . . 150
SCSI Device Throughput Report . . . . . 152
SAN Client Usage Distribution Report . . . . . 153
SAN Client/Resources Allocation Report . . . . . 154
SAN Resources Allocation Report . . . . . 155
SAN Resource Usage Distribution Report . . . . . 156
Server Throughput and Filtered Server Throughput Report . . . . . 156
Storage Pool Configuration Report . . . . . 159
User Quota Usage Report . . . . . 160
Report types - Global replication . . . . . 161
Create a global replication report . . . . . 161
View global report . . . . . 161

Fibre Channel Target Mode


Fibre Channel over Ethernet (FCoE) . . . . . 163
Fibre Channel target mode - configuration overview . . . . . 163
Configure Fibre Channel hardware on server . . . . . 164
Ports . . . . . 164
Downstream Persistent binding . . . . . 164
VSA . . . . . 164
Zoning . . . . . 165
Switches . . . . . 166
QLogic HBAs . . . . . 166
Configure Fibre Channel clients . . . . . 168
Enable Fibre Channel target mode . . . . . 170
Disable Fibre Channel target mode . . . . . 170
Verify the Fibre Channel WWPN . . . . . 170
Set QLogic ports to target mode . . . . . 171
Set NPIV ports to target mode . . . . . 172
Set up your failover configuration . . . . . 173
Add Fibre Channel clients . . . . . 174
Associate World Wide Port Names (WWPN) with clients . . . . . 175
Assign virtualized resources to Fibre Channel Clients . . . . . 176
View new devices . . . . . 177
Install and configure DynaPath . . . . . 177
Spoof an HBA WWPN . . . . . 178

SAN Clients
Add a client from the FalconStor Management Console . . . . . 189
Add a client for FalconStor host applications . . . . . 190

Security
System management . . . . . 191
Data access . . . . . 191
Account management . . . . . 192
Security recommendations . . . . . 192
Storage network topology . . . . . 193
Physical security of machines . . . . . 193
Disable ports . . . . . 193

Failover
Overview . . . . . 194
Shared storage failover sample configuration . . . . . 197
Failover requirements . . . . . 198
General failover requirements . . . . . 198
General failover requirements for iSCSI clients . . . . . 199
Shared storage failover requirements . . . . . 199
FC-based Asymmetric failover requirements . . . . . 200
Pre-flight checklist for failover . . . . . 201
Connectivity failure . . . . . 201
Default failover behavior . . . . . 202
Storage device path failure . . . . . 203
Storage device failure . . . . . 203
Storage server or device failure . . . . . 204
Failover restrictions . . . . . 205
Failover setup . . . . . 205
Recreate the configuration repository . . . . . 216
Power Control options . . . . . 216
Check Failover status . . . . . 219
Failover Information report . . . . . 219
Failover network failure status report . . . . . 220
Recover from failover . . . . . 220
Manual recovery . . . . . 220
Auto recovery . . . . . 222
Fix a failed server . . . . . 222
Recover from a cross-mirror disk failure . . . . . 223
Re-synchronize Cross mirror . . . . . 224
Remove Cross mirror . . . . . 224
Check resources and swap if possible . . . . . 224
Verify and repair a cross mirror configuration . . . . . 224
Modify failover configuration . . . . . 229
Make changes to the servers in your failover configuration . . . . . 229
Convert a failover configuration into a mutual failover configuration . . . . . 230
Exclude physical devices from health checking . . . . . 230
Change your failover intervals . . . . . 231
Verify physical devices match . . . . . 231
Start/stop failover or recovery . . . . . 232
Force a takeover by a secondary server . . . . . 232
Manually start a server . . . . . 232
Manually initiate a recovery to your primary server . . . . . 232
Suspend/resume failover . . . . . 233
Remove a failover configuration . . . . . 234
Power cycle servers in a failover setup . . . . . 235
Mirroring and Failover . . . . . 236
TimeMark/CDP and Failover . . . . . 236
Throttle and Failover . . . . . 236
HotZone and Failover . . . . . 236
Enable HotZone using local storage with failover . . . . . 237

Performance
SafeCache . . . . . 239
Configure SafeCache . . . . . 240
Create a cache resource . . . . . 240
Global Cache . . . . . 244
SafeCache for groups . . . . . 245
Check the status of your SafeCache resource . . . . . 245
Configure SafeCache properties . . . . . 245
Disable a SafeCache resource . . . . . 245
HotZone . . . . . 246
Read Cache . . . . . 246
Prefetch . . . . . 246
Configure HotZone . . . . . 247
Check the status of HotZone . . . . . 251
Configure HotZone properties . . . . . 253
Disable HotZone . . . . . 253

Mirroring
Synchronous mirroring . . . . . 254
Asynchronous mirroring . . . . . 255
Mirror requirements . . . . . 256
Enable mirroring . . . . . 256
Create cache resource . . . . . 263
Check mirroring status . . . . . 264
Swap the primary disk with the mirrored copy . . . . . 264
Promote the mirrored copy to become an independent virtual drive . . . . . 264
Recover from a mirroring hardware failure . . . . . 266
Replace a disk that is part of an active mirror configuration . . . . . 266
Expand the primary disk . . . . . 267
Manually synchronize a mirror . . . . . 267
Set mirror throttle . . . . . 268
Set alternative read mirror . . . . . 269
Set mirror resynchronization priority . . . . . 269
Rebuild a mirror . . . . . 271
Suspend/resume mirroring . . . . . 271
Change mirroring configuration options . . . . . 272
Set global mirroring options . . . . . 272
Remove a mirror configuration . . . . . 273
Mirroring and failover . . . . . 273

Snapshot Resource
Create a Snapshot Resource (Updated April 2012) . . . . . 274
Snapshot Resource policy behavior . . . . . 281
Check status of a Snapshot Resource . . . . . 282
Protect your Snapshot Resources . . . . . 283
Options for Snapshot Resources . . . . . 283
Snapshot Resource shrink and reclamation policies . . . . . 284
Enable Reclamation Policy . . . . . 284
Global reclamation policy and retention schedule . . . . . 286
Disable Reclamation . . . . . 287
Check reclamation status . . . . . 288
Shrink Policy . . . . . 288
Shrink a snapshot resource . . . . . 290
Use Snapshot to copy a SAN resource . . . . . 290
Check Snapshot Copy status . . . . . 294
Groups . . . . . 295
Create a group . . . . . 295
Groups with TimeMark/CDP enabled . . . . . 296
Groups with SafeCache enabled . . . . . 296
Groups with replication enabled . . . . . 296
Grant access to a group . . . . . 297
Add resources to a group . . . . . 297
Remove resources from a group . . . . . 299

TimeMarks and CDP


Overview . . . . . 301
Enable TimeMark . . . . . 302
Check TimeMark status . . . . . 308
Check CDP journal status . . . . . 309
Protect your CDP journal . . . . . 310
Add a tag to the CDP journal . . . . . 310
Add a comment or change priority of an existing TimeMark . . . . . 310
Manually create a TimeMark . . . . . 311
Copy a TimeMark . . . . . 312
Recover data using the TimeView feature . . . . . 314
Remap a TimeView . . . . . 321
Delete a TimeView . . . . . 321
Remove TimeView Data . . . . . 322
Set TimeView Policy . . . . . 323
Rollback or roll forward a drive . . . . . 324
Change your TimeMark/CDP policies . . . . . 325
TimeMark retention policy . . . . . 326
Delete TimeViews in batch mode . . . . . 329
Suspend/resume CDP . . . . . 329
Delete TimeMarks . . . . . 329
Disable TimeMark and CDP . . . . . 330
Replication and TimeMark/CDP . . . . . 330

NIC Port Bonding


Enable NIC Port Bonding . . . . . 331
Remove NIC Port Bonding . . . . . 334
Change IP address . . . . . 334

Replication
Overview . . . . . 335
Remote replication . . . . . 335
Local replication . . . . . 335
How replication works . . . . . 336
Delta replication . . . . . 336
Continuous replication . . . . . 336
Configure Replication . . . . . 337
Requirements . . . . . 337
Setup (updated February 2012) . . . . . 337
Create a Continuous Replication Resource . . . . . 347
Check replication status . . . . . 349
Replication tab . . . . . 349
Event Log . . . . . 350
Replication object . . . . . 350
Delta Replication Status Report . . . . . 351
Configure Replication performance . . . . . 352
Set global replication options . . . . . 352
Tune replication parameters . . . . . 352
Assign clients to the replica disk . . . . . 353
Switch clients to the replica disk when the primary disk fails . . . . . 353
Recreate your original replication configuration . . . . . 354
Use TimeMark/TimeView to recover files from your replica . . . . . 355
Change your replication configuration options . . . . . 355
Suspend/resume replication schedule . . . . . 357
Stop a replication in progress . . . . . 357
Manually start the replication process . . . . . 357
Set the replication throttle . . . . . 358
Add a Target Site . . . . . 359
Manage Throttle windows . . . . . 361
Manage Link Types . . . . . 363
Add link types . . . . . 364
Edit link types . . . . . 364
Delete link types . . . . . 364
Set replication synchronization priority . . . . . 365
Reverse a replication configuration . . . . . 365
Reverse a replica when the primary is not available . . . . . 366
Forceful role reversal . . . . . 366
Repair a replica . . . . . 367
Relocate a replica . . . . . 367
Remove a replication configuration . . . . . 368
Expand the size of the primary disk (updated February 2012) . . . . . 369
Replication with other CDP or NSS features . . . . . 370
Replication and TimeMark . . . . . 370
Replication and Failover . . . . . 370
Replication and Mirroring . . . . . 370
Replication and Thin Provisioning . . . . . 370

Near-line Mirroring
Near-line mirroring requirements . . . . . 372
Setup Near-line mirroring . . . . . 372
Enable Near-line Mirroring on multiple resources . . . . . 380
What's next? . . . . . 380
Check near-line mirroring status . . . . . 381
Near-line recovery . . . . . 382
Recover data from a near-line mirror . . . . . 382
Recover data from a near-line replica . . . . . 384
Recover from a near-line replica TimeMark using forceful role reversal . . . . . 387
Swap the primary disk with the near-line mirrored copy . . . . . 390
Manually synchronize a near-line mirror . . . . . 390
Rebuild a near-line mirror . . . . . 390
Expand a near-line mirror . . . . . 391
Expand a service-enabled disk . . . . . 393
Suspend/resume near-line mirroring . . . . . 394
Change your mirroring configuration options . . . . . 394
Set global mirroring options . . . . . 394
Remove a near-line mirror configuration . . . . . 395
Recover from a near-line mirroring hardware failure . . . . . 396
Replace a disk that is part of an active near-line mirror (Updated Jan. 2012) . . . . . 397
Set Recovery Mode . . . . . 397

ZeroImpact Backup
Configure ZeroImpact backup . . . . . 398
Back up a CDP/NSS logical resource using dd . . . . . 401
Restore a volume backed up using ZeroImpact Backup Enabler . . . . . 402

Multipathing
Load distribution . . . . . 404
Preferred paths . . . . . 404
Path management . . . . . 405

Command Line Interface


Install and configure the CLI . . . . . 407
Use the CLI . . . . . 407
Common arguments . . . . . 408
Commands . . . . . 409

SNMP Integration
SNMP Traps . . . . . 424
Implement SNMP support . . . . . 425
Microsoft System Center Operations Manager (SCOM) . . . . . 426
HP Network Node Manager (NNM) i9 . . . . . 427
HP OpenView Network Node Manager 7.5 . . . . . 428
Install . . . . . 428
Configure . . . . . 428
View statistics in NNM . . . . . 429
CA Unicenter TNG 2.2 . . . . . 430
Install . . . . . 430
Configure . . . . . 430
View traps . . . . . 431
View statistics in TNG . . . . . 431
Launch the FalconStor Management Console . . . . . 431
IBM Tivoli NetView 6.0.1 . . . . . 432
Install . . . . . 432
Configure . . . . . 432
View statistics in Tivoli . . . . . 433
BMC Patrol 3.4.0 . . . . . 434
Install . . . . . 434
Configure . . . . . 434
View traps . . . . . 435
View statistics in Patrol . . . . . 435
Advanced SNMP topics . . . . . 436
The snmpd.conf file . . . . . 436
Use an SNMP configuration for multiple storage servers . . . . . 436
IPSTOR-MIB tree . . . . . 437

Email Alerts
Configure Email Alerts (Updated January 2012) . . . . . 458
Modify Email Alerts properties . . . . . 469
Email format . . . . . 470
Limiting repetitive Emails . . . . . 470
Script/program trigger information . . . . . 470

BootIP
Set up BootIP . . . . . 473
Prerequisites . . . . . 474
Create a boot image for a diskless client computer . . . . . 475
Initialize the configuration of the storage Server . . . . . 476
Enable the BootIP from the FalconStor Management Console . . . . . 476
Use DiskSafe to clone a boot image . . . . . 476
Set BootIP properties . . . . . 477
Set the Recovery Password . . . . . 477
Set the Recovery password from the iSCSI user management . . . . . 477
Set the authentication and Recovery password from iSCSI client properties . . . . . 477
Remote boot the diskless computer . . . . . 478
For Windows 2003 . . . . . 478
For Windows Vista/2008 . . . . . 478
Use the Sysprep tool . . . . . 479
For Windows 2003 . . . . . 479
Use the Setup Manager tool to create the Sysprep.inf answer file . . . . . 480
For Windows Vista/2008 . . . . . 481
Create a TimeMark . . . . . 482
Create a TimeView . . . . . 483
Assign a TimeView to a diskless client computer . . . . . 483
Add a SAN Client . . . . . 483
Assign a TimeView to the SAN Client . . . . . 484
Recover Data via Remote boot . . . . . 484
Remotely boot the Linux Operating System . . . . . 486
Remotely install CentOS to an iSCSI disk . . . . . 486
Remote boot from the FalconStor Management Console . . . . . 486
Remote boot from the Client . . . . . 487
BootIP and DiskSafe . . . . . 487
Remote boot and DiskSafe . . . . . 487

Troubleshooting / FAQs
Frequently Asked Questions (FAQ) . . . . . 488
NIC Port Bonding . . . . . 489
Event log . . . . . 489
SNMP . . . . . 490
Virtual devices . . . . . 490
FalconStor Management Console . . . . . 490
Multipathing method: MPIO vs. MC/S . . . . . 491
BootIP . . . . . 493
SCSI adapters and devices . . . . . 494
Failover . . . . . 495
Fibre Channel target mode and storage . . . . . 496
Power control option . . . . . 497
Replication . . . . . 497
iSCSI Downstream Configuration . . . . . 498
Protecting data in a Windows environment . . . . . 498
Protecting data in a Linux environment . . . . . 499
Protecting data in an AIX environment (updated May 2012) . . . . . 499
Protecting data in an HP-UX environment (updated May 2012) . . . . . 499
Logical resources . . . . . 500
Network connectivity . . . . . 500
Jumbo frames support . . . . . 502
Diagnosing client connectivity issues . . . . . 502
Windows Client . . . . . 503
Windows client debug information . . . . . 503
Clients with iSCSI protocol . . . . . 505
Clients with Fibre Channel protocol . . . . . 506
Linux SAN Client . . . . . 506
Storage Server . . . . . 507
Storage server X-ray . . . . . 507
Failover . . . . . 509
Cross-mirror failover on a virtual appliance . . . . . 510
Replication . . . . . 511
TimeMark Snapshot . . . . . 512
Snapshot Resource policy (Updated April 2012) . . . . . 512
SafeCache . . . . . 512
Command line interface . . . . . 513
Service-Enabled Devices . . . . . 513
Error codes . . . . . 514
UNIX SAN Client error codes . . . . . 586
Command Line Interface (CLI) error codes . . . . . 588

Port Usage

SMI-S Integration


SMI-S Terms and concepts . . . . . 614
Enable SMI-S (updated June 2012) . . . . . 614
Use the SMI-S Provider . . . . . 615
Launch the Command Central Storage console . . . . . 615
Add FalconStor Storage . . . . . 615
View FalconStor Devices . . . . . 616
View Storage Volumes . . . . . 616
View LUNs . . . . . 616
View Disks . . . . . 616
View Masking Information . . . . . 617

RAID Management for VS-Series Appliances (Updated 12/1/11)


Prepare for RAID management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .619
Preconfigured storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .620
Unconfigured storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .620
Launch the RAID Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .622
Discover storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .622
Future storage discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .624
Display a storage profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .625
Rename storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .626
Refresh the display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .626
Configure controller connection settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .627
View enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .628
Individual enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .628
Manage controller modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .630
Individual controller modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .631
Manage disk drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .632
Interactive enclosure images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .632
Individual disk drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .634
Configure a hot spare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .635
Remove a hot spare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .635
Manage RAID arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .636
Create a RAID array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .637
Create a Logical Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .638
Individual RAID arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .639
Rename the array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .641
Delete the array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .641
Check RAID array actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .642
Replace a physical disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .642
Logical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .645
Define LUN mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .645
Remove LUN mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .647
Rename LU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .647
Delete Logical Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .648
Logical Unit Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .649
Unmapped Logical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .649
Mapped Logical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .651
Upgrade RAID controller firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .652
Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .653
Filter the event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .653
Clear the event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .653
Monitor storage from the FalconStor Management console . . . . . . . . . . . . . . . . . . . . .654
Storage information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .654
Server information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .655

Index

Introduction
As business IT operations grow in size and complexity, many computing environments are stressed in attempting to keep up with the demand to store and access data. Information and the effective management of the corresponding storage infrastructure are critical to a company's success. Reliability, availability, and disaster recovery capabilities are all key factors in the successful management and protection of data. FalconStor Continuous Data Protector (CDP) and Network Storage Server (NSS) solutions address the growing need for data management, protection, preservation, and integrity.

Network Storage Server (NSS)


FalconStor Network Storage Server (NSS) enables storage virtualization, optimization, and efficiency across heterogeneous storage from any storage system, providing consolidation, business continuity, and automated disaster recovery (DR). NSS enables high availability and data assurance, provides instant recovery with data integrity for all applications, and protects investments across mixed storage environments.

Business continuity is a critical aspect of operations and revenue generation. As businesses evolve, storage infrastructures increase in complexity. Some resources remain under-utilized while others are over-utilized - an inefficient use of power, capacity, and money. FalconStor solutions allow organizations to consolidate storage resources for simple and centralized management with high availability. Complete application awareness provides quick and 100% transactionally consistent data recovery. Automated DR technology simplifies operations. An open, flexible architecture enables organizations to leverage existing IT resources to create an integrated, multi-tiered, cost-efficient storage environment that ensures business continuity.

FalconStor NSS includes FalconStor TimeMark snapshots that work with Snapshot Agents for databases and messaging applications, providing 100% transactional integrity for instant recovery to known points in time - helping IT meet recovery point objectives (RPO) and recovery time objectives (RTO). Data managed by FalconStor NSS may be efficiently replicated via IP using the FalconStor Replication option for real-time disaster recovery (DR) protection. Thin Provisioning helps automate storage resource allocation and capacity management, while virtualization provides centralized management for large, heterogeneous storage environments.

The storage appliance is the central component of the network. It is the storage device that connects to hosts via industry-standard iSCSI (or Fibre Channel) protocols. Before you undertake the activities described in this guide, make sure the appliance has already been racked, connected, and the initial power-on instructions have been completed for the appliance according to the FalconStor Hardware QuickStart Guide that was shipped with the appliance.

Also make sure Web Setup has been completed according to the instructions in the FalconStor NSS Software QuickStart Guide, which was also shipped with the appliance. Once you have connected your NSS hardware, you can discover all storage servers on your storage subnet by selecting Tools --> Discover. For details, refer to Connect to your storage server in the FalconStor Management Console section.

Continuous Data Protector (CDP)


FalconStor Continuous Data Protector (CDP) advanced data protection solutions allow organizations to customize and define protection policies per business application, maximizing IT business operations and profitability. Protecting data from loss or corruption requires thorough, effective planning. Comprehensive data protection solutions from FalconStor provide unified backup and disaster recovery (DR) for continuous data availability. Organizations can recover emails, files, applications, and entire systems within minutes, locally and remotely. Application-level integration ensures quick, 100% transactionally consistent recovery to any point in time. WAN-optimized replication maximizes network efficiency. By fully automating the resumption of servers, storage, networks, and applications in a pre-determined, coordinated process, embedded DR automation technology stages the recovery of complete services - thus facilitating service-oriented disaster recovery. CDP automates disaster recovery for physical and virtual servers, and allows rapid recovery of files, databases, systems, and entire sites while reducing the cost and complexity associated with recovery.

Once you have connected your CDP hardware to your network and set your network configuration via Web Setup, you are ready to protect your data. The host-based CDP method uses a host-based device driver to mirror existing user volumes/LUNs to the CDP appliance. For information on protecting your data in a Windows or Linux environment, refer to the DiskSafe User Guide. For UNIX platforms, such as HP-UX, the native OS volume manager is used to mirror data to the CDP appliance. For information on protecting your data in an AIX, HP-UX, or Solaris environment, refer to the related vendor user guide. Protection can also be set using the FalconStor Management Console; refer to the FalconStor Management Console section.

On the CDP appliance, TimeMark and CDP journaling can be configured to create recovery points to protect the mirrored disk. Replication can also be used for disaster recovery protection. FalconStor Snapshot Agents are installed on the host machines to ensure transactional-level integrity of each snapshot or replica.

Architecture
NSS
FalconStor NSS is available in multiple form factors. Appliances with internal storage are available in various sizes for easy deployment to remote sites or offices. Two FalconStor NSS devices can be interconnected for mirroring and active/active failover, ensuring HA operations. FalconStor NSS gateway appliances can be connected to any external storage arrays, allowing you to leverage the storage systems you have in place. FalconStor NSS can also be purchased as software (software appliance kits) to install on servers.

CDP
FalconStor CDP can be deployed in several ways to best fit your organization's needs. FalconStor CDP is available in multiple configurations suitable for remote offices, branch offices, data centers, and remote DR sites. Appliances with internal storage for both physical and virtual servers are available in various sizes for easy deployment to remote sites or offices. Gateway appliances can be connected to any existing external storage array, allowing you to use and reuse the storage systems you already have in place. FalconStor CDP can also be purchased as a software appliance kit to install on servers or as a virtual appliance that integrates with virtual server technology.

FalconStor CDP can use both a host-based approach and a fabric-based approach to capture and track data changes. For the host-based model, a FalconStor DiskSafe Agent runs on the application server to capture block-level changes made to a system or data disk without impacting application performance. It mirrors the data to a back-end FalconStor CDP appliance, which handles all of the data protection operations. All journaling, snapshot processing, mirroring, and replication occur on the out-of-band FalconStor CDP appliance, so that primary storage I/O remains unaffected. In the fabric-based model, a pair of FalconStor CDP Connector write-splitting appliances is placed into an FC or iSCSI SAN fabric. FalconStor CDP gateway appliances function similarly to switches: they split data writes off to one or more out-of-band FalconStor CDP appliances that provide data protection functionality. The pair of FalconStor connector appliances is always configured in a high availability (HA) cluster to provide fault tolerance.

Components
The primary components of the CDP/NSS storage network are the storage server, SAN Clients, and the FalconStor Management Console. These components all sit on the same network segment, the storage network.

Server
The storage server is a dedicated network storage server. The storage server is attached to the physical SCSI and/or Fibre Channel storage devices on one or more SCSI or Fibre Channel busses. The job of the storage server is to communicate data requests between the clients and the logical (SAN) resources (logically mapped storage devices on the storage network) via Fibre Channel or iSCSI.

SAN Clients
SAN Clients are the actual file and application servers. They are sometimes referred to as IPStor SAN Clients because they utilize the storage resources via the storage server. You can have iSCSI or Fibre Channel SAN Clients on your storage network. SAN Clients access their storage resources via iSCSI initiators (for iSCSI) or HBAs (for Fibre Channel or iSCSI). The storage resources appear as locally attached devices to the SAN Clients' operating systems (Windows, Linux, Solaris, etc.) even though the SCSI devices are actually located at the storage server.

Console
The FalconStor Management Console is the administration tool for the storage network. It is a Java application that can be used on a variety of platforms and allows IPStor administrators to create, configure, manage, and monitor the storage resources and services on the storage network.

Physical Resources
Physical resources are the actual devices attached to the storage server. These can be hard disks, tape drives, device libraries, and RAID cabinets. Clients do not gain access to physical resources; they only have access to logical resources. This means that an administrator must configure each physical resource into one or more logical resources so that they can be assigned to the clients.

Logical Resources
Logical resources are all resources defined on the storage server, including SAN Resources (virtual drives and Service-Enabled Devices), Replica Resources, and Snapshot Groups. A logical resource consists of sets of storage blocks from one or more physical hard disk drives. This allows the creation of Logical Resources that contain a portion of a larger physical disk device or an aggregation of multiple physical disk devices. Understanding how to create and manage Logical Resources is critical to a successful storage network. See Logical Resources for more information.

Acronyms
Acronym    Definition
ACL        Access Control List
ACSL       Adaptor, Channel, SCSI ID, LUN
ALUA       Asymmetric Logical Unit Access
API        Application Programming Interface
BDC        Backup Domain Controller
BMR        Bare Metal Recovery
CCM        Central Client Manager
CCS        Command Central Storage
CDP        Continuous Data Protector
CDR        Continuous Data Replication
CHAP       Challenge Handshake Authentication Protocol
CIFS       Common Internet File System
CLI        Command Line Interface
DAS        Direct Attached Storage
FC         Fibre Channel
FCoE       Fibre Channel over Ethernet
GUI        Graphical User Interface
GUID       Globally Unique Identifier
HBA        Host Bus Adapter
HCA        Host Channel Adapter
IMA        Intelligent Management Administrator
I/O        Input / Output
IPMI       Intelligent Platform Management Interface
iSCSI      Internet Small Computer System Interface
JBOD       Just a Bunch Of Disks
LAN        Local Area Network
LUN        Logical Unit Number
MIB        Management Information Base
MPIO       Microsoft Multipath I/O
NFS        Network File System
NIC        Network Interface Card
NPIV       N_Port ID Virtualization
NSS        Network Storage Server
NTFS       NT File System
NVRAM      Non-volatile Random Access Memory
OID        Object Identifier
PDC        Primary Domain Controller
POSIX      Portable Operating System Interface
RAID       Redundant Array of Independent Disks
RAS        Reliability, Availability, Service
RPC        Remote Procedure Call
SAN        Storage Area Network
SCSI       Small Computer System Interface
SDM        SAN Disk Manager
SED        Service-Enabled Device
SMI-S      Storage Management Initiative Specification
SNMP       Simple Network Management Protocol
SRA        Snapshot Resource Area
SSD        Solid State Disk
VAAI       vStorage APIs for Array Integration
VLAN       Virtual Local Area Network
VSA        Volume Set Addressing
VSS        Volume Shadow Copy Service
WWNN       World Wide Node Number
WWPN       World Wide Port Number
Terminology
Appliance-Based protection
Appliance-based protection, or in-band protection, refers to a storage server that is placed in the data path between an application host and its storage. This allows CDP/NSS to provision the disk back to the application host while allowing data protection services.

Bare Metal Recovery (BMR)
The process of rebuilding a computer after a catastrophic failure. The normal bare metal restoration process is: install the operating system from the product disks, install the backup software (so you can restore your data), and then restore your data.

Central Client Manager (CCM)
A Java console that provides central management of client-side applications (DiskSafe, snapshot agents) and monitors client storage. CCM allows you to manage your clients in groups, enhancing accuracy and consistency of policies across grouped servers, for example, Exchange groups and SharePoint groups. For additional information, refer to the Central Client Manager User Guide.

CDP Gateway
Additional term for a storage server/CDP appliance that is providing Continuous Data Protection.

Command Line Interface
The Command Line Interface (CLI) is a simple interface that allows client machines to perform some of the more common functions currently performed by the FalconStor Management Console. Administrators can use the CLI to automate many tasks, as well as integrate CDP/NSS with their existing management tools. The CLI is installed as part of the CDP/NSS Client installation. Once installed, a path must be set up for Windows clients in order to be able to use the CLI. Refer to the Command Line Interface section for details.

Cross-Mirror Failover
(For virtual appliances only) A non-shared storage failover option that provides high availability without the need for shared storage. Used with virtual appliances containing internal storage. Mirroring is facilitated over a dedicated, direct IP connection. This option removes the requirements of shared storage between two partner storage server nodes. For additional information on using this feature for your virtual appliances, refer to Cross-mirror failover requirements.

DiskSafe Agent
The CDP DiskSafe Agent is host-based replication software that delivers block-level data protection with centralized management for Microsoft Windows-based servers as part of the CDP solution. The DiskSafe Agent delivers real-time and periodic mirroring for both DAS and SAN storage to complement the CDP Journaling feature, TimeMark Snapshots, and Replication.

DynaPath
DynaPath is a load balancing/path redundancy application that ensures constant data availability and peak performance across the SAN by performing Fibre Channel load-balancing, transparent failover, and fail-back services. DynaPath creates parallel active storage paths that transparently reroute server traffic without interruption in the event of a storage network problem. For additional information, refer to the DynaPath User Guide.

E-mail Alerts
Using pre-configured scripts (called triggers), Email Alerts monitors a set of predefined, critical system components (SCSI drive errors, offline device, etc.) so that system administrators are able to take corrective measures within the shortest amount of time, ensuring optimum service uptime and IT efficiency. For additional information, refer to the Email Alerts section.

FalconStor Management Console
Comprehensive, graphical administration tool to configure all data protection services, set properties, and manage storage. For more information, refer to the FalconStor Management Console section.

FileSafe
FileSafe is a software application that protects your data by backing up files and folders to another location. Data is backed up to a location called a repository. The repository can be local (on your computer or on a USB device), remote (on a shared network server or NAS resource), or on a storage server where the FileSafe Server option is licensed and enabled. For more information, see the FileSafe User Guide.

GUID
The Globally Unique Identifier (GUID) is a unique 128-bit number that is used to identify a particular component, application, file, database entry, and/or user.

Host Zone
Usually a Fibre Channel zone that is comprised of an application server's initiator port and a CDP/NSS target port. For more information, refer to Zoning.

Host-based protection
Host-based protection refers to DiskSafe and FileSafe, where the locally attached disk is mirrored to a CDP-provisioned disk with data protection services.

HotZone
A CDP/NSS option that automatically re-maps data from frequently used areas of disks to higher performance storage devices in the infrastructure, resulting in enhanced read performance for the application accessing the storage. This feature is not available for CDP connector appliances. For additional information, refer to the HotZone section.

HyperTrac
The HyperTrac Backup Accelerator (HyperTrac) works in conjunction with CDP and NSS to increase tape backup speed, eliminate backup windows, and offload processing from application servers. HyperTrac for VMware enhances the functionality of VMware Consolidated Backup (VCB) by allowing TimeViews of the production virtual disk to be used as the source of the VCB snapshot. Unlike the traditional HyperTrac model, the TimeViews are not mounted directly to the storage server. HyperTrac for Hyper-V enables mounting production TimeViews for backup via Microsoft Hyper-V machines. For more information, refer to the HyperTrac User Guide.

IPMI
Intelligent Platform Management Interface (IPMI) is a hardware-level interface that monitors various hardware functions on a server.

iSCSI Client
iSCSI clients are the file and application servers that access CDP/NSS SAN Resources using the iSCSI protocol.

iSCSI Target
A storage target for the client.

Logical Resources
Logically mapped devices on the storage server. They are comprised of physical storage devices, known as Physical Resources.

MIB
A Management Information Base (MIB) is an ASCII text file that describes SNMP network elements as a list of data objects. It is a database of information, laid out in a tree structure with MIB objects as the leaf nodes, that you can query from an SNMP agent. The purpose of the MIB is to translate numerical strings into human-readable text. When an SNMP device sends a Trap, it identifies each data object in the message with a number string called an object identifier (OID). Refer to the SNMP Integration section for additional information.

MicroScan
FalconStor MicroScan is a patented de-duplication technology that minimizes the amount of data transferred during replication by eliminating inefficiencies at the application and file system layer. Data changes are replicated at the smallest possible level of granularity, reducing bandwidth and associated storage costs for disaster recovery (DR), or any time data is replicated from one source to another. MicroScan is an integral part of the replication option for CDP and NSS solutions.

NPIV
N_Port ID Virtualization (NPIV) allows multiple N_Port IDs to share a single physical N_Port; this allows an initiator, a target, and a standby to occupy the same physical port. This is not supported when using a non-NPIV driver. All Fibre Channel switches must support NPIV, as NPIV (point-to-point) mode is enabled by default.

NIC Port Bonding
NIC Port Bonding is a load-balancing/path-redundancy feature (available for Linux) that enables your storage server to load-balance network traffic across two or more network connections, creating redundant data paths throughout the network.

OID
The Object Identifier (OID) is a unique number written as a sequence of sub-identifiers in decimal notation, for example, 1.3.6.1.4.1.2681.1.2.102. It uniquely identifies data objects that are the subjects of an SNMP message. When your SNMP device sends a Trap or a GetResponse, it transmits a series of OIDs, paired with their current values.

Prefetch
A feature that enables pre-fetching of data for clients. This allows clients to read ahead consecutively, which can result in improved performance because the storage server will have the data ready from the anticipatory read as soon as the next request is received from the client. This reduces the latency of the command and improves the sequential read benchmarks in most cases. For additional information, refer to the Prefetch section.

Read Cache
An intelligent, policy-driven, disk-based staging mechanism that automatically remaps "hot" (frequently used) areas of disks to high-speed storage devices, such as RAM disks, NVRAM, or Solid State Disks (SSDs). For additional information, refer to the Read Cache section.

RecoverTrac
FalconStor RecoverTrac is a disaster recovery tool that maps servers, applications, networking, storage, and failover procedures from source sites to recovery sites, automating the logistics involved in resuming business operations at the recovery site. While RecoverTrac extends the functionality of FalconStor CDP/NSS solutions, the application operates in all environments, independent of server, network, application, or storage vendor.

Recovery Agents
FalconStor recovery agents (available from the FalconStor Customer Service portal) offer recovery solutions for your database and messaging systems. FalconStor Message Recovery for Microsoft Exchange (MRE) and Message Recovery for Lotus Notes/Domino (MRN) expedite mailbox/message recovery by enabling IT administrators to quickly recover individual mailboxes from point-in-time snapshot images of their messaging server. FalconStor Database Recovery for Microsoft SQL Server expedites database recovery by enabling IT administrators to quickly recover a database from point-in-time snapshot images of their SQL database. For details, refer to the Recovery Agents User Guide.

Replication
The process by which a SAN Resource maintains a copy of itself either locally or at a remote site. The data is copied, distributed, and then synchronized to ensure consistency between the redundant resources. The SAN Resource being replicated is known as the primary disk. The changed data is transmitted from the primary to the replica disk so that they are synchronized. Under normal operation, clients do not have access to the replica disk. The replication option works with both CDP and NSS solutions to replicate data over any existing infrastructure. In addition, it can be used for site migration, remote site consolidation for backup, and similar tasks. Using a TOTALLY Open storage-centric approach, replication is configured and managed independently of servers, so it integrates with any operating system or application for cost-effective disaster recovery (DR). For additional information, refer to the Replication section.

Replication Scan
A scan comparing the primary and replica disks for differences. If the primary and replica disks are known to have similar data (bit by bit, not file by file), then a manual scan is recommended. The initial scan is automatically triggered and all subsequent scans must be manually triggered (right-click on a device and select Replication > Scan).

Retention
TimeMark retention allows you to set TimeMark preservation patterns. The TimeMark retention schedule can be set by right-clicking on the server and selecting Properties --> TimeMark Maintenance tab.

SafeCache
This option offers improved performance by using high-speed storage devices as a persistent (non-volatile) read/write cache. The persistent cache can be mirrored for added protection. This option is not available for CDP connector appliances. For additional information, refer to the SafeCache section.

SAN Resource
Provides storage for file and application servers (called SAN Clients). When a SAN Resource is assigned to a SAN client, a virtual adapter is defined for that client. The SAN Resource is assigned a virtual SCSI ID on the virtual adapter. This mimics the configuration of actual SCSI storage devices and adapters, allowing the operating system and applications to treat them like any other SCSI device. For information on creating a SAN resource, refer to the Create SAN resources - Procedures section.

Service-Enabled Device
Service-Enabled Devices are hard drives or RAID LUNs with existing data that can be accessed by CDP or NSS to make use of all key CDP/NSS storage services (mirroring, snapshot, etc.). This can be done without any migration/copying, without any modification of data, and with minimal downtime. Service-Enabled Devices are used to migrate existing drives into the SAN.

SMI-S
The FalconStor Storage Management Initiative Specification (SMI-S) Provider for CDP and NSS storage enables CDP and NSS users to have central management of multi-vendor storage networks for more efficient utilization. CDP and NSS solutions use the SMI-S standard to expose the storage systems they manage to the SMI-S Client. A typical SMI-S Client can discover FalconStor devices through this interface. It utilizes CIM-XML, which is a WBEM protocol that uses XML over HTTP to exchange Common Information Model (CIM) information. For additional information, refer to the SMI-S Integration section.

Snapshot
A snapshot of an entire device captures data at any given moment in time and moves it to either tape or another storage medium, while allowing data to be written to the device. You can perform a snapshot to capture a point-in-time image of your data volumes (virtual drives) using minimal storage space. For additional information, refer to the Snapshot Resource section.

Snapshot Agent
Application-aware Snapshot Agents provide complete data protection for active databases such as Microsoft SQL Server, Oracle, Sybase, and DB2, and messaging applications such as Microsoft Exchange and Lotus Notes. These agents work with both CDP and NSS to ensure that snapshots are taken with full transactional integrity. For details, refer to the Snapshot Agents User Guide.

SNMP
Simple Network Management Protocol (SNMP) is an Internet-standard protocol for managing devices on IP networks. For additional information, refer to the SNMP Integration section.

Storage Cluster Interlink Port
A physical connection between two servers. Version 7.0 and later requires a Storage Cluster Interlink Port for failover setup. For additional information regarding the Storage Cluster Interlink, refer to the Failover section.

Thin Provisioning
For virtual resources, Thin Provisioning allows you to use your storage space more efficiently by allocating a minimum amount of space for the virtual resource. Then, when usage thresholds are met, additional storage is allocated as necessary. Thin Provisioning may be applied to primary storage, replica storage (at the disaster recovery [DR] site), and mirrored storage. For additional information, refer to the Thin devices section.

TimeMark
TimeMark technology works with CDP and NSS to enable you to create scheduled and on-demand point-in-time delta snapshot copies of data volumes. TimeMark includes the FalconStor TimeView feature, which creates an accessible, mountable image of any snapshot. This provides a tool to freely create multiple, instantaneous virtual copies of an active data set. The TimeView images can be assigned to multiple application servers with read/write access for concurrent, independent processing, while the original data set is actively accessed and updated by the primary application server. For additional information, refer to the TimeMarks and CDP section.

TimeView
An extension of the TimeMark option that allows you to mount a virtual drive as of a specific point in time. For additional information, refer to the Recover data using the TimeView feature section.

Trap
Asynchronous notification from agent to manager. Includes the current sysUpTime value, an OID identifying the type of trap, and optional variable bindings. Destination addressing for traps is determined in an application-specific manner, typically through trap configuration variables in the MIB. For additional information, refer to the SNMP Traps section.

Trigger
An event that tells your CDP/NSS-enabled application when it is time to perform a snapshot of a virtual device. FalconStor's Replication, TimeMark/CDP, Snapshot Copy, and ZeroImpact Backup options all trigger snapshots.

VAAI
A VAAI-aware storage device is able to understand commands from hypervisor resources and perform storage functions. CDP/NSS version 7.00 supports VAAI when assigning NSS LUNs to a vSphere 5 host. VAAI is automatically enabled; no additional configuration is required.

WWN Zoning
Zoning which uses the WWPN in the configuration. The WWPN remains the same in the zoning configuration regardless of the port location. If a port fails, you can simply move the cable from the failed port to another valid port without having to reconfigure the zoning.

ZeroImpact Backup Enabler
Allows you to perform a local raw device tape backup/restore of your virtual drives. This eliminates the need for the application server to play a role in backup and restore operations.

Web Setup
Once you have physically connected the appliance, powered it on, and completed the following steps via the Web Setup installation and server setup, you are ready to begin using your CDP or NSS storage server. This step may have already been completed for you. Refer to the Software Quick Start Guide for details regarding each of the following steps:

1. Configure the Appliance
The first time you connect, you will be asked to:
- Select a language. (If the wrong language is selected, click your browser's back button or go to //10.0.0.2/language.php to return to the language selection page.)
- Read and agree to the FalconStor End User License Agreement.
- (Storage appliances only) Configure your RAID system.
- Enter the network configuration for your appliance.

2. Manage License Keys
Enter the server license keys.

3. Check for Software Updates
Click the Check for Updates button to check for updated agent software. Click the Download Updates button to download the selected client software.

4. Install Management Software and Guides

5. Install Client Software and Guides

6. Configure Advanced Features
Advanced features allow you to add storage capacity via Fibre Channel or iSCSI, or disable web services if your business policy requires web services to be disabled.

If you encounter any problems while configuring your appliance, contact FalconStor technical support via the web at: www.falconstor.com/supportrequest.
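If you want to confirm from a workstation that the appliance's web service is reachable before opening Web Setup, a quick check with a standard tool such as curl can help. This is only an illustration; 10.0.0.2 is the example appliance address used above, so substitute your own address:

    curl -I http://10.0.0.2/
    # An HTTP response header (for example, "HTTP/1.1 200 OK") indicates that the
    # appliance web service is up and the setup pages are being served.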


FalconStor Management Console


The FalconStor Management Console is the administration tool for the storage network. It is a Java application that can be used on a variety of platforms and allows administrators to create, configure, manage, and monitor the storage resources and services on the storage server network as well as run/view reports, enter licensing information, and add/delete administrators. The FalconStor Management Console software can be installed on each machine connected to a storage server. The console is also available via download from your storage server appliance. If you cannot install the FalconStor Management Console on every client, you can launch a web-based version of the console from your browser and enter the IP address of the CDP/NSS server.

Launch the console


To launch an installed version of the console in a Microsoft Windows environment, select Start --> Programs --> FalconStor --> IPStor --> IPStor Console. In a Linux or other UNIX environment, execute the following:
cd /usr/local/ipstorconsole
./ipstorconsole
Notes:
- If your screen resolution is 640 x 480, the splash screen may be cut off while the console loads.
- The console might not launch on certain systems with display settings configured to use 16 colors.
- The console needs to be run from a directory with write access. Otherwise, the host name information and message log file retrieved from the storage server cannot be saved to the local directory. As a result, the console will display event messages as numbers and console options will not be saved.
- You must be signed on as the local administrator of the machine on which you are installing the Windows console package.

To launch a web-based version of the console, open a browser from any machine and enter the IP address of the CDP/NSS server (for example: http://10.0.0.2) and the console will launch. If you have Web Setup, select the Go button next to Install Management Software and Guides and click the Launch Console link. In the future, to skip going through Web Setup, open a browser from any machine and enter the IP address of the storage server followed by :81, for example: http://10.0.0.2:81/ to launch the console. The computer running the browser must have Java Runtime Environment (JRE) version 1.6 installed.
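Because the web-based console depends on the JRE, it can be worth confirming the Java version on the machine running the browser before launching it. As a simple check from a command prompt or shell:

    java -version
    # The reported version should be 1.6 (or a later release that provides the 1.6 runtime).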
Connect to your storage server


1. Discover all storage servers on your storage subnet by selecting Tools --> Discover.
2. Connect to a storage server.
To connect to an existing storage server, right-click on it and select Connect, then enter a valid user name and password (both are case sensitive). To connect to a server that is not listed, right-click on the Servers object, select Add, and enter the name of the server along with a valid user name and password.
When you connect to a server for the first time, a configuration wizard is launched to guide you through the setup process. You may see a dialog box notifying you of new devices attached to the server; it lists all devices that are either unassigned or reserved. At this point you can either prepare the device (reserve it for a virtual or Service-Enabled Device) and/or create a logical resource.
Once you are connected to a server, the server icon changes to show that you are connected. If you connect to a server that is part of a failover configuration, you will automatically be connected to both servers.
Note: The FalconStor Management Console remembers the servers to which the console has successfully connected. When you close and restart the console, the servers display in the tree but you are not automatically connected to them.

Configure your server using the configuration wizard


The configuration wizard guides you through entering license keycodes and setting up your network configuration. If this is the first time you are connecting to your CDP or NSS server, you will see one of the following:

You will only see step 4 if IPStor detected IPMI when the server booted up.

Step 1: Enter license keys


Click the Add button and enter your keycodes. Be sure to enter keycodes for any options you have purchased. Each FalconStor option requires that a keycode be entered before the option can be configured and used. Refer to Manage licenses for more information. Note: After completing the configuration wizard, if you need to add license keycodes, you can right-click on your CDP/NSS appliance and select License.

Step 2: Setup network


Enter information about your network configuration. If you need to change storage server IP addresses, you must make these changes using System Maintenance --> Network Configuration in the console. Using yast or other third-party utilities will not update the information correctly. Refer to Network configuration for more information. Note: After completing the configuration wizard, if you need to change these settings, you can right-click on your CDP/NSS appliance and select System Maintenance --> Network Configuration.

Step 3: Set hostname


Enter a valid name for your storage appliance. Valid characters are letters, numbers, underscore, or dash.

You will need to restart the server if you change the hostname. Note: Do not change the hostname if you are using block devices. If you do, all block devices claimed by CDP/NSS will be marked offline and seen as foreign devices.
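If you script appliance deployments, you can check a proposed hostname against the allowed character set (letters, numbers, underscore, or dash) before entering it in the wizard. A minimal sketch using standard shell tools; "nss-appliance_01" is a made-up example name:

    echo "nss-appliance_01" | grep -Eq '^[A-Za-z0-9_-]+$' && echo "hostname OK" || echo "invalid characters"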

FalconStor Management Console user interface


The FalconStor Management Console displays the configuration for the storage servers on your storage network. The information is organized in a familiar Explorer-like tree view.
The tree allows you to navigate the various storage servers and their configuration objects. You can expand or collapse the display to show only the information that you wish to view. To expand an item that is collapsed, click the + symbol next to the item. To collapse an item, click the - symbol next to the item. Double-clicking on the item will also toggle the expanded/collapsed view of the item. You need to connect to a server before you can expand it.
When you highlight an object in the tree, the right-hand pane contains detailed information about the object. You can select one of the tabs for more information.
The console log located at the bottom of the window displays information about the local version of the console. The log features a drop-down box that allows you to see activity from this console session.

Search for objects in the tree

The console has a search feature that helps you find any physical device, virtual device, or client on any storage server. To search: 1. Highlight a storage server in the tree. 2. Select Edit menu --> Find. 3. Select the type of object to search for and the search criteria. Once you select an object type, a list of existing objects appears. If you highlight one, you will be taken directly to that object in the tree. Alternatively, you can type the full name, ID, ACSL (adapter, channel, SCSI, LUN), or GUID (Globally Unique Identifier). Once you click the Search button, you will be taken directly to that object in the tree.

Storage server status and configuration

The console displays the configuration and status of the storage server. Configuration information includes the version of the CDP or NSS software and base operating system, the type and number of processors, amount of physical and swappable memory, supported protocols, and network adapter information. The Event Log tab displays system events and errors.

Alerts

The console displays all critical alerts upon login to the server. Select the Display only the new alerts next time option if you only want to see new critical alerts the next time you log in. Selecting this option indicates acknowledgement of the alerts.

Discover storage servers


CDP/NSS can automatically discover all storage servers on your storage subnet. Storage servers running CDP or NSS will be recognized as storage servers. To discover the servers:
1. Select Tools --> Discover.
2. Enter your network criteria.

Protect your storage servers configuration


FalconStor provides several convenient ways to protect your CDP or NSS configuration. This is useful for disaster recovery purposes, such as if a storage server is out of commission but you have the storage disks and want to use them to build a new storage server. You should create a configuration repository even on a standalone server.

Continuously save configuration
You can create a configuration repository that maintains a continuously updated version of your storage system configuration. The status of the configuration repository is displayed on the console under the General tab. In the case of a failure of the configuration repository, the console displays the time of the failure along with the last successful update. This feature works seamlessly with the FalconStor Failover option to provide business continuity in the event that a storage server fails. For additional redundancy, the configuration repository can be mirrored to another disk.
To create a configuration repository, make sure there is at least 10 GB of available space.
1. Highlight a storage server in the tree.
2. Right-click on the server and select Options --> Enable Configuration Repository.
3. Select the physical device(s) for the Configuration Repository resource.
4. Confirm all information and click Finish to create the repository.
You will now see a Configuration Repository object in the tree under Logical Resources. To mirror the repository, right-click on it and select Mirror --> Add.
Refer to the CDP/NSS System Recovery Guide for details on repairing or replacing your CDP/NSS server.

Manage licenses
To license CDP/NSS and its options, make sure you have obtained your CDP/NSS keycode(s) from FalconStor or its representatives. Once you have the license keycodes, follow the steps below: 1. In the console, right-click on the server and select License.

The License Summary window is informational only and displays a list of the options supported for this server. You can enter keycodes for your purchased options on the Keycodes Detail window.
2. Press the Add button on the Keycodes Detail window to enter each keycode.
Note: If multiple administrators are logged into a storage server at the same time, license changes made from one console will take effect in the other consoles only when the administrator disconnects and then reconnects to the server.
3. If your licenses have not been registered yet, click the Register button on the Keycodes Detail window. You can register online if you have an Internet connection. To register offline, you must save the registration information to a file on your hard drive and then email it to FalconStor's registration server. When you receive a reply, save the attachment to your hard drive and send it to the registration server to complete the registration.

Note: Registration information file names can only use alphanumeric characters and must have a .dat extension. You cannot use a single digit as the name. For example, company1.dat is valid (1.dat is not valid).

Set server properties (Updated May 2012)


To set properties for a specific server: 1. Right-click on the server and select Properties.

The tabs you see will depend upon your storage server configuration.
2. If you have multiple NICs (network interface cards) in your server, enter the IP addresses using the Server IP Addresses tab. If the first IP address stops responding, the CDP/NSS clients will attempt to communicate with the server using the other IP addresses you have entered, in the order they are listed.
Notes:
- In order for the clients to successfully use an alternate IP address, your subnet must be set properly so that the subnet itself can redirect traffic to the proper alternate adapter.
- You cannot assign two or more NICs within the same subnet.
- The client becomes aware of the multiple IP addresses when it initially connects to the server. Therefore, if you add additional IP addresses in the console while the client is running, you must rescan devices (Windows clients) or restart the client (Linux/Unix clients) to make the client aware of these IP addresses.

3. On the Activity Database Maintenance tab, indicate how often the SAN data should be purged.

The Activity Log is a database that tracks all system activity, including all data read, data written, number of read commands, write commands, number of errors etc. This information is used to generate SAN information for the CDP/ NSS reports.

4. On the SNMP Maintenance tab, indicate which types of messages should be sent as traps to your SNMP manager.

Five levels are available:
- None (default): No messages will be sent.
- Critical: Only critical errors that stop the system from operating properly will be sent.
- Error: Errors (failures such as a resource is not available or an operation has failed) and critical errors will be sent.
- Warning: Warnings (something occurred that may require maintenance or corrective action), errors, and critical errors will be sent.
- Informational: Informational messages, errors, warnings, and critical error messages will be sent.
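The traps are received by whatever SNMP manager you already run. As a minimal illustration only, a Linux host running the net-snmp snmptrapd daemon could log incoming traps with a configuration like the following; the community string "public" is just an example and must match what your SNMP manager is configured to accept:

    # /etc/snmp/snmptrapd.conf
    authCommunity log,execute,net public

    # Run the daemon in the foreground and log to standard output to watch traps arrive:
    snmptrapd -f -Lo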

5. On the iSCSI tab, set the iSCSI portal that your system should use as default when creating an iSCSI target.

If you have multiple NICs, when you create an iSCSI target, this IP address will be selected by default for you. 6. If necessary, change settings for mirror resynchronization and replication on the Performance tab.

The settings on this tab affect system performance. The defaults should be optimal for most configurations. You should only need to change the settings for special situations, such as if your mirror is remotely located.

Mirror Synchronization Throttle - Set the default value for individual mirror devices to use (since throttle is disabled by default for individual mirror devices). Each mirror device will be able to synchronize up to the value set here (in KB per second). If you select 0 (zero), all mirror devices will use their own throttle value (if set); otherwise there is no limit for the device.
Select the Start initial synchronization when mirror is added option to have synchronization begin immediately for newly created mirrors. The synchronize out-of-sync mirror policy does not apply in this case. If the Start initial synchronization when mirror is added option is not selected, the mirror begins synchronization based on the configured policy.

Synchronize Out-of-Sync Mirrors - Determine how often the system should check and attempt to resynchronize active out-of-sync mirrors, how often it should retry synchronization if it fails to complete, and whether or not to include replica mirrors. These settings are only used for active mirrors. If a mirror is suspended because the lag time exceeds the acceptable limit, that resynchronization policy applies instead. This is the mirror policy that applies to all individual mirrors and contains the following settings:
- Check and synchronize out-of-sync mirrors every [n] [unit] - Check the mirror status at this interval and trigger a mirror synchronization when the mirror is not synchronized.
- Up to [n] mirrors at each interval - Indicate the number of mirrors that can be synchronized concurrently. This rule does not apply to user-initiated operations, such as synchronize, resume, and rebuild. This rule also does not apply when the Start initial synchronization when mirror is added option is enabled.
- Retry synchronization for each resource up to [n] times when synchronization failed - Indicate the number of times that an out-of-sync mirror will retry synchronization at the interval set by the Check and synchronize out-of-sync mirrors every rule. Once the mirror fails to synchronize the specified number of times, a manual synchronization is required to initiate mirror synchronization again.
- Include replica mirrors in the automatic synchronization process - Enable this option to include replica mirrors in the automatic synchronization process. This option is disabled by default, which means the mirror policy will not apply to any replica device with a mirror on the server. In this case, a manual synchronization is required to re-sync the replica mirror. When this option is enabled, the mirror policies apply to the replica mirror.

Replication Throttle - Click the Configure Throttle button to launch the Configure Target Throttle screen, allowing you to set, modify, or delete replication throttle settings. Refer to Set the replication throttle for additional information.

Enable MicroScan - MicroScan analyzes each replication block on the fly during replication and transmits only the changed sections of the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. The global MicroScan option sets a default in all replication setup wizards. MicroScan can still be enabled/disabled for each individual replication via the wizard regardless of the global MicroScan setting.

7. Optional: Select the Auto Save Config tab and enter information to save your storage server system configuration for journaling purposes. This option cannot be used to restore your system configuration. Refer to the CDP/NSS System Recovery Guide for information regarding restoring your system.

You can set your system to automatically replicate your system configuration to an FTP server on a regular basis. Auto Save takes a point-in-time snapshot of the storage server configuration prior to replication.

The target server you specify in the Ftp Server Name field must have an FTP server installed and enabled. The Target Directory is the directory on the FTP server where the files will be stored. The directory name you enter here (such as ipstorconfig) is a directory on the FTP server (for example ftp\ipstorconfig). You should not enter an absolute path like c:\ipstorconfig.

The Username is the user that the system will log in as. You must create this user on the FTP site. This user must have read/write access to the directory named here.

In the Interval field, determine how often to replicate the configuration. Depending upon how frequently you make configuration changes to CDP/NSS, set the interval accordingly. In the Number of Copies field, enter the maximum number of copies to keep. The oldest copy will be deleted as each new copy is added.
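Before enabling Auto Save, it can be useful to confirm that the FTP account and target directory are reachable from your network. A minimal sketch using the standard curl client; the host ftp.example.com, the user ipstorcfg, and the directory ipstorconfig are placeholders for your own values:

    # List the target directory to confirm the account can log in and read it
    curl --user ipstorcfg:password "ftp://ftp.example.com/ipstorconfig/"

    # Upload a small test file to confirm the account has write access
    curl --user ipstorcfg:password -T test.txt "ftp://ftp.example.com/ipstorconfig/"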

8. On the Location tab, you can enter a specific physical location of the machine. You can also select an image (smaller than 500 KB) to identify the server location. Once the location information is saved, the new tab displays in the FalconStor Management Console for that server.
9. On the TimeMark Maintenance tab, you can set a global reclamation policy.

Manage accounts
Only the root user can manage users and groups or reset passwords. You will need to add an account for each person who will have administrative rights in CDP/NSS. You will also need to add a user account for clients that will be accessing storage resources from a host-based application (such as FalconStor DiskSafe or FileSafe). To make account management easier, users can be grouped together and handled simultaneously. To manage users and groups: 1. Right-click on the server and select Accounts.

All existing users and administrators are listed on the Users tab, and all existing groups are listed on the Groups tab.

2. Select the appropriate option.
Note: You cannot manage accounts or reset a password when a server is in failover state.

Add a user
To add a user:
1. Click the Add button.

2. Enter the name for this user. The username must adhere to the naming convention of the operating system running on your storage server. Refer to your operating system's documentation for naming restrictions.
3. Enter a password for this user and then re-enter it in the Confirm Password field. For iSCSI clients and host-based applications, the password must be between 12 and 16 characters. The password is case sensitive.
4. Specify the type of account. Users and administrators have different levels of permissions in CDP/NSS.
IPStor Admins can perform any CDP/NSS operation other than managing accounts. They are also authorized for CDP/NSS client authentication.
IPStor Users can manage virtual devices assigned to them and can allocate space from the storage pool(s) assigned to them. They can also create new SAN resources, clients, and groups, as well as assign resources to clients and join resources to groups, as long as they are authorized. IPStor Users can only view resources to which they are assigned. IPStor Users are also authorized for CDP/NSS client authentication. Any time an IPStor User creates a new SAN resource, client, or group, access rights are automatically granted for the user to that object.
5. (IPStor Users only) If desired, specify a quota. Quotas enable the administrator to place manageable restrictions on storage usage as well as storage used by groups, users, and/or hosts.


A user quota limits how much space is allocated to this user for auto-expansion. Resources managed by this user can only auto-expand if the user's quota has not been reached. The quota also limits how much space a host-based application, such as DiskSafe, can allocate.
6. Click OK to save the information.
Add a group
To add a group:
1. Select the Groups tab.
2. Click the Add button.

3. Enter a name for the group. You cannot have any spaces or special characters in the name.
4. If desired, specify a quota. The quota limits how much space is allocated to each user in this group. The group quota overrides any individual user quota that may be set.
5. Click OK to save the information.
Add users to groups
Each user can belong to only one group. You can add users to groups on both the Users and Groups tabs. On the Users tab, you can highlight a single user and click the Membership button to add the user to a group.


On the Groups tab, you can highlight a group and click the Membership button to add multiple users to that group.


Set a quota

You can set a quota for a user on the Users tab and you can set a quota for a group on the Groups tab. The quota limits how much space is allocated to each user. If a user is in a group, the group quota will override the user quota.

Reset a password

To change a password, select Reset Password. You will need to enter a new password and then re-type it to confirm. You cannot change the root user's password from this dialog; use the Change Password option described below.

Change the root user's password


This option lets you change the root user's CDP/NSS password if you are currently connected to a server.
1. Right-click on the server and select Change Password.

2. Enter your old password, the new one, and then re-enter it to confirm.


Check connectivity between the server and console


You can check whether the console can successfully connect to the storage server by right-clicking on a server and selecting Connectivity Test. By running this test, you can determine whether your network connectivity is good. If it is not, the test will fail. You should then check with your network administrator to determine the problem.

Add an iSCSI User or Mutual CHAP User


As a root user, you can add, delete, or reset the CHAP secret of an iSCSI user or a mutual CHAP user. Other users (i.e., an IPStor administrator or IPStor user) can also change the CHAP secret of an iSCSI user if they know the original CHAP secret.
To add an iSCSI user or mutual CHAP user from an iSCSI server:
1. Right-click on the server and select iSCSI Users from the menu.
2. Select Users. The iSCSI User Management screen displays.

From this screen, you can select an existing user from the list to delete the user or reset the CHAP secret.
3. Click the Add button to add a new iSCSI user.


The iSCSI User add dialog screen displays.

4. Enter a unique user name for the new iSCSI user.
5. Enter and confirm the password and click OK.
The Mutual CHAP level of security allows the target and the initiator to authenticate each other. A separate secret is set for each target and for each initiator in the storage area network (SAN). You can select Mutual CHAP Users (right-click on the iSCSI server --> iSCSI Users --> Mutual CHAP User) to manage iSCSI mutual CHAP users. The iSCSI Mutual CHAP User Management screen displays, allowing you to delete users or reset the mutual CHAP secret.
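As an illustration of how these credentials are used on the client side, the following is a minimal sketch for a Linux initiator using open-iscsi; the target name, portal address, and credentials are hypothetical and must match the values defined on your storage server:
# Discover targets presented by the storage server
iscsiadm -m discovery -t sendtargets -p 10.0.0.50
# Configure CHAP with the iSCSI user and secret created above
iscsiadm -m node -T iqn.2000-03.com.falconstor:example-target -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2000-03.com.falconstor:example-target -o update -n node.session.auth.username -v iscsiuser1
iscsiadm -m node -T iqn.2000-03.com.falconstor:example-target -o update -n node.session.auth.password -v secret1234567
# Log in to the target
iscsiadm -m node -T iqn.2000-03.com.falconstor:example-target -p 10.0.0.50 --login
Note that the CHAP secret must be between 12 and 16 characters, as described above.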


Apply software patch updates (updated April 2012)


This section describes how to apply server and console patches in a standalone or failover environment.

Server patches
You can apply maintenance patches to your storage server through the console.
Note: Server upgrade patches must be applied directly on the server and cannot be applied or rolled back via the console.
Apply patch - standalone server
To apply a patch on a standalone server:
1. Download the patch onto the computer where the console is installed or a location accessible from that machine. Patches can be downloaded from the FalconStor customer support portal (support.falconstor.com).
2. Highlight a storage server in the tree.
3. Select Tools menu --> Add Patch.
4. Confirm that you want to continue.
5. Locate the patch file and click Open. The patch will be copied to the server and installed.
6. Check the Event Log to confirm that the patch installed successfully.
Apply patch - failover configuration
To apply a patch on servers in a failover configuration and avoid unnecessary failover:
1. Make sure both servers are healthy and are not in a failover state.
2. From the console, log in to server B and select Failover --> Start Takeover A to manually take over server A. Verify that takeover has completed successfully before continuing.
3. Apply the patch on server A. Refer to the standalone server procedure above for details.
4. Check the Event Log to confirm that the patch installed successfully.
5. Make sure server A is ready by checking the result of the command sms -v.
6. From the console, on server B, select Failover --> Stop Takeover A to fail back.


7. Repeat steps 2-6, substituting B for A, to apply the patch on server B.
Rollback patch
To remove (uninstall) a patch and restore the original files:
1. Highlight a storage server in the tree.
2. Select Tools menu --> Rollback Patch.
3. Confirm that you want to continue.
4. Select the patch and click OK.
5. Check the Event Log to confirm that the patch uninstalled successfully.

Console patches
Windows console
You need an account with administrator privileges to install the full Windows console package.
1. Close any console that is running.
2. Run the Windows executable file to uninstall the current version of the console. You might need to select the Run as administrator option to launch the program, depending on your login account.
3. Re-run the Windows executable file to install the new version.
Java console
1. Close any console that is running.
2. Go to the Bin sub-directory of the console installation folder.
3. Copy the existing console jar file to another folder and add the date to the name so that the file can be used as a backup.
4. Copy the new jar file to the Bin directory, making sure it has the same name as the existing jar file.
Java Web Start console
1. Close any console that is running.
2. Apply the server patch containing the new Web Start jar files on the server side.


Perform system maintenance


The FalconStor Management Console gives you a convenient way to perform system maintenance for your storage server.
Note: The system maintenance options are hardware-dependent. Refer to your hardware documentation for specific information.
Network configuration
If you need to change storage server IP addresses, you must make these changes using Network Configuration. Using YaST or other third-party utilities will not update the information correctly.
1. Right-click on a server and select System Maintenance --> Network Configuration.

Domain name - Internal domain name.
Append suffix to DNS lookup - If a domain name is entered, it will be appended to the machine name for name resolution.
DNS - IP address of your DNS server.
Default gateway - IP address of your default gateway.
NIC - List of Ethernet cards in the server. Select the NIC you wish to modify from the drop-down list.
Enable Telnet - Enable/disable the ability to Telnet into the server.
Enable FTP - Enable/disable the ability to FTP into the server. The storage server must have the "pure-ftpd" package installed in order to use FTP.


Allow root to log in to telnet session - Log in to your telnet session using root.
Network Time Protocol - Allows you to keep the date and time of your storage server in sync with Internet NTP servers. Click Config NTP to enter the IP addresses of up to five Internet NTP servers.
2. Click Config to configure each Ethernet card.

If you select Static, you must add addresses and net masks.
MTU - Set the maximum transfer unit of each IP packet. If your card supports it, set this value to 9000 for jumbo frames.
Note: If the MTU is changed from 9000 to 1500, a performance drop will occur. If you then change the MTU back to 9000, the performance will not increase until the server is restarted.
Set hostname
Right-click on a server and select System Maintenance --> Set Hostname to change your hostname. You must restart the server if you change the hostname.
Note: Do not change the hostname if you are using block devices. If you do, all block devices claimed by CDP/NSS will be marked offline and seen as foreign devices.
Restart IPStor
Right-click on a server and select System Maintenance --> Restart IPStor to restart the server processes.
Restart network
Right-click on a server and select System Maintenance --> Restart Network to restart your local network configuration.


Reboot
Right-click on a server and select System Maintenance --> Reboot to reboot your server.
Halt
Right-click on a server and select System Maintenance --> Halt to turn off the server without restarting it.
IPMI
Intelligent Platform Management Interface (IPMI) is a hardware-level interface that monitors various hardware functions on a server. If CDP/NSS detects IPMI when the server boots up, you will see several IPMI options on the System Maintenance --> IPMI sub-menu: Monitor and Filter.
Monitor - Displays the hardware information that is presented to CDP/NSS. Information is updated every five minutes, but you can click the Refresh button to update more frequently. You will see a red warning icon in the first column if there is a problem with a component. In addition, a red exclamation mark will appear on the server and an Alert tab will appear with details about the error.

Filter - You can filter out components you do not want to monitor. This may be useful for hardware you do not care about or for spurious errors, such as when the hardware being monitored is not actually present. You must enter the Name of the component being monitored exactly as it appears in the hardware monitor above.


Physical Resources

Physical resources are the actual devices attached to this storage server. Supported SCSI adapters include SAS, FC, FCoE, and iSCSI. The SCSI Adapters tab displays the adapters attached to this server and the SCSI Devices tab displays the SCSI devices attached to this server. These devices can include hard disks, tape libraries, and RAID cabinets. For each device, the tab displays the SCSI address (comprised of adapter number, channel number, SCSI ID, and LUN) of the device, along with the disk size (used and available). If you are using FalconStor's Multipathing, you will see entries for the alternate paths as well.
The Storage Pools tab displays a list of storage pools that have been defined, including the total size and number of devices in each storage pool.
The Persistent Binding tab displays the binding of each storage port to its unique SCSI ID.
When you highlight a physical device, the Category field in the right-hand pane describes how the device is being used. Possible values are:
Reserved for virtual device - A hard disk that has not yet been assigned to a SAN resource or Snapshot area.
Used by virtual device(s) - A hard disk that is being used by one or more SAN resources or Snapshot areas.
Reserved for Service-Enabled Device - A hard disk with existing data that has not yet been assigned to a SAN resource.
Used by Service-Enabled Device - A hard disk with existing data that has been assigned to a SAN resource.
Unassigned - A physical resource that has not been reserved yet.


Not available for IPStor - A miscellaneous SCSI device that is not used by the storage server (such as a scanner or CD-ROM).
System - A hard disk where system partitions exist and are mounted (i.e., swap file, file system installed, etc.).

Physical resource icons


The following table describes the icons that are used to describe physical resources:
Icon - Description
D - The port is both an initiator AND a target.
T - This is a target port.
I - This is an initiator port.
V - This disk has been virtualized or is reserved for a virtual disk.
S - This is a Service-Enabled Device or is reserved for a Service-Enabled Device.
a - This device is used in the logical resource that is currently highlighted in the tree.
D - An adapter using NPIV when it is enabled in dual-mode.
Failover and Cross-mirror icons:
A physical disk appearing in color indicates that it is local to this server. The V indicates that the disk is virtualized for this server. A Q on the icon would indicate that this disk is the quorum disk that contains the configuration repository.
A physical disk appearing in black and white indicates that it is a remote physical disk. The F indicates that the disk is a foreign disk.

Prepare devices to become logical resources


You can use one of the FalconStor disk preparation options to change the category of a physical device. This is important to do if you want to create a logical resource using a device that is currently unassigned.


The storage server detects new devices when you connect to it. When they are detected, you will see a dialog box notifying you of the new devices. At this point you can highlight a device and press the Prepare Disk button to prepare it. The Physical Devices Preparation Wizard will help you to virtualize, service-enable, unassign, or import physical devices.

At any time, you can prepare a single unassigned device by doing the following: Highlight the device, right-click, select Properties and select the device category. (You can find all unassigned devices under the Physical Resources/Adapters node of the tree view.) For multiple unassigned devices, highlight Physical Resources, right-click and select Prepare Disks. This launches a wizard that allows you to virtualize, unassign, or import multiple devices at the same time.

Rename a physical device


When a device is renamed on a server in a failover pair, the device is renamed on the partner server as well. However, it is not possible to rename a device when the server has failed over to its partner.
1. To rename a device, right-click on the device and select Rename.

2. Type the new name and press Enter.


Use IDE drives with CDP/NSS


If you have an IDE drive that you want to virtualize and use as storage, you must create a block device from it. To do this:
1. Right-click on Block Devices (under Physical Devices) and select Create Disk.
2. Select the device and specify a SCSI ID and LUN for it. The defaults are the next available SCSI ID and LUN.
3. Click OK when done.
This virtualizes the device. When it is finished, you will see the device listed under Block Devices. You can now create logical resources from this device. Unlike a regular SCSI virtual device, block devices can be deleted.
Note: Do not change the hostname if you are using block devices. If you do, all block devices claimed by CDP/NSS will be marked offline and seen as foreign devices.

Rescan adapters
1. To rescan all adapters, right-click on Physical Resources and select Rescan.


If you only want to scan a specific adapter, right-click on that adapter and select Rescan.

If you want to discover new devices without scanning existing devices, click the Discover New Devices radio button and then check the Discover new devices only without scanning existing devices check box. You can then specify additional scan details.
2. Determine what you want to rescan. If you are discovering new devices, set the range of adapters, SCSI IDs, and LUNs that you want to scan.


Use Report LUNs - The system sends a SCSI request to LUN 0 and asks for a list of LUNs. Note that this SCSI command is not supported by all devices. (If VSA is enabled and the actual LUN is beyond 256, you will need to use this option to discover them.)
LUN Range - It is only necessary to use the LUN range if the Use Report LUNs option does not work for your adapter.
Stop scan when a LUN without a device is encountered - This option (used with LUN Range) will scan LUNs sequentially and then stop after the last LUN is found. Use this option only if all of your LUNs are sequential.
Auto detect FC HBA SCSI ID - Select this option to enable auto detection of SCSI IDs with persistent binding. This will scan QLogic HBAs to discover devices beyond the scan range specified above.
Read partition from inactive path when all the paths are inactive - Select this option to force a status check from a path that is not in use.
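If you are unsure whether a particular device supports the SCSI REPORT LUNS command, one way to check is from a shell on the storage server with the sg_luns utility from the sg3_utils package (assuming it is installed; the device node /dev/sg0 below is only an example):
# Ask the device at LUN 0 for its list of LUNs
sg_luns /dev/sg0
If the command returns a LUN list, the device supports REPORT LUNS; if it fails, use the LUN Range option instead.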

Import a disk
You can import a foreign disk into a CDP or NSS appliance. A foreign disk is a virtualized physical device containing FalconStor logical resources previously set up on a different storage server. You might need to do this if a storage server is damaged and you want to import the server's disks to another storage server.
When you right-click on a disk that CDP/NSS recognizes as foreign and select the Import option, the disk's partition table is scanned and an attempt is made to reconstruct the virtual drive out of all of the segments. If the virtual drive was constructed from multiple disks, you can highlight Physical Resources, right-click, and select Prepare Disks. This launches a wizard that allows you to import multiple disks at the same time. As each drive is imported, the drive is marked offline because it has not yet found all of the segments. Once all of the disks that were part of the virtual drive have been imported, the virtual drive is re-constructed and is marked online.
Importing a disk preserves the data that was on the disk but does not preserve the client assignments. Therefore, after importing, you must reassign clients to the resource.
Notes:
The GUID (Global Unique Identifier) is the permanent identifier for each virtual device. When you import a disk, the virtual ID, such as SANDisk-00002, may be different from the original server. Therefore, you should use the GUID to identify the disk.
If you are importing a disk that can be seen by other storage servers, you should perform a rescan before importing. Otherwise, you may have to rescan after performing the import.


Test physical device throughput


You can test the following for your physical devices:
Sequential throughput
Random throughput
Sequential I/O rate
Random I/O rate
Latency

To check the throughput for a device:
1. Right-click on the device (under Physical Resources).
2. Select Test from the menu.
The system will test the device and then display the throughput results on a new Throughput tab.

Manage multiple paths to a device


SCSI aliasing works with the FalconStor Multipathing option to eliminate a potential point of failure in your storage network by providing multiple paths to your storage devices using multiple Fibre Channel switches and/or multiple adapters and/or storage devices with multiple controllers. In a multiple path configuration, CDP/NSS automatically detects all paths to the storage devices. If one path fails, CDP/NSS automatically switches to another. Refer to the Multipathing chapter for more information.

Repair paths to a device


Repair is the process of removing one or more physical device paths from the system and then adding them back. Repair may be necessary when a device is not responsive, which can occur if a storage controller has been reconfigured or if a standby alias path is offline/disconnected. If a path is faulty, adding it back may not be possible. To repair paths to a device:


1. Right-click on the device and select Repair.

If all paths are online, the following message will be displayed instead: "There are no physical device paths that can be repaired."
2. Select the path to the device that needs to be repaired.
If the path is still missing after the repair, or the entire physical device is gone from the console, the path could not be repaired. You should investigate the cause, correct the problem, and then rescan adapters with the Discover New Devices option.


Logical Resources

Logical resources are all of the resources defined on the storage server, including SAN resources and groups.
SAN Resources
SAN logical resources consist of sets of storage blocks from one or more physical hard disk drives. This allows the creation of logical resources that contain a portion of a larger physical disk device or an aggregation of multiple physical disk devices.
Clients do not gain access to physical resources; they only have access to logical resources. This means that an administrator must configure each physical resource into one or more logical resources so that they can be assigned to the clients.


When you highlight a SAN resource, you will see a small icon next to each device that is being used by the resource. In addition, when you highlight a SAN resource, you will see a GUID field in the right-hand pane.

The GUID (Global Unique Identifier) is the permanent identifier for this virtual device. The virtual ID, such as SANDisk-00002, is not. You should make note of the GUID because, in the event of a disaster, this identifier will be important if you need to rebuild your system and import this disk.
Groups
Groups are multiple drives (virtual drives and Service-Enabled drives) that are assembled together for SafeCache or snapshot synchronization purposes. For example, when one drive in the group is to be replicated or backed up, the entire group will be snapped together to maintain a consistent image.

Logical resource icons


The following table describes the icons that are used to show the status of logical resources:
Icon alert / warning description
Virtual device offline (or has incomplete segments)
Mirror is out of sync
Mirror is suspended
TimeMark rollback failed
Replication failed
One or more supporting resources is not accessible (SafeCache, CDP, Snapshot resource, HotZone, etc.)


Replica in disaster recovery state (after forcing a replication reversal)
Cross-mirror needs to be repaired on the virtual appliance
Primary replica is no longer valid as a replica
Invalid replica

Enable write caching


You can leverage a third party disk subsystem's built-in caching mechanism to improve I/O performance. Write caching allows the third party disk subsystem to utilize its internal cache to accelerate I/O. To write cache a resource, right-click on it and select Write Cache --> Enable.

Replication

The Incoming and Outgoing objects under the Replication object display information about each server that replicates to this server or receives replicated data from this server. If the server's icon is white, the partner server is "connected" or "logged in". If the icon is yellow, the partner server is "not connected" or "not logged in". When you highlight the Replication object, the right-hand pane displays a summary of replication to/from each server. For each replica disk, you can promote the replica or reverse the replication. Refer to the Replication chapter for more information about using replication.

SAN Clients

Storage Area Network (SAN) Clients are the file and application servers that utilize the storage resources via the storage server. Since SAN resources appear as locally attached SCSI devices, the applications, such as file services, databases, web and email servers, do not need to be modified to utilize the storage. On the other hand, since the storage is not locally attached, some configuration may be needed to locate and mount the required storage.
The SAN Clients access their storage resources via iSCSI initiators (for iSCSI) or HBAs (for Fibre Channel or iSCSI). The storage resources appear as locally attached devices to the SAN Clients' operating systems (Windows, Linux, Solaris, etc.) even though the devices are actually located at the storage server site.
When you highlight a specific SAN client, the right-hand pane displays the Client ID, type, and authentication status, as well as information about the client machine. The Resources tab displays a list of SAN resources that are allocated to this client. The adapter, SCSI ID, and LUN are relative to this CDP/NSS SAN client only; other clients that may have access to the SAN resource may have different adapter, SCSI ID, and LUN information.

Add a client from the FalconStor Management Console


1. In the console, right-click on SAN Clients and select Add.
2. Enter a name for the SAN Client, select the operating system, and indicate whether or not the client machine is part of a cluster.


If the client's machine name is not resolvable, you can enter an IP address and then click Find to discover the machine.
3. Determine if you want to limit the amount of space that can be automatically assigned to this client. The quota represents the total allowable space that can be allocated for all of the resources associated with this client. It is only used to restrict certain types of resources (such as Snapshot Resource and CDP Resource) that expand automatically. This prevents them from allocating storage space indefinitely. Instead, they can only expand if the total size of all the resources associated with the client does not exceed the pre-defined quota for that client.
4. Indicate if you want to enable persistent reservation. This option allows clustered SAN Clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.
Note: If you are using AIX SAN Client cluster nodes, this option should be cleared.
5. Select the client's protocol(s). If you select iSCSI, you must indicate if this is a mobile client. You will then be asked to select the initiator that this client uses and add/select users who can authenticate for this client. Refer to Add iSCSI clients for more information. If you select Fibre Channel, you will have to select WWPN initiators. You will then be asked to select Volume Set Addressing. Refer to Add Fibre Channel clients for more information.
6. Confirm all information and click Finish to add this client.

Add a client for FalconStor host applications


If you are using FalconStor client/agent software, such as snapshot agents or HyperTrac, refer to the FalconStor Intelligent Management Agent (IMA) User Guide or the appropriate agent user guide for details regarding adding clients via FalconStor Intelligent Management Agent (IMA).
For example, if you are using HyperTrac, the first time you start HyperTrac, the system scans and imports all storage servers identified by IMA/SDM or the SAN Client. These storage servers are then listed in the HyperTrac console. Alternatively, you can add a storage server directly in IMA/SDM or the SAN Client.


Change the ACSL


You can change the ACSL (adapter, channel, SCSI, LUN) for a SAN resource assigned to a SAN client if the device is not currently attached to the client. To change it, right-click on the SAN resource under the SAN Client object (you cannot do this from the SAN Resources object) and select Properties. You can enter a new adapter, SCSI ID, or LUN.
Note: For Windows clients, one SAN resource for each client must have a LUN of 0. Otherwise, the operating system will not see the devices assigned to the SAN client. In addition, for the Linux OS, the rest of the LUNs must be sequential.

Grant access to a SAN Client


By default, only the root user and IPStor admins can manage SAN resources, groups, or clients. While IPStor users can create new SAN Clients, if you want an IPStor user to manage an existing SAN Client, you must grant that user access. To do this:
1. Right-click on a SAN Client and select Access Control.
2. Select which user can manage this SAN Client.
Each SAN Client can only be assigned to one IPStor user. This user will have rights to perform any function on this SAN Client, including assigning, adding protocols, and deletion.


Console options
To set options for the console, select Tools --> Console Options.

You can make the following changes:
Remember password for session - If the console is already connected to a server, when you attempt to open a second, third, or subsequent server, the console will use the credentials that were used for the last successful connection. If this option is unchecked, you will be prompted to enter a password for every server you try to open.
Automatically time out servers after nn minute(s) - The console will collapse a server that has been idle for the number of minutes you specify. If you need to access the server again, you will have to reconnect to it. The default is 10 minutes. Enter 00 minutes to disable the timeout.
Update statistics every nn second(s) - The console will update statistics at the frequency you specify.
Automatically refresh the event log every nn second(s) - The console will update the event log at the frequency you specify, but only while you are viewing it.
Console Log Options - The console log (ipstorconsole.log) is kept on the local machine and stores information about the local version of the console. The console log is displayed at the very bottom of the console screen. The options affect how information for each console session will be maintained:
Overwrite log file - Overwrite the information from the last console session when you start a new session.
Append to log file - Keep all session information.
Do not write to log file - Do not maintain a console log.


Create a custom menu


You can create a menu in the FalconStor Management Console from which you can launch external applications. This can add to the convenience of FalconStor's centralized management paradigm by allowing administrators to start all of their applications from a single place. The Custom menu will appear in your console along with the normal menu (between Tools and Help).
To create a custom menu, select Tools --> Set up Custom Menu. Then click Add and enter the information needed to launch this application.

Menu Label - The application title that will be displayed in the Custom menu.
Command - The file (usually an .exe) that launches this application.
Command Argument - An argument that will be passed to the application. If you are launching an Internet browser, this could be a URL.
Menu Icon - The graphics file that contains the icon for this application. This will be displayed in the Custom menu.
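For example, a Custom menu entry that opens the FalconStor support portal in a browser might use values like the following (the paths are illustrative only):
Menu Label: Support Portal
Command: C:\Program Files\Internet Explorer\iexplore.exe
Command Argument: http://support.falconstor.com
Menu Icon: C:\icons\support.png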


Storage Pools
A storage pool is a group of one or more physical devices. Creating a storage pool enables you to provide all of the space needed by your clients in a very efficient manner. You can create and manage storage pools in a variety of ways, including:
Tiers - Performance levels, cost, or redundancy
Device categories - Virtual, Service-Enabled
Types - Primary storage, Journal, CDR, Cache, HotZone, virtual headers, Snapshot, TimeView, and configuration
Specific application use - FalconStor DiskSafe, etc.

For example, you can classify your storage by tier (low-cost, high-performance, high-redundancy, etc.) and assign it based on these classifications. Using this example, you may want to have your business critical applications use storage from the high-redundancy or high-performance pools while having your less critical applications use storage from other pools. Storage pools work with all automatic allocation mechanisms in CDP/NSS. This capacity-on-demand functionality automatically allocates storage space from a specific pool when storage is needed for a specific use. As your storage needs grow, you can easily extend your storage capacity by adding more devices to a pool and then creating more logical resources or allocating more space to your existing resources. The additional space is immediately and seamlessly available.

Manage storage pools and the devices within storage pools


Only root users and IPStor administrators can manage storage pools. The root user and the IPStor Administrator have full privileges for storage pools. The root user or the IPStor Administrator must create the pools first; then IPStor Users can manage them. IPStor Users can create virtual devices and allocate space from the storage pools assigned to them, but they cannot create, delete, or modify storage pools. The storage pool management rights of each type of user are summarized in the table below:
Type of User            Can create/delete pools?   Can add/remove storage from pools?
Root                    Yes                        Yes
IPStor Administrator    Yes                        Yes
IPStor User             No                         No

Refer to the Account management section for additional information regarding user access rights.

Create storage pools


Physical devices must be prepared (virtualized or service-enabled) before they can be added into a storage pool. Each storage pool can only contain the same type of physical devices. Therefore, a storage pool can contain only virtualized drives or only service-enabled drives; a storage pool cannot contain mixed types. Physical devices that have been allocated for a logical resource can still be added to a storage pool.
To create a storage pool:
1. Right-click on Storage Pools and select New.

2. Enter a name for the storage pool.
3. Indicate which type of physical devices will be in this storage pool. Each storage pool can only contain the same type of physical devices.
4. Select the devices that will be assigned to this storage pool, or leave the storage pool empty for later use. Physical devices that have been allocated for any logical resource can still be added to a storage pool.
5. Click OK to create the storage pool.


Set properties for a storage pool


To set properties:
1. Right-click on a storage pool and select Properties.

On the General tab you can change the name of the storage pool and add/delete devices assigned to this storage pool.
2. Select the Type tab to designate how each storage pool should be allocated.


The type affects how each storage pool should be allocated. When you are in a CDP/NSS creation wizard, the applicable storage pool(s) will be presented for selection. However, you can still select from another storage pool type if needed.
All Types can be used for any type of resource.
Storage is the preferred storage pool to create SAN resources and their corresponding replicas.
Snapshot is the preferred storage pool for snapshot resources.
Cache is the preferred storage pool for SafeCache resources.
HotZone is the preferred storage pool for HotZone resources.
Journal is the preferred storage pool for CDP resources and CDP resource mirrors.
CDR is the preferred storage pool for continuous data replicas.
VirtualHeader is the preferred storage pool for the virtual header that is created for a Service-Enabled Device SAN resource.
Configuration is the preferred storage pool to create the configuration repository for failover.
TimeView is the preferred storage pool for TimeView images.
ThinProvisioning is the preferred storage pool for thin disks.
Allocation Block Size allows you to specify the minimum size that will be allocated when a virtual resource is created or expanded. Using this feature is highly recommended for thin disks (ThinProvisioning selected as the type for this storage pool) for several reasons. The maximum number of segments supported per virtual device is 1024. When Allocation Block Size is not enabled, thin disks are expanded in increments of 10 GB; with frequent expansion, it is easy to reach the maximum number of segments. Using Allocation Block Size with the largest block size feasible for your storage can prevent devices from reaching the maximum. In addition, larger block sizes mean more consecutive space within each block, limiting disk fragmentation and improving performance for thin disks. The default Allocation Block Size is 16 GB and the possible choices are 1, 2, 4, 8, 16, 32, 64, 128, and 256 GB.
If you enable Allocation Block Size for resources other than thin disks, Service-Enabled Devices, or any copy of a resource (replica, mirror, snapshot copy, etc.), be aware that the allocation will round up to the next multiple when you create a resource. For example, if you have the Allocation Block Size set to 16 GB and you attempt to create a 20 GB virtual device, the system will create a 32 GB device. If you do not enable Allocation Block Size, you can specify any size when creating/expanding devices. You may want to do this for disks that are not thin disks, since they do not expand as often and will rarely reach the maximum number of segments.
When specifying an Allocation Block Size, your physical disk should be evenly divisible by the number you specify so that all space can be used. For example, if you have a 500 GB disk and you select 128 GB as the block size, the system will only be able to allocate three blocks of 128 GB each (128*3=384) from that disk, because the remaining 116 GB is not enough to allocate. When you look at the Available Disk Space statistics in the console, this remaining 116 GB will be excluded.
3. Select the Tag tab to set a tag string to limit client-side applications to specific storage pools.

When an application requests storage with a specific tag string, only the storage pools with the same tag can be used. You can have your own internal application that has been programmed to use a tag.
4. Select the Security tab to designate which users and administrators can manage this storage pool.

Each storage pool can be assigned to one or more User or Group. The assigned users can create virtual devices and allocate space from the storage pools assigned to them but they cannot create, delete, or modify storage pools.


Logical Resources
Once you have physically attached your physical SCSI or Fibre Channel devices to your storage server, you are ready to create Logical Resources to be used by your CDP/NSS clients. This configuration can be done entirely from the FalconStor Management Console.
Logical Resources are logically mapped devices on the storage server. They are comprised of physical storage devices, known as Physical Resources. Physical resources are the actual SCSI and/or Fibre Channel devices (such as hard disks, tape drives, and RAID cabinets) attached to the server. Clients do not have access to physical resources; they have access only to Logical Resources. This means that physical resources must be defined as Logical Resources first, and then assigned to the clients so they can access them.
SAN resources provide storage for file and application servers (called SAN Clients). When a SAN resource is assigned to a SAN client, a virtual adapter is defined for that client. The SAN resource is assigned a virtual SCSI ID on the virtual adapter. This mimics the configuration of actual SCSI storage devices and adapters, allowing the operating system and applications to treat them like any other SCSI device.
Understanding how to create and manage Logical Resources is critical to a successful CDP/NSS storage network. Please read this section carefully before creating and assigning Logical Resources.


Types of SAN resources


SAN resources can be of the following types: virtual devices and Service-Enabled Devices.

Virtual devices
IPStor technology gives CDP and NSS the ability to aggregate multiple physical storage devices (such as JBODs and RAIDs) of various interface protocols (such as SCSI or Fibre Channel) into logical storage pools. From these storage pools, virtual devices can be created and provisioned to application servers and end users. This is called storage virtualization.
Virtual devices are defined as sets of storage blocks from one or more physical hard disk drives. This allows the creation of virtual devices that can be a portion of a larger physical disk drive, or an aggregation of multiple physical disk drives. Virtual devices offer the added capability of disk expansion: additional storage blocks can be appended to the end of existing virtual devices without erasing the data on the disk.
Virtual devices can only be assembled from hard disk storage. Virtualization does not work for CD-ROM, tape, libraries, or removable media. When a virtual device is allocated to an application server, the server thinks that an actual SCSI storage device has been physically plugged into it. Virtual devices are assigned to virtual adapter 0 (zero) when mapped to a client. If there are more than 15 virtual devices, a new adapter will be defined.
Virtualization examples
The following diagrams show how physical disks can be mapped into virtual devices.

(Diagram: combining disks. Sectors from multiple physical disks - adapter 1, SCSI ID 3, sectors 0-9999 and adapter 1, SCSI ID 4, sectors 0-9999 - are mapped into one virtual device spanning sectors 0-19999 at adapter 0, SCSI ID 1. The virtual device's SCSI ID can be any value and its adapter number does not need to match the physical adapters.)


The diagram above shows a virtual device being created out of two physical disks. This allows you to create very large virtual devices for application servers with large storage requirements. If the storage device needs to grow, additional physical disks may be added to increase the size of a virtual device. Note that this will require that the client application server resize the partition and file system on the virtual device.

(Diagram: splitting a disk. A single physical device - adapter 2, SCSI ID 3 - is mapped to multiple virtual devices: sectors 0-4999 become one virtual disk at adapter 1, SCSI ID 5, and sectors 5000-9999 become another virtual disk at adapter 1, SCSI ID 6, each presented as sectors 0-4999. The virtual disks' SCSI IDs can be any value and the adapter numbers do not need to match.)

The example above shows a single physical disk split into two virtual devices. This is useful when a single large device exists, such as a RAID, which could be shared among multiple client application servers. Virtual devices can be created using various combining and splitting methods, although you will probably not create them in this manner in the beginning. You may end up with devices like this after growing virtual devices over time.

Thin devices
Thin Provisioning allows storage space to be assigned to clients dynamically, on a just-enough and just-in-time basis, based on need. This avoids under-utilization of storage by applications while allowing for expansion in the long-term. The maximum size of a disk (virtual SAN resource) with Thin Provisioning enabled is limited to 67,108,596 MB. You can expand a thin disk up to the maximum size of 67,108,596 MB. When expanded, the mirror on it automatically expands also. A replica on a thin disk will be able to use space on other virtualized devices as long as there is available space. If space is not available for expansion, the Thin Provisioned disk on primary will be prevented from expanding and a message will display on the console indicating why expansion is not possible. The minimum permissible size of a thin disk is 10 GB. Once the threshold is met, the thin disk expands in 10 GB increments.


With Thin Provisioning, a single pool of storage can be provisioned to multiple client hosts. Each client sees the full size of its provisioned disk while the actual amount of storage used is much smaller. Because so little space is actually being used, Thin Provisioning allows resources to be over-allocated, meaning that more storage can be provisioned to hosts than actually exists.

Because each client sees the full size of its provisioned disk, Thin Provisioning is the ideal solution for users of legacy databases and operating systems that cannot handle dynamic disk expansion. The mirror of a disk with Thin Provisioning enabled is another disk with Thin Provisioning enabled. When a thin disk is expanded, the mirror also automatically expands. If the mirrored disk is offline, storage cannot be added to the thin disk manually. If the mirror is offline when the threshold is reached and automatic storage addition is about to occur, the offline mirror is removed. Storage is automatically added to the Thin Provisioned disk, but the mirror must be recreated manually. A replica on a thin disk can use space on other virtualized devices as long as space is available. If there is no space available for expansion, the thin disk on the primary will be prevented from expanding and a message will display on the console.
Note: When using Thin Provisioning, it is recommended that you create a disk with an initial size that is at least 15% of the maximum size of the disk. Some write operations, such as creating a file system in Linux, may scatter their writes across the span of a disk.


You can check the status of the thin disk from the FalconStor Management Console by highlighting the thin disk and clicking the General tab.

The usage percentage is displayed in green as long as the available sectors are greater than 120% of the threshold (in sectors). It is displayed in blue when available sectors are less than 120% of the threshold (in sectors) but still greater than the threshold (in sectors). The usage percentage is displayed in red when the available sectors are less than the threshold (in sectors).
Note: Do not perform disk defragmentation on a Thin Provisioned disk. Doing so may cause data from the used sectors of the disk to be moved into non-used sectors and result in unexpected thin-provisioned disk space increase. In fact, any disk or filesystem utility that might scan or access any unused sector could also cause a similar unexpected space usage increase.

Service-Enabled devices
Service-Enabled Devices are hard drives with existing data that can be accessed by CDP/NSS to make use of all key CDP/NSS storage services (mirroring, snapshot, etc.), without any migration/copying, without any modification of data, and with minimal downtime. Service-Enabled Devices are used to migrate existing drives into the SAN.


Because Service-Enabled Devices are preserved intact, and existing data is not moved, the devices are not virtualized and cannot be expanded. Service-Enabled Devices are all maintained in a one-to-one mapping relationship (one physical disk equals one logical device). Unlike virtual devices, they cannot be combined or split into multiple logical devices.

Create SAN resources - Procedures


SAN resources are created in the FalconStor Management Console.
Note: After you make any configuration changes, you may need to rescan or restart the client in order for the changes to take effect. After you create a new virtual device, assign it to a client, and restart the client (or rescan), you will need to write a signature, create a partition, and format the drive so that the client can use it.
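As a minimal sketch of that last step on a Linux SAN client (the device name /dev/sdb and mount point are hypothetical; substitute the device that corresponds to the newly assigned resource):
# Label the disk (writes a signature), create one partition, and format it
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary 0% 100%
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/san
mount /dev/sdb1 /mnt/san
On a Windows client, the equivalent steps are performed with Disk Management (initialize the disk, create a volume, and format it).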

Prepare devices to become SAN resources


You can use one of FalconStor's disk preparation options to change the category of a device. This is important if you want to create a logical resource using a device that is currently unassigned. CDP and NSS appliances detect new devices as you connect to them (or when you execute the Rescan command). When new devices are detected, a dialog box displays notifying you of the discovered devices. At this point you can highlight a device and press the Prepare Disk button to prepare it.
At any time, you can prepare a single unassigned device by following the steps below:
Highlight the device and right-click.
Select Properties.
Select the device category. (You can find all unassigned devices under the Physical Resources/Adapters node of the tree view.)
For multiple unassigned devices, highlight Physical Resources, right-click, and select Prepare Disks. This launches a wizard that allows you to virtualize, unassign, or import multiple devices at the same time.

Create a virtual device SAN resource


You can create a virtual device SAN resource by following the steps below. Each storage server supports a maximum of 1024 SAN resources.
1. Right-click on SAN Resources and select New.


2. Select Virtual Device.

3. Select the storage pool or physical device(s) from which to create this SAN resource.

You can create a SAN resource from any single storage pool. Once the resource is created from a storage pool, additional space (automatic or manual expansion) can only be allocated from the same storage pool. You can select List All to see all storage pools, if needed.

Depending upon the resource type, you may have the option to select Use Thin Provisioning for more efficient space allocation.
4. Select the Use Thin Provisioning checkbox to allocate a minimum amount of space for a virtual resource. When usage thresholds are met, additional storage is allocated as necessary.

5. Specify the fully allocated size of the resource to be created. For NSS, the default initial size is 1 GB and the default allocation is 10 GB. For CDP, the default initial size is 16 GB and the default allocation is 16 GB.
A disk with Thin Provisioning enabled can be configured to replicate to a SAN resource or to another disk with Thin Provisioning enabled. From the client side, it appears that the full disk size is available. Thin Provisioning is supported for the following resource types:
SAN Virtual
SAN Virtual Replica
SAN resources can replicate to a disk with Thin Provisioning as long as the size of the SAN resource is 10 GB or greater.


6. Select how you want to create the virtual device.

Custom - lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express - lets you designate how much space to allocate and then automatically creates a virtual device using an available device.
Batch - lets you create multiple SAN resources at one time. These SAN resources will all be the same size.


If you select Custom, you will see the following windows:


Select either an entirely unallocated or partially unallocated device. Only one device can be selected at a time from this dialog. To create a virtual device SAN resource from multiple physical devices, you will need to add the devices one at a time. After selecting the parameters for the first device, you will have the option to add more devices.

Indicate how much space to allocate from this device.

Click Add More if you want to add another physical device to this SAN resource. If you select to add more devices, you will go back to the physical device selection screen where you can select another device.


If you select Batch, you will see a window similar to the following:

Indicate how to name each resource. The SAN Resource Prefix is combined with the starting number to form the name of each SAN resource. You can deselect the Use default ID for Starting Number option to restart numbering from one. In the Resource Size field, indicate how much space to allocate for each resource. Indicate how many SAN resources to create in the Number of Resources field.


7. (Express and Custom only) Enter a name for the new SAN resource.

The Express screen is shown above and the Custom screen is shown below:

Note: The name is not case sensitive. The Set this as the resource name (not prefix) option does not append the virtual ID number to the name.


8. Confirm that all information is correct and then click Finish to create the virtual device SAN resource.
9. (Express and Custom only) Indicate if you would like to assign the new SAN resource to a client.

If you select Yes, the Assign a SAN Resource Wizard will be launched.
Note: After you assign the SAN resource to a client, you may need to restart the client. You will also need to write a signature, create a partition, and format the drive so that the client can use it.


Add virtual disks for data storage


The FalconStor CDP and NSS Virtual Appliance supports up to 10 TB of space for storage virtualization, depending upon the storage source. Before you create the virtual disks for the virtualization storage, you should know the block size of the datastore volume and the maximum size of one virtual disk, which is controlled by the volume block size. It is recommended that you select an 8 MB block size when creating a VMFS datastore on your VMware ESX servers. If you create a virtual disk that exceeds the maximum size supported by the volume where it is located, an "Insufficient disk space on datastore" error will display. You can resolve the error by changing the disk size to one supported by the volume block size.
Volume Block Size    Maximum size of one virtual disk
1 MB                 256 GB
2 MB                 512 GB
4 MB                 1024 GB
8 MB                 2048 GB
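If you have shell access to the ESX host, one way to confirm a datastore's block size and maximum file size is the vmkfstools query option (the datastore name below is only an example):
# Print VMFS attributes, including block size, for a datastore
vmkfstools -Ph /vmfs/volumes/datastore1
Otherwise, you can check the block size from the vSphere Client as described below.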

You can check the block size of your volume via the VMware vSphere Client:
1. Launch the VMware vSphere Client, connect to the ESX server, and log into the account with root privileges.
2. Click the ESX server in the inventory and then click the Configuration setting.
3. On the Configuration tab, click Storage under the Hardware list. Then right-click one of the datastores and click Properties.


On the Volume Properties, you can see the Block Size and the Maximum File Size in the Format information.

The maximum file size depends on the block size of the VMFS storage. You cannot create a standard virtual disk that exceeds this maximum capacity; otherwise, you will encounter an "insufficient disk space on datastore" problem (even if the VMFS storage has enough capacity).
4. Add a new virtual disk by following the steps below. There is no need to power off the virtual appliance to add the new virtual disk for storage virtualization usage when using the CDP or NSS Virtual Appliance.
On the VMware vSphere Client, right-click the CDP or NSS Virtual Appliance (FalconStor-NSSVA) and then click Edit Settings.
On the Hardware tab, click the Add button.
For Select Device Type, click Hard Disk and then click Next.
For Select a Disk, click Create a new virtual disk and then click Next.
When prompted to Specify Disk Capacity, Provisioning, and Location, enter the size of the new virtual disk. Make sure the value does not exceed the maximum file size supported by the volume. Check the Support clustering features such as Fault Tolerance option to force creation of an eagerzeroedthick disk.
Notes:
Do not select the Fault Tolerance option for the guest VM's vmdks at this step.
Creating an eagerzeroedthick disk is a time-consuming process; you may experience a significant waiting period.


Browse to select a datastore with available free space on which to create the virtual disk.
Click Next to set the disk mode to Independent Persistent on Specify Advanced Options.
Review your choices and click Finish to complete the virtual disk creation settings.
In the FalconStor-NSSVA (or FalconStor-CDPVA) Virtual Machine properties, you will see New Hard Disk (adding) in the hardware list. Click OK to save the setting and the new virtual disk will be created on the datastore.
Repeat the steps above to add another virtual disk for virtualization storage.

5. Add a new device to the storage pool. The FalconStor CDP/NSS Virtual Appliance uses storage pools to manage storage usage and security. Each storage pool can contain one or more physical devices and can be used to consolidate the capacity of all storage pool members. You can also expand capacity easily by assigning a device category to a newly added virtual disk and adding it to a storage pool. All devices must be added to a storage pool for central resource management.

Refer to the Storage Pools chapter for more information regarding storage pools.


Create a SAN Client for VMware ESX server


Follow the steps below to create a SAN client for a VMware ESX server for storage resource assignment. On the VMware ESX server, log into the console and use the vmkping command to test the IP network connection from the ESX server iSCSI software adapter to the CDP or NSS virtual appliance. In addition, you can add the CDP or NSS virtual appliance IP address into the iSCSI server list of the iSCSI software adapter and check whether the iSCSI initiator name is registered on the CDP or NSS virtual appliance.

Adding the iSCSI server on the ESX Software iSCSI Adapter

1. Launch the VMware vSphere Client and connect to the ESX server.
2. Highlight the ESX server and click the Configuration tab.
3. Click Storage Adapters and right-click the device under iSCSI Software Adapter. Select the iSCSI software adapter device and then click Properties.
4. On the iSCSI initiator (device name) Properties, check the iSCSI properties and record the iSCSI name, for example: iqn.1998-01.com.vmware:esx03.
5. Click the Dynamic Discovery tab, and then click the Add button.
6. On Send Targets, enter the IP address of the virtual appliance. It will take several minutes to complete the configuration. Once the IP address has been added into the iSCSI server list, click Close.

Creating the SAN Client for the ESX server

1. Launch the FalconStor Management Console and connect to the NSS Virtual Appliance with IPStor administrator privileges.
2. Click and expand the NSSVA, then right-click the SAN Clients node and select Add.
3. The Add Client Wizard launches.
4. Click Next to start the administration task.
5. When prompted to Select Client Protocols, click to enable the iSCSI protocol and click Next. The Create Default iSCSI target option is selected by default to create an iSCSI target automatically.
6. Select Target IP by enabling one or both networks providing the iSCSI service.
7. On Set Client's initiator, the iSCSI initiator name of the ESX server displays if the iSCSI server was added successfully. Click to enable it and then click Next.
8. On Set iSCSI User Access, change it to Allow unauthenticated access or enter the CHAP secret (12 to 16 characters).

9. On Enter the Generic Client Name, enter the Client IP address using the ESX server's IP address.
10. On Select Persistent Reservation Option, keep the default setting and click Next.
11. On Add the client, review all configuration settings and then click Finish to add the SAN client into the system.
12. Expand the SAN Clients. You will see the newly created SAN client for the ESX server and the iSCSI Target. The screen below displays the SAN client and iSCSI target created for the ESX server connection.

Create a Service-Enabled Device SAN resource


1. Right-click on SAN Resources and select New.


2. Select Service Enabled Device.

3. Select how you want to create this device.


Custom lets you select one physical device to use. Batch lets you create multiple SAN resources at one time.

4. Select the device that you want to make into a Service-Enabled Device.


A list of the storage pools and physical resources that have been reserved for this purpose is displayed.

5. (Service-Enabled Devices only) Select the physical device(s) for the Service-Enabled Device's virtual header.

Even though Service-Enabled Devices are used as is, a virtual header is created on another physical device to allow CDP/NSS storage services to be supported.


6. Enter a name for the new SAN resource.

The name is not case sensitive.

7. Confirm that all of the information is correct and then click Finish to create the SAN resource.

8. Indicate if you would like to assign the new SAN resource to a client.

If you select Yes, the Assign a SAN Resource Wizard is launched.


Assign a SAN resource to one or more clients


You can assign a SAN resource to one or more clients or you can assign a client to one or more SAN resources. While the wizard is initiated differently, the outcome is the same.
Note: (For AIX Fibre Channel clients running DynaPath) If you are re-assigning SAN resources to the same LUN, you must reboot the AIX client after unassigning a SAN resource.

1. Right-click on a SAN Resources object and select Assign. The wizard can also be launched from the Create SAN Resource wizard. Alternatively, you can right-click on a SAN Client and select Assign.

2. If this server has multiple protocols enabled, select the type of client to which you will be assigning this SAN resource.

3. Select the client to be assigned and determine client access rights. If you initiated the wizard by right-clicking on a SAN Client instead of a SAN resource, you will need to select the SAN resource(s) instead.
Read/Write - Only one client can access this SAN resource at a time. All others (including Read Only) will be denied access. This is the default.

Read/Write Non-Exclusive - Two clients can connect at the same time with both read and write access. You should be careful with this option because if you have multiple clients writing to a device at the same time, you have the potential to corrupt data. This option should only be used by clustered servers, because the cluster itself prevents multiple clients from writing at the same time.

Read Only - This client will have read only access to the SAN resource. This option is useful for a read-only disk.

Notes:

In a Fibre Channel environment, we recommend that only one CDP/NSS Client be assigned to a SAN resource (with Read/Write access). If two or more Fibre Channel clients attempt to connect to the same SAN resource, error messages will be generated each time the second client attempts to connect to the resource.

If multiple Windows 2000 clients are assigned read-only access to the same virtual device, the only partition they can read from is FAT.


For Fibre Channel clients, you will see the following screen:

For iSCSI clients, you will see the following screen:

You must have already created a target for this client; refer to 'Create storage targets for the iSCSI client' in the iSCSI Clients chapter for more information. You can add any application server, even if it is currently offline.

Note: You must enter the client's name, not an IP address.


4. If this is a Fibre Channel client and you are using Multipath software (such as FalconStor DynaPath), enter the World Wide Port Name (WWPN) mapping.

This WWPN mapping is similar to Fibre Channel zoning and allows you to provide multiple paths to the storage server to limit a potential point of network failure. You can select how the client will see the virtual device in the following ways:
One to One - Limits visibility to a single pair of WWPNs. You will need to select the client's Fibre Channel initiator WWPN and the server's Fibre Channel target WWPN.

One to All - You will need to select the client's Fibre Channel initiator WWPN.

All to One - You will need to select the server's Fibre Channel target WWPN.

All to All - Creates multiple data paths. If ports are ever added to the client or server, they will automatically be included in the WWPN mapping.


5. If this is a Fibre Channel client and you selected a One to n option, select which port to use as an initiator for this client.

6. If this is a Fibre Channel client and you selected an n to One option, select which port to use as a target for this client.

7. Confirm all of the information and then click Finish to assign the SAN resource to the client(s).


The SAN resource will now appear under the SAN Client in the configuration tree view:

Discover devices from a client


Depending upon the operating system of the client, you may be required to reboot the client machine in order to be able to use the new SAN resource.

Windows clients
If an assigned SAN resource is larger than 3 GB, it cannot be properly formatted as a FAT partition.

Solaris clients
x86 vs. SPARC: If you create a virtual device and format it for Solaris x86, the device will fail to mount if you try to use that same virtual device under Solaris SPARC.

When you create a new virtual device, it needs to be labeled (the drive metrics need to be specified) and a file system has to be put on the virtual device in order to mount it. Refer to the steps below.
Note: If the drive has already been labeled and you restart the client, you do not need to run format and label it again.

Labeling a virtual disk for Solaris:

Label devices

1. From the command prompt, execute the following command:

format

A list of available disk selections will be displayed on the screen and you will be asked to specify which disk should be selected. If you are asked to specify a disk type, select Auto Configure.

2. Once the disk has been selected, you must label the disk. For Solaris 7 or 8, you will automatically be prompted to label the disk once you have selected it.

3. If you want to partition the newly formatted disk, type partition at the format prompt. You may accept the default partitions created by the format command or repartition the disk according to your needs. On Solaris x86, if the disk is not fixed with the fdisk partitions, the format command will prompt you to run fdisk first.


For further information about the format utility, refer to the man pages.

4. To exit the format utility, type quit at the format prompt.
Creating a file system on a disk managed by the CDP/NSS software:

Warning: Make sure to choose the correct raw device when creating a file system. If in doubt, check with an administrator.

1. To create a new file system, execute the following command:

newfs /dev/rdsk/c2t0d0s2
where c2t0d0s2 is the name of the raw device.

2. To create a mount point for the new file system, execute the following command:

mkdir /mnt/ipstor1
where /mnt/ipstor1 is the name of the mount point you are creating.

3. To mount the disk managed by the CDP/NSS software, execute the following command:

mount /dev/dsk/c2t0d0s2 /mnt/ipstor1


where /dev/dsk/c2t0d0s2 is the name of the block device and /mnt/ipstor1 is the name of the mount point you created. For further information, refer to the man pages.

Virtual device from a different server

When assigning a virtual device from a different storage server, the SAN client software must be restarted in order to add the virtual device to the client machine. The reason for this is that when virtual devices are added from other storage servers, a new virtual SCSI adapter gets created on the client machine. Since Solaris does not allow new adapters to be added dynamically, the CDP/NSS client software needs to be restarted in order for the new adapter and device to be added to the system.
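As an optional follow-up to step 3 above, you can make the mount persistent across reboots by adding an entry to /etc/vfstab. This is a minimal sketch only, assuming the same device and mount point as in the example; note that the SAN client software must have already presented the device at boot time for the mount to succeed:

# /etc/vfstab entry: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options
/dev/dsk/c2t0d0s2   /dev/rdsk/c2t0d0s2   /mnt/ipstor1   ufs   2   yes   -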


Expand a virtual device


Since virtual devices do not represent actual physical resources, they can be expanded as more storage is needed. The virtual device can be increased in size by adding more blocks of storage from any unallocated space on the same server. Note that you will still need to repartition the virtual device and adjust/create/resize any file systems on the partition after the virtual device is expanded. Since partition and file system formats are specific to the operating system that the client is running, the administrator must perform these tasks directly from the client. You can use tools like Partition Magic, Windows Dynamic Disk, or Veritas Volume Manager to add more drives and expand an existing volume on the fly, in real time (without application downtime).
Notes:

We do not recommend expanding a virtual device (SAN) while clients are accessing the drives. At the end of this section is important information about Windows dynamic disks, Solaris clients, AIX clients, and Fibre Channel clients.

1. Right-click on a virtual device (SAN) and select Expand.

2. Select how you want to expand the virtual device.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each. Express lets you designate how much space to allocate and then automatically creates a virtual device using an available device.

The Size to Allocate is the maximum space available on all available devices. If this drive is mirrored, this number will be half the full amount because the mirrored drive will need an equal amount of space. If you select Custom, you will see the following windows:

Select either an entirely unallocated or partially unallocated device. Only one device can be selected at a time from this dialog. To expand a virtual device from multiple physical devices, you will need to add the devices one at a time. After selecting the parameters for the first device, you will have the option to add more devices.

Indicate how much space to allocate from this device. Note: If this drive is mirrored, you can only select up to half of the available total space (from all available devices). This is because the mirrored drive will need an equal amount of space.

Click Add More if you want to select space from another physical device.


3. Confirm that all information is correct and then click Finish to expand the virtual device.

Windows dynamic disks

Expansion of dynamic disks using the Expand SAN Resource Wizard is not supported for clients using Fibre Channel. Due to the nature of dynamic disks, it is not safe to alter the size of the virtual device. However, dynamic disks do provide an alternative method to extend the dynamic volume. To extend a dynamic volume using SAN resources, use the following steps:

1. Create a new SAN resource and assign it to the CDP/NSS Client. This will become an additional disk which will be used to extend the dynamic volume.
2. Use Disk Manager to write the disk signature and upgrade the disk to "Dynamic".
3. Use Disk Manager to extend the dynamic volume. The new SAN resource should be available in the list box of the Dynamic Disk expansion dialog.

Solaris clients

The following procedure is valid for clients using Fibre Channel:

1. Use expand.sh to get the new capacity of the disk. This will automatically label the disk.
2. Use the format utility to add a new partition or, if your file system supports expansion, use your file system's utility to expand the file system.

Windows clients (Fibre Channel)

For Windows 2000 and 2003 clients, after expanding a virtual device you should rescan the physical devices from the Computer Manager to see the expanded area.

Linux clients (Fibre Channel)

1. Use rmmod qla2x00 to remove the module.
2. Use insmod qla2x00 to install the module again.
3. Use fdisk /dev/sda to create a second partition. The "a" in sda refers to the first disk. Use b, c, etc. for subsequent disks.
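The same sequence as a shell sketch; the device name /dev/sda and the module name qla2x00 are taken from the steps above and may differ on your system:

# reload the QLogic Fibre Channel driver so the expanded capacity is detected
rmmod qla2x00
insmod qla2x00
# create an additional partition in the newly added space
fdisk /dev/sda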

AIX clients

Expanding a CDP/NSS virtual disk will not change the size of the existing AIX volume group. To expand the volume group, a new disk has to be assigned and the extendvg command should be used to enlarge the size of the volume group.


Expand a Service-Enabled Device


SED expansion must be done from the storage side first. Therefore, it is recommended that you check with the storage vendor regarding how to expand the underlying physical LUNs. It is also recommended that you schedule downtime (under most scenarios) to avoid unexpected outages. A rescan from the FalconStor Management Console is necessary to reflect the new size of the disk. To rescan, right-click on the physical adapter and select Rescan.

Grant access to a SAN resource


By default, only the root user and IPStor administrators can manage SAN resources, groups, or clients. While IPStor users can create new SAN resources, if you want an IPStor user to manage an existing SAN resource, you must grant that user access. To do this:

1. Right-click on a SAN resource and select Access Control.
2. Select which user can manage this SAN resource.

Each SAN resource can only be assigned to one IPStor user. This user will have rights to perform any function on this SAN resource, including assigning, configuring for storage services, and deletion. If a SAN resource is already assigned to a client, you cannot grant access to the SAN resource if the user is not already assigned to the client. You will have to unassign the SAN resource first, change the access for both the client and the SAN resource, and then reassign the SAN resource to the client. To check whether a SAN resource has user Access Control enabled, highlight the SAN resource in the FalconStor Management Console and see if there is a value entered in the right panel.


Unassign a SAN resource from a client


1. Right-click on the client or client protocol and select Unassign.
2. Select the resource(s) and click Unassign.

Note that when you unassign a SAN resource from a connected Linux client, the client may be temporarily disconnected from the server. If the client has multiple devices offered from the same server, the temporary disconnect may affect these devices. However, once I/O activities from those devices are detected, the connection will be restored automatically and transparently.

Delete a SAN resource


1. (AIX and HP-UX clients) Prior to removing a CDP/NSS device, make sure any logical volumes that were built on top have been removed. If the CDP/NSS device is removed while logical volumes exist, you will not be able to remove the logical volumes and the system will display error messages.
2. (Windows 2000/2003, Linux, Unix clients) Disconnect/unmount the client from the SAN resource(s) prior to deleting the SAN resource.
3. Detach the SAN resource from any client that is using it. For non-Windows clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin.
4. In the Console, highlight the SAN resource, right-click, and select Delete.


CDP/NSS Server
CDP and NSS storage servers are designed to require little or no maintenance. All day-to-day CDP/NSS administrative functions can be performed through the FalconStor Management Console. However, there may be situations when direct access to the Server is required, particularly during initial setup and configuration of physical storage devices attached to the Server or for troubleshooting purposes. If access to the server's operating system is required, it can be done either directly or remotely from computers on the SAN.

Start the CDP/NSS appliance


Execute the following commands to start the processes:
cd /usr/local/ipstor/bin
./ipstor start

If the server is already started, you can use ./ipstor restart to stop and then start the processes. When you start the server, you will see the processes start:

Starting IPStor SNMPD Module [OK]
Starting IPStor Configuration Module [OK]
Starting IPStor Base Module [OK]
Starting IPStor HBA Module [OK]
Starting IPStor Authentication Module [OK]
Starting IPStor Block Device Module [OK]
Starting IPStor Server (FSNBase) Module [OK]
Starting IPStor Server (Application) Module [OK]
Starting IPStor Server (Upcall) Module [OK]
Starting IPStor Target Module [OK]
Starting IPStor iSCSI Target Module [OK]
Starting IPStor iSCSI (Daemon) [OK]
Starting IPStor Communication Module [OK]
Starting IPStor CLI Proxy Module [OK]
Starting IPStor Logger Module [OK]
Starting IPStor Central Client Manager Module [OK]
Starting IPStor Self Monitor Module [OK]
Starting IPStor Failover Module [OK]

You will only see these modules if iSCSI Target Mode is enabled.

You will only see this module if failover is enabled.


Stop the CDP/NSS appliance


Warning: Stopping the storage server processes will shut down all access to the storage resources managed by the Server. This can halt processing on your application servers, or even cause them to crash, depending upon how they behave if a disk is unexpectedly shut off or removed. It is recommended that you make sure your application servers are not accessing the storage resources when you stop the storage server processes.

To shut down the processes, execute the following commands:


cd /usr/local/ipstor/bin
./ipstor stop

You should see the processes stop.


Log into the CDP/NSS appliance


You can log in from a keyboard/display connected directly to the Server. There is no graphical user interface (GUI) shell required. By default, only the root user has login privileges to the operating system. Other IPStor administrators do not. To log in, enter the username and the password for the root user.
Warning: Do not permit storage server login access by anyone except your most trusted system or storage administrators. Administrators with login access to the server have the ability to modify, damage or destroy data managed by the server.

Use Telnet
By default, IPStor administrators do not have telnet access to the server. The server is configured to deny all TCP/IP access, including telnet. To enable telnet:

1. Install the following rpm packages on the machine:

#rpm -ivh xinetd-<version>.rpm
#rpm -ivh telnet-<version>.rpm

2. Enter the following command:
#vi /etc/xinetd.d/telnet

3. Then change disable=yes to disable=no.


4. Restart the xinetd service:

#service xinetd restart
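As a non-interactive alternative to editing the file in vi, the following is a minimal sketch that performs the same change and restart (the sed expression assumes the default formatting of /etc/xinetd.d/telnet):

# enable the telnet service and restart xinetd
sed -i 's/disable[[:space:]]*= yes/disable = no/' /etc/xinetd.d/telnet
service xinetd restart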

Linux Server only

To grant telnet access to another computer on the network:

1. Log into the Server directly (on the local console keyboard and display).
2. Edit the /etc/passwd file. For the appropriate administrator, change the line that looks like:
/dev/null:/dev/null

To:
Username:/homedirectory:/bin/bash

Where Username is an actual administrator name and homedirectory is the actual home directory.
Note: For a more secure session, you may want to use the program ssh, which is supplied by some versions of the Linux operating system. Please refer to the Linux manual that came with your operating system for more details about configuration.


Check CDP/NSS processes


You can type the following command from the shell prompt to check the IPStor Server processes:
cd /usr/local/ipstor/bin
./ipstor status

You should see something similar to the following:


You will only see the HBA Module for QLogic HBAs.

You will only see this process if iSCSI Target Mode is enabled.

You will only see this process if Failover is enabled.

Status of IPStor SNMPD Module [RUNNING]
Status of IPStor Base Module [RUNNING]
Status of IPStor HBA Module [RUNNING]
Status of IPStor Initiator Module [RUNNING]
Status of IPStor Control Module [RUNNING]
Status of IPStor Authentication Module [RUNNING]
Status of IPStor Block Device Module [RUNNING]
Status of IPStor Server (Compression) Module [RUNNING]
Status of IPStor Server (FSNBase) Module [RUNNING]
Status of IPStor Server (Upcall) Module [RUNNING]
Status of IPStor Server (Transport) [RUNNING]
Status of IPStor Server (Event) Module [RUNNING]
Status of IPStor Server (Path Manager) Module [RUNNING]
Status of IPStor Server (Application) [RUNNING]
Status of IPStor Advanced Backup Module [RUNNING]
Status of IPStor Target Module [RUNNING]
Status of IPStor iSCSI Target Module [RUNNING]
Status of IPStor iSCSI (Daemon) [RUNNING]
Status of IPStor Communication Module [RUNNING]
Status of IPStor CLI Proxy Module [RUNNING]
Status of IPStor Logger Module [RUNNING]
Status of IPStor Local Client (VBDI) [RUNNING]
Status of IPStor SANBridge Daemon [RUNNING]
Status of IPStor Anti Virus Daemon [RUNNING]
Status of IPStor Self Monitor Module [RUNNING]
Status of IPStor Failover Module [RUNNING]


The following table lists the name and description of the CDP/NSS processes.

CDP/NSS process - Description

IPStor SNMPD - An agent that processes SNMP requests and returns the information to the sender/requester, i.e. SNMP management software.
IPStor Configuration - Provides backward compatibility to the service start up script for CDP/NSS.
IPStor Base - QLogic FC initiator modules provide configuration and interaction between the CDP/NSS server and the FC environment/storage.
IPStor HBA - QLogic FC initiator modules provide configuration and interaction between the CDP/NSS server and the FC environment/storage.
IPStor Authentication - Security authentication module for connections.
IPStor Block Device - Generic block-to-SCSI driver that provides the SCSI interface for CDP/NSS to access non-SCSI block devices.
Storage server (Compression) - Compression driver; it uses the LZO open-source compression algorithm.
Storage server (FSNBase) - Provides basic IO services to the kernel modules.
Storage server (Upcall) - Handles interactions between kernel and user mode components.
Storage server (Transport) - Provides support for replication.
Storage server (Event) - Provides message logging interface to the syslog.
Storage server (Path Manager) - Manages the IO paths to the storage.
Storage server (Application) - Provides core IO services to the rest of the application.
IPStor Advanced Backup - Provides a raw device interface from a CDP/NSS virtualized disk for full, differential, or incremental backup.
IPStor Target - Provides Fibre Channel target functionality.
IPStor iSCSI Target - Provides iSCSI target functionality that links the network adapter to the I/O core.
IPStor iSCSI (Daemon) - User daemon which handles the login process to a CDP/NSS iSCSI target initiated from an iSCSI initiator.
IPStor Communication - Handles console-to-server communication and manages overall system configuration information.


IPStor CLI Proxy - Facilitates communication between the CLI utility and a CDP/NSS server.
IPStor Logger - Provides the logging function for CDP/NSS reports.
IPStor Central Client Manager - Provides integration with Central Client Manager.
IPStor Local Client (VBDI) - Block device driver that provides a block device interface to a CDP/NSS virtual device.
IPStor Self Monitor - Self-monitor process which checks the server's own health.


Check physical resources


When adding physical resources or testing to see if the physical resources are present, the following command can be executed from the shell prompt in Linux:
cat /proc/scsi/scsi

This command displays the SCSI devices attached to the IPStor Server. For example, you will see something similar to the following:

[0:0:0:0]  disk  3ware Logical Disk 0  1.2   /dev/sda
[0:0:1:0]  disk  3ware Logical Disk 1  1.2   /dev/sdb
[2:0:1:0]  disk  IBM-PSG ST318203FC !#  B324  -
[2:0:2:0]  disk  IBM-PSG ST318203FC !#  B324  -
[2:0:3:0]  disk  IBM-PSG ST318304FC !#  B335  -


Check activity statistics


There is a utility installed with CDP/NSS that allows you to view activity statistics for virtual and physical devices as well as for Fibre Channel target ports. This utility can also report pending commands for physical and virtual devices. To run this utility, type the ismon command on the storage server. This command displays all virtual resources (SAN, Snapshot, etc.) for this storage server. For each resource, the screen shows its size, amount of reads/writes in KB/second, and number of read/write commands per second. Information on the screen is automatically refreshed every five seconds. You can change the information that is displayed or the way it is sorted. The following options are available by pressing the appropriate key on your server:
Option - Description

c - Toggle incremental/cumulative mode
v - Display information for virtual devices
p - Display information for physical devices. You can launch ismon -p at the command prompt to view this information directly.
t - Display information for each FC target mode. You can launch ismon -t at the command prompt to view this information directly.
u - Page up
d - Page down
V - Sort by virtual device ID
R - Sort by KB read
r - Sort by read SCSI command
o - Sort by other SCSI command
A - Sort by ACSL
S - Sort by virtual device size
W - Sort by KB written
w - Sort by write SCSI command
E - Sort by SCSI command error
N - Sort by virtual device name
P - Sort by WWPN
m - Display Max value fields (incremental mode only)
l - Start logging
k - Reload virtual device name alias
K - Edit virtual device name alias
h - View help page
q - Quit

Remove a physical storage device from a storage server


1. Unassign and delete all SAN resources used by the physical storage device you are removing.
2. Remove all Fibre Channel zones between the storage and the storage server.
3. From the console, perform a rescan on the physical adapters.
4. After the rescan has finished and the devices are offline, right-click and select Delete.

Configure iSCSI storage


This section provides details regarding the requirements and procedures needed to prepare your CDP/NSS appliance to use dedicated iSCSI downstream storage, using either a software HBA (iscsi-initiator) or a hardware iSCSI HBA.

Configuring iSCSI software initiator


The iSCSI software initiator is provided with every CDP and NSS appliance and can be configured to use dedicated iSCSI downstream storage using the iscsiadm command line interface. The CDP/NSS iSCSI software initiator supports up to 32 initiator-target host connections. If you have n Ethernet port devices on the appliance, you are allowed 32 / n storage targets. An iSCSI hardware initiator does not have this limitation. In order for the iSCSI software initiator to be properly configured, it must be configured so it is aware of the individual interfaces it will use for connectivity to the downstream storage.

1. Create a blank default configuration for each Ethernet device on the CDP/NSS appliance using the iscsiadm command line interface.
iscsiadm -m iface -I iface-eth<device Number> -o new

For example, if you are using 4 Ethernet devices for an iSCSI connection, run the following commands:

iscsiadm -m iface -I iface-eth0 -o new
iscsiadm -m iface -I iface-eth1 -o new

iscsiadm -m iface -I iface-eth2 -o new
iscsiadm -m iface -I iface-eth3 -o new

2. Persistently bind each Ethernet device to a MAC address to ensure that the same device is always used for the iSCSI connection. To do this, use the following command:
iscsiadm -m iface -I iface-eth0 -o update -n iface.hwaddress -v <MAC address>

3. Connect each Ethernet device to the iSCSI targets.

4. Discover targets that are accessible from your initiators using the following command:
iscsiadm -m discovery -t st -p 192.168.0.254

5. Log the iSCSI initiator to the target using the following command:
iscsiadm -m node -L

6. Confirm configured Ethernet devices are associated with targets by running the following command:
iscsiadm -m session

Command output example:


tcp: [1] 192.168.0.254:3260,0 <target iqn name>

7. Perform a rescan from the FalconStor Management Console to see all of the iSCSI devices.
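Putting the steps together, the following is a minimal sketch for a two-port configuration; the interface names (eth0, eth1), the example MAC addresses, and the portal address 192.168.0.254 are assumptions and must be replaced with your own values:

# create an iface record for each Ethernet device
iscsiadm -m iface -I iface-eth0 -o new
iscsiadm -m iface -I iface-eth1 -o new

# bind each iface to the MAC address of its NIC (look it up with: cat /sys/class/net/eth0/address)
iscsiadm -m iface -I iface-eth0 -o update -n iface.hwaddress -v 00:0c:29:aa:bb:01
iscsiadm -m iface -I iface-eth1 -o update -n iface.hwaddress -v 00:0c:29:aa:bb:02

# discover the targets behind the portal, log in to all discovered nodes, and verify the sessions
iscsiadm -m discovery -t st -p 192.168.0.254
iscsiadm -m node -L all
iscsiadm -m session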

Configuring iSCSI hardware HBA


Only QLogic iSCSI HBAs are supported on a CDP or NSS appliance. The QLogic SANSurfer command line interface "iscli" allows configuration of an iSCSI HBA. The iSCSI HBAs should be configured such that they are in the same subnet as the iSCSI storage. The iSCSI hardware initiator does not require any special configuration for multipath support; you can just connect multiple HBA ports to a downstream iSCSI target. The QLogic iSCSI HBA driver handles multipath traffic.

1. Run QLogic SANSurfer CLI to display the HBA configuration menu:
/opt/QLogic_Corporation/SANsurferiCLI/iscli

Note the information displayed in the menu header for the current HBA port. By default, the configuration for HBA 0, port 0 displays.

The Port Level Info & Operations menu displays.


2. To configure the selected HBA port, select option 4 - Port Level Info & Operations. Make sure to save your changes to the previous port before selecting another port; otherwise your changes will be lost.
3. To change the IP address of the selected port, select option 2 - Port Network Setting Menu. The Port Network Setting Menu interface allows you to configure the IP address for the selected port.
4. To change target parameters for the selected HBA port, select option 7 - Target Level Info & Operations. The HBA Target Menu displays.
5. Discover iSCSI targets by selecting option 10 - Target Discovery Menu.
The HBA Target Discovery Menu displays.

6. To add a new target, select option 3 - Add a Send Target. Answer Yes when asked if you want the new send target to be auto-login and persistent; otherwise the target will not persist through reboot and will require manual intervention. Enter the IP address and indicate whether or not the send target requires CHAP authentication. Confirm the send target has been added by listing the send targets (option 1).
7. Save the configuration changes for the selected HBA port by selecting option 12 - Save changes and reset HBA.
8. Once all of the ports are configured, return to the HBA Target Menu and select option 11 - List LUN Information. All discovered and connected targets are listed. Select a target to view all LUNs associated with that target.

Uninstall a storage server


To uninstall a storage server:

1. Execute the following command:

rpm -e ipstor

This command removes the installation of the storage server but leaves the /ipstor directory and its subdirectories.


2. To remove the /ipstor directory and its subdirectories, execute the rm -rf ipstor command from the /usr/local directory.

Note: We do not recommend deleting the storage server files without using rpm -e. However, to re-install the CDP/NSS software if the storage server was removed without using the rpm utility, or to install over an existing storage server installation, the following command should be executed:

rpm -i --force <package name>

To determine the package name, check the Server directory on the CDP/NSS installation media. This will force a re-installation of the software. Refer to the rpm man pages for more information.


iSCSI Clients
iSCSI clients are the file and application servers that access CDP/NSS SAN resources using the iSCSI protocol. Just as the CDP/NSS appliance supports different types of storage devices (such as SCSI, Fibre Channel, and iSCSI), the CDP/NSS appliance is protocol-independent and supports multiple outbound target protocols, including iSCSI Target Mode. This chapter provides an overview for configuring iSCSI clients with CDP or NSS. iSCSI builds on top of the regular SCSI standard by using the IP network as the connection link between various entities involved in a configuration. iSCSI inherits many of the basic concepts of SCSI. For example, just like SCSI, the entity that makes requests is called an initiator, while the entity that responds to requests is called a target. Only an initiator can make requests to a target; not the other way around. Each entity involved, initiator or target, is uniquely identified. By default, when a client machine is added as an iSCSI client of a CDP or NSS appliance, it becomes an iSCSI initiator. The initiator name is important because it is the main identity of an iSCSI initiator. iSCSI target mode is supported for iSCSI initiators on various platforms, including Windows, VMware, and Linux. Refer to the Certification Matrix for all support information.

Requirements
The following requirements are valid for all iSCSI clients regardless of platform:

You must install an iSCSI initiator on each of your client machines. iSCSI software/hardware initiators are available from many sources and need to be installed and configured on all clients that will access shared storage. Refer to the FalconStor certification matrix for a list of supported iSCSI initiators.

You should not install any storage server client software on the client unless you are using a FalconStor snapshot agent.


Configure iSCSI clients


Refer to the following sections for an overview of configuring iSCSI clients with CDP/NSS:

Enable iSCSI
Configure your iSCSI initiator
Create storage targets for the iSCSI client
Add your iSCSI client in the FalconStor Management Console

Enable iSCSI
In order to add a client using the iSCSI protocol, you must enable iSCSI for your storage server. To do this, in the FalconStor Management Console, right-click on your storage server and select Options --> Enable iSCSI. As soon as iSCSI is enabled, a new SAN client called Everyone_iSCSI is automatically created on your storage server. This is a special SAN client that does not correspond to any specific client machine. Using this client, you can create iSCSI targets that are accessible by any iSCSI client that connects to the storage server. Before an iSCSI client can be served by a CDP or NSS appliance, the two entities need to mutually recognize each other. The following sections take you through this process.

Configure your iSCSI initiator


You need to register your iSCSI client as an initiator to your storage server. This enables the storage server to see the initiator. To do this, you must launch the iSCSI initiator on the client machine and identify your storage server as the target server. You will have to enter the IP address or name (if resolvable) of your storage server. Refer to the documentation provided by your iSCSI initiator for detailed instructions about how to do this. Afterwards, you may need to start or restart the initiator if it is a Unix client.


Add your iSCSI client in the FalconStor Management Console


1. Right-click on SAN Clients and select Add.

2. Select the protocol for the client you want to add.

Note: If you have more than one IP address, a screen will display prompting you to select the IP address that the iSCSI target will be accessible over.


3. Select the initiator that this client uses.

If the initiator does not appear, you may need to rescan. You can also manually add it, if necessary.

4. Select the initiator or select the client to have mobile access. Stationary iSCSI clients correspond to specific iSCSI client initiators, and consequently, the client machine that owns the specific initiator names. Only a client machine with a correct initiator name can connect to the storage server to access the resources assigned to this stationary client.


5. Add/select users who can authenticate for this client. The user name defaults to the initiator name. You will also need to enter the CHAP secret.

Click Advanced to add existing users to this target. For unauthenticated access, select Allow Unauthenticated Access. With unauthenticated access, the storage server will recognize the client as long as it has an authorized initiator name. With authenticated access, an additional check is added that requires the user to type in a username and password. More than one username/password pair can be assigned to the client, but they will only be useful when coming from the machine with an authorized initiator name. Select the Enable Mutual CHAP secret option if you want the target and the initiator to authenticate to each other. A separate secret will be set for each target and each initiator.


6. Enter the name of the client, select the operating system, and indicate whether or not the client machine is part of a cluster.

Note: It is very important that you enter the correct client name.

7. Click Find to locate the client machine. The IP address of the machine with the specified host name will be automatically filled in if the name is resolvable.


8. Indicate if you want to enable persistent reservation.

This option allows clustered SAN Clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.

9. Confirm all information and click Finish.



Create storage targets for the iSCSI client


1. In the FalconStor Management Console, right-click on the iSCSI protocol object for an iSCSI client and select Create Target.

2. Enter a new target name for the client or accept the default.
Note: The Microsoft iSCSI initiator can only connect to an iSCSI target if the target name is no longer than 221 characters. It will fail to connect if the target name is longer than this.

3. Select the IP address(es) of the storage server to which this client can connect.

You can select multiple IPs if your iSCSI initiator has multipathing support (such as the Microsoft initiator version 2.0). If you specified a default portal (in Server Properties), that IP address will be selected for you.

4. Select an access mode.
Read/Write - Only one client can access this SAN resource at a time. All others (including Read Only) will be denied access.

Read/Write Non-Exclusive - Two or more clients can connect at the same time with both read and write access. You should be careful with this option because if you have multiple clients writing to a device at the same time, you have the potential to corrupt data. This option should only be used by clustered servers, because the cluster itself prevents multiple clients from writing at the same time.

Read Only - This client will have read only access to the SAN resource. This option is useful for a read-only disk.

5. Select the SAN resource(s) to be assigned to the client. If you have not created any SAN resources yet, you can assign them at a later time. You may need to restart the iSCSI initiator afterwards.

6. Use the default starting LUN. Once the iSCSI target is created for a client, LUNs can be assigned under the target using available SAN resources.

7. Confirm all information and click Finish.

Restart the iSCSI initiator


In order for the client to be able to access its storage, you must restart the iSCSI initiator on Unix clients or log the client onto the target (Windows). It may be desirable to have a persistent target. Refer to the documentation provided by your iSCSI initiator for detailed instructions about how to do this.

Windows iSCSI clients and failover


The Microsoft iSCSI initiator has a default retry period of 60 seconds. You must change it to 300 seconds in order to sustain the disk for five minutes during failover so that applications will not be disrupted by temporary network problems. This setting is changed through the registry.

1. Go to Start --> Run and type regedit.

2. Find the following registry key:
HKEY_LOCAL_MACHINE\system\CurrentControlSet\control\class\4D36E97B-xxxxxxxxx\<iscsi adapter interface>\parameters\

where <iscsi adapter interface> corresponds to the adapter instance, such as 0000, 0001, ...

3. Right-click Parameters and select Export to create a backup of the parameter values.
4. Double-click MaxRequestHoldTime.
5. Pick Decimal and change the Value data to 300.
6. Click OK.
7. Double-click EnableNOPOut.
8. Set the Value data to 1.
9. Click OK.
10. Reboot Windows for the change to take effect.

Disable iSCSI
To disable iSCSI for a CDP or NSS appliance, right-click on the server node in the FalconStor Management Console and select Options --> Disable iSCSI. Note that before disabling iSCSI, all iSCSI initiators and targets for this CDP or NSS appliance must be removed.


Logs and Reports


The CDP/NSS appliance retains information about the health and behavior of the physical and virtual storage resources on the server. It maintains an Event log to record system events and errors. The appliance also maintains performance data on the individual physical storage devices and SAN resources, which can be filtered to produce various reports through the FalconStor Management Console.

Event Log
The Event Log details significant occurrences during the operation of the storage server. The Event Log can be viewed in the FalconStor Management Console when you highlight a Server in the tree and select the Event Log tab in the right pane. The following is a sample event log display. You can double-click on an event to display additional information, such as the probable cause of the error and suggested action.


The columns displayed are:

Type
I: This is an informational message. No action is required.
W: This is a warning message that states that something occurred that may require maintenance or corrective action. However, the storage server system is still operational.
E: This is an error that indicates a failure has occurred such that a resource is not available, an operation has failed, or there is a licensing violation. Corrective action should be taken to resolve the cause of the error.
C: This is a critical error that stops the system from operating properly. You will be alerted to all critical errors when you log into the server from the console.

Date - The date on which the event occurred.
Time - The time at which the event occurred.
ID - This is the message number.
Event Message - This is a text description of the event describing what has occurred.

Sort information in the Event Log


When you initially view the Event Log, all information is displayed in chronological order (most recent at the top). If you want to reverse the order (oldest at top) or change the way the information is displayed, you can click on a column heading to re-sort the information. For example, if you click on the ID heading, you can sort the events numerically. This can help you identify how often a particular event occurs.

Filter information stored in the Event Log


By default, all informational system messages, warnings, and errors are displayed. To filter the information that is displayed, right-click on a Server and select Event Log --> Filter.


In the filter dialog, you can select which message types to include, select a category of messages to display, search for records that contain or do not contain specific text, specify the maximum number of lines to display, and select a time or date range for messages.

Refresh the Event Log

You can refresh the current Event Log display by right-clicking on the Server and selecting Event Log --> Refresh.

Print/Export Event Log


You can print the Event Log to a printer or save it as a text file. These options are available (once you have displayed the Event Log) when you right-click on the Server and select the Event Log options.


Reports
FalconStor provides reports that offer a wide variety of information:

Performance and throughput - By SAN Client, SAN resource, SCSI channel, and SCSI device.
Usage/allocation - By SAN Client, SAN resource, Physical resource, and SCSI adapter.
System configuration - Physical Resources.
Replication reports - You can run an individual report for a single server or you can run a global report for multiple servers.

Individual reports are viewed from the Reports object in the console. Global replication reports are created from the Servers object.

Set report properties


Prior to setting up reports, review the properties you have set on the Activity Database Maintenance tab (right-click on the server and select Properties --> Activity Database Maintenance). The report feature polls log files to generate reports. The default maximum size of the log database is 50 MB. If the size of the log database is over 50 MB, older logs will be deleted to maintain the 50 MB limit. The default maximum number of days of log history to keep is 30. Log data older than 30 days will be deleted. If you are planning to create reports viewing data older than 30 days, you must increase this value. For example, if you generate a report viewing data for the past year but the maximum log history is set to only 30 days, you will only get 30 days of data in the report.


Create an individual report


1. To create a report, right-click on the Reports object and select New.

2. Select a report. Depending upon which report you select, additional windows appear to allow you to filter the information for the report. Descriptions of each report appear on the following pages.

3. Select the reporting schedule. Depending upon which report you select, you can select to run the report for one time only, or select a daily, weekly, or monthly date range.


To create a one-time only report, click the For One Time Only radio button and click Next.

If applicable, specify the date or date range for the report and indicate which SAN resources and Clients to use in the report. Selecting Past n days/weeks/months will create reports that generate data relative to the time of execution.
Include All SAN Resources and Clients - Includes all current and previous configurations for this server (including SAN resources and clients that you may have changed or deleted).

Include Current Active SAN Resources and Clients Only - Includes only those SAN resources and clients that are currently configured for this server.

The Delta Replication Status report has a different dialog that lets you specify a range by selecting starting and ending dates.


To create a daily report, click the Daily radio button, give the schedule a name if desired and click Next.

Set the schedule frequency, duration, start time and click Next.

To create a weekly report, click the Weekly radio button.


To create a monthly report, click the Monthly radio button.

4. If applicable, select the objects necessary to filter the information in the report. Depending upon which report you selected, you may be asked to select from a list of storage servers, SCSI adapters, SCSI devices, SAN clients, SAN resources, or replica resources.

5. If applicable, select which columns you want to display in the report and in which sort order. Depending upon which report you selected, you may be able to select which column fields to display on the report. All available fields are selected by default. You can also select whether you want the data sorted in ascending or descending order.


6. Enter a name for the report.

7. Confirm all information and click Finish to create the report.

View a report
When you create a report, it is displayed in the right-hand pane and is added beneath the Reports object in the configuration tree. Expand the Reports object to see the existing reports available for this Server.

When you select an existing report, it is displayed in the right-hand pane.

Export data from a report


You can save the data from the server and device throughput and usage reports. The data can be saved in a comma delimited (.csv) or tab delimited (.txt) text file. To export information, right-click on a report that is generated and select Export.


Schedule a report
Reports can be generated on a regular basis or as needed. Some tips to remember on scheduling are as follows:

The start and end dates in the report scheduler are inclusive.

When scheduling a monthly report, be sure to select a date that exists in every month. For example, if you select to run a report on the 31st day, the report will not be generated in months that do not have 31 days.

When scheduling a report to run every n days in selected months, the first report is always generated on the first of the month and then every n number of days after. Therefore, if you chose 30 days (n = 30) and there are not 30 days left in the month, the schedule will jump to the first day of the next month.

Some reports allow you to select a range of dates from the day you are generating the report for the past n number of days. If you select for the past one day, the report will be generated for one day.

When scheduling a daily report, it is best practice to schedule the report to run at the end of the day to capture the most data. Daily report data accumulation begins at 12:00 am and ends at the scheduled run time.


E-mail a scheduled report


Scheduled reports can be sent to one or more e-mail addresses by selecting the Email option in the Report Wizard.

Select the E-mail option in the Report Wizard to enter e-mail addresses to have the report sent. Enter e-mail addresses, separated by semi-colons. You can also have the report sent to distribution groups, as long as the E-Mail server being used supports this feature.

Report types
The FalconStor reporting feature includes many useful reports including allocation, usage, configuration, and throughput reports. A description of each report follows.

Client Throughput Report


The SAN Resource tab of the Client Throughput Report displays the amount of data read/written between this client and SAN resource. To see information for a different SAN resource, select a different Resource Name from the drop-down box in the lower right hand corner. The Data tab shows the tabular data that was used to create the graphs.


The following is a sample page from a Client Throughput Report:

Delta Replication Status Report


This report displays information about replication activity, including compression, encryption, MicroScan and protocol. It provides a centralized view for displaying real-time replication status for all disks enabled for replication. It can be generated for an individual disk, multiple disks, source server or target server, for any range of dates. This report is useful for administrators managing multiple servers that either replicate data or are the recipients of replicated data. The report can display information about existing replication configurations only or it can include information about replication configurations that have been deleted or promoted (you must select to view all replication activities in the database).


The following is a sample Delta Replication Status Report:

The Replication Status Summary tab displays a consolidated summary for multiple servers.


Disk Space Usage Report


This report shows the amount of disk space being used by each SCSI adapter. The Disk Space Usage tab displays a pie chart showing the following space usage amounts:

Storage Allocated Space
Snapshot Allocated Space
Cache Allocated Space
HotZone Allocated Space
Journal Allocated Space
CDR Allocated Space
Configuration Allocated Space
Total Free Space

A sample is displayed below:

The Data tab breaks down the disk space information for each physical device. The Utilization tab breaks down the disk space information for each logical device.


Disk Usage History Report


This report allows you to create a custom report from the statistical history information collected. You must have the statistics log enabled to generate this report. The data is logged once a day at a specified time; the data collected is a representative sample of the day. In addition, if servers are set up as a failover pair, the Disk usage history log must be enabled on both servers in order for data to be logged during failover. In a failover state, the data logging time set on the secondary server is followed. Select the reporting period range, whether to include the disk usage information from the storage pools, and the sorting criteria. A sample is displayed below:


Fibre Channel Configuration Report


This report displays information about each Fibre Channel adapter, including type, WWPN, mode (initiator vs. target), and a list of all WWPNs with client information. The following is a sample Fibre Channel Configuration Report:


Physical Resources Configuration Report


This report lists all of the physical resources on this Server, including each physical adapter and physical device. To make this report more meaningful, you can rename the physical adapter (right-click on the adapter and select Rename). For example, instead of using the default name, you can use a name such as Target Port A. The following is a sample Physical Resources Configuration Report:


Physical Resources Allocation Report


This report shows the disk space usage and layout for each physical device. The following is a sample Physical Resources Allocation Report:


Physical Resource Allocation Report


This report shows the disk space usage and layout for a specific physical device. The following is a sample Physical Resource Allocation Report:

Resource IO Activity Report


The Resource IO Activity Report shows the input and output activity of selected resources. The report options and filters allow you to select the SAN resource and client to report on within a particular date/time range. You can view a graph of the IO activity for each SAN resource including errors, delayed IO, data, and configuration information. The Data tab shows the tabular data that was used to create the graph and the Configuration Information tab shows which SAN resources and Clients were included in the report.


The following is a sample of the Resource IO Activity Report.


The Resource IO Activity - data tab report results is displayed below:

SCSI Channel Throughput Report


The SCSI Channel Throughput Report shows the data going through each SCSI channel on the Server. This report can be used to determine which SCSI bus is heavily utilized and which bus is underutilized. If a particular bus is too heavily utilized, it may be possible to move one or more devices to a different or new SCSI adapter. Some SCSI adapters have multiple channels; each channel is measured independently.


During the creation of the report, you select which SCSI channel to include in the report.

When this report is created, there are three tabs of information. The SAN Resource tab displays a graph showing the throughput of the channel. The horizontal axis displays the time segments. The vertical axis measures the total data transferred through the selected SCSI channel, in each time segment for both reads and writes. The System tab displays the CPU and memory utilization for the same time period as the main graph. The Data tab shows the tabular data that was used to create the graphs.


The following is a sample SCSI Channel Throughput Report:

SCSI Device Throughput Report


The SCSI Device Throughput Report shows the utilization of each physical SCSI storage device on the Server. This report can show if a particular device is heavily utilized or underutilized. During the creation of the report, you select which SCSI device to include. The SAN Resource tab displays a graph showing the throughput of the SCSI device. The horizontal axis displays the time segments. The vertical axis measures the total data transferred through the selected SCSI device, in each time segment, for both reads and writes. The System tab displays the CPU and memory utilization for the same time period as the main graph. The Data tab shows the tabular data that was used to create the graphs.


The following is a sample SCSI Device Throughput Report:

SAN Client Usage Distribution Report


The Read Usage tab of the SAN Client Usage Distribution Report displays a bar chart that shows the amount of data read by Clients of the current Server. The chart shows three bars, one for each Client. The Read Usage % tab displays a pie chart showing the percentage for each Client. The Write Usage tab displays a bar chart that shows the amount of data written to the Clients. The chart shows three bars, one for each active Client. The Write Usage % tab displays a pie chart showing the percentage for each Client.


The following is a sample page from a SAN Client Usage Distribution Report:

SAN Client/Resources Allocation Report


For each Client selected, this report displays information about the resources assigned to the Client, including disk space assigned, type of access, and breakdown of physical resources. The following is a sample SAN Client / Resources Allocation Report:


SAN Resources Allocation Report


This report displays information about the resources assigned to each Client, including disk space assigned, type of access, and breakdown of physical resources. The following is a sample SAN Resources Allocation Report:


SAN Resource Usage Distribution Report


The Read Usage tab of the SAN Resource Usage Distribution Report displays a bar chart that shows the amount of data read from each SAN Resource associated with the current Server. The chart shows six bars, one for each SAN Resource (in order of bytes read). The Read Usage % tab displays a pie chart showing the percentage for each SAN resource. The Write Usage tab displays a bar chart that shows the amount of data written to the SAN resources. The Write Usage % tab displays a pie chart showing the percentage for each SAN resource. The following is a sample page from a SAN Resource Usage Distribution Report:

Server Throughput and Filtered Server Throughput Report


The Server Throughput Report displays the overall throughput of the Server. The Filtered Server Throughput Report takes a subset of clients and/or SAN resources and displays the throughput of that subset. When creating the Filtered Server Throughput Report, you can specify which SAN resources and which clients to include. When these reports are created, there are several tabs of information.


The SAN Resource tab displays a graph showing the throughput of the Server. The horizontal axis displays the time segments. The vertical axis measures the total data transferred in each time segment for both reads and writes. For example:

The System tab displays the CPU and memory utilization for the same time period as the main graph:


This helps the administrator identify time periods where the load on the Server is greatest. Combined with the other reports, the specific device, client, or SAN resource that contributes to the heavy usage can be identified. The Data tab shows the tabular data that was used to create the graphs:

The Configuration Information tab shows which SAN Resources and Clients were included in the report.


Storage Pool Configuration Report


This report shows detailed Storage Pool information. You can select the information to display in each column as well as the order. This includes:
- Device Name
- SCSI Address
- Sectors
- Total (MB)
- Used (MB)
- Available (MB)

The following is a sample Storage Pool Configuration Report


User Quota Usage Report


This report shows a detailed description of the amount of space used by each of the resources from the selected users on the current server. You can select the information to display in each column, the sort order, and the user on which to report information. Report columns include:
- ID
- Resource Name
- Type
- Category
- Size (MB)

The following is a sample User Quota Usage Report.


Report types - Global replication


While you can run a replication report for a single server from the Reports object, you can also run a global report for multiple servers from the Servers object. From the Servers object, you can also create a report for a single server, consolidate existing reports from multiple servers, and create a template for future reports.

Create a global replication report


1. To run a global replication report, highlight the Servers object and select Replication Status Reports --> New.
2. When prompted, enter a date range for the report and indicate whether you want to use a saved template to create this report or if you are going to define this report as you go through the wizard.
3. Select which servers to include in the report.
4. Select which resources to include from each server. Be sure to select each primary server from the drop-down box to select resources.
5. Select what type of information you want to appear in the report and the order. Use the up/down arrows to order the information.
6. Set the sorting criteria for the columns. Click in the Sorting field to alternate between Ascending, Descending, or Not Sorted. You can also use the up/down arrows to change the sorting order of the columns.
7. Give the report a name and indicate where to save it. You can also save the current report template for future use.
8. Review all information and click Finish to create the report.

View global report


The group replication report will open in its own window. Here you can change what is displayed, change the sort order, export data, or print. Since you can select more columns than can fit on a page, when printing a report with many columns selected, it is recommended that you preview the report before printing to make sure the columns do not overlap.


Fibre Channel Target Mode


Just as CDP and NSS support different types of storage devices (such as SCSI, Fibre Channel, and iSCSI), CDP and NSS appliances are protocol-independent and support multiple outbound target protocols, including Fibre Channel target mode. CDP/NSS support for the Fibre Channel protocol allows any Fibre Channel-enabled system to take advantage of FalconStor's extensive storage capabilities, such as virtualization, mirroring, replication, NPIV, and security. Support is offered for all Fibre Channel (FC) topologies, including Point-to-Point and Fabric. This section provides configuration information for Fibre Channel target mode as well as the associated Fibre Channel SAN equipment (i.e., switch, T3, etc.). An application server can be an iSCSI Client, a Fibre Channel Client, or both. Using separate cards and switches, you can have all types of FalconStor Clients (FC or iSCSI) on your storage network.

Fibre Channel target mode is supported on various platforms, including Windows, VMware, and Linux. Refer to the FalconStor Certification Matrix at Falconstor.com for all support information.


Fibre Channel over Ethernet (FCoE)


NSS supports FCoE using QLogic QLE8152 and QLE8142 Converged Network Adapters (CNAs) along with the Cisco MDS 5010 FCoE switch. The storage server detects the installed CNAs. The CNA is seen as a regular Fibre Channel adapter with a WWPN association.

Fibre Channel target mode - configuration overview


The installation and configuration of Fibre Channel target mode involves several steps. Detailed information for each step appears in subsequent sections.
1. Prepare your Fibre Channel hardware configuration.
2. Enable Fibre Channel target mode.
3. (If applicable) Set QLogic ports to target mode.
4. (Optionally) Set up your failover configuration.
5. Add Fibre Channel clients.
6. (Optionally) Associate World Wide Port Names (WWPN) with clients.
7. Assign virtualized resources to Fibre Channel Clients.
8. View new devices.
9. (Optionally) Install and configure DynaPath.


Configure Fibre Channel hardware on server


CDP and NSS support the use of QLogic HBAs for the storage server. Refer to the certification matrix on the FalconStor website for a list of HBAs that are currently certified.

Ports
Your CDP/NSS appliance is equipped with several Fibre Channel ports. The ports that connect to storage arrays are commonly known as Initiator Ports. The ports that will interface with the backup servers' FC initiator ports will run in a different mode known as Target Mode.

Downstream Persistent binding


Persistent binding is automatically configured for all QLogic HBAs connected to storage device targets upon the discovery of the device (via a Console physical device rescan with the Discover New Devices option enabled). However, persistent binding will not be SET until the HBA is reloaded. You can reload HBAs by restarting CDP/NSS with the ipstor restart all command. After the HBA has been reloaded and the persistent binding has been set, you can change the target port ID through the console. To do this, right-click on Physical Resources or a specific adapter and select Target Port Binding.
Important: Do not change the target-port ID prior to setting persistent binding.

VSA
Volume Set Addressing (VSA) allows an increased number of LUNs to be addressed on a target port. CDP/NSS supports up to 4096 LUN assignments per VSA client when VSA is enabled. For upstream, you can set VSA for the client at the time of creation, or you can modify the setting after creation by right-clicking on the client. When VSA is enabled and the actual LUN is beyond 256, use the Report LUN option to discover them. Use the LUN range option only if Report LUN does not work for the adapter.

If new devices are assigned (from the storage server) to a VSA-enabled storage server before the CDP/NSS storage server is loaded, the newly assigned devices will not be discovered during start up. A manual rescan will be required.

The VSA option must be disabled if you are using the FalconStor Management Console to set up a near-line mirror on a version 6.0 server. This also applies if you are setting up a near-line mirror from a version 6.0 server to a later server.


Some storage devices (such as the EMC Symmetrix storage controller and older HP storage) use VSA (Volume Set Addressing) mode. This addressing method is used primarily for addressing virtual buses, targets, and LUNs.

Zoning
Two types of zoning can be configured on each switch: hard zoning (based on port number) and soft zoning (based on WWPNs). Soft zoning is implemented in software and uses the WWPN in the configuration. By using filtering implemented in Fibre Channel switches, ports cannot be seen from outside of their assigned zones. The WWPN remains the same in the zoning configuration regardless of the port location. If a port fails, you can simply move the cable from the failed port to another valid port without having to reconfigure the zoning.

CDP/NSS requires isolated zoning, where one initiator is zoned to one target, in order to minimize I/O interruptions by non-related FC activities, such as port login/out, reset, etc. With isolated zoning, each zone can contain no more than two ports or two WWPNs. This applies to both initiator zones (storage) and target zones (clients).

For example, for upstream (to client) zoning, if there are two client initiators and two CDP/NSS targets on the same FC fabric and it is desirable for all four path combinations to be established, you should use four specific zones, one for each path (Client_Init1/IPStor_Tgt1, Client_Init1/IPStor_Tgt2, Client_Init2/IPStor_Tgt1, and Client_Init2/IPStor_Tgt2). You cannot create a single zone that includes all four ports. The four-zone method is cleaner because it does not allow the two client initiators or the two CDP/NSS target ports to see each other. This eliminates potential issues such as initiators trying to log in to each other under certain conditions. The same should be done for downstream (to storage) zoning: if there are two CDP/NSS initiators and two storage targets on the same fabric, there should be four zones (IPStor_Init1/Storage_Tgt1, IPStor_Init1/Storage_Tgt2, IPStor_Init2/Storage_Tgt1, and IPStor_Init2/Storage_Tgt2). Illustrative switch zoning commands appear below.

Make sure that storage devices are not zoned directly to the clients. Instead, since CDP/NSS will be provisioning the storage to the clients, the target ports of the storage devices should be zoned to the CDP/NSS initiator ports while the clients are zoned to the CDP/NSS target ports. Make sure that, from the storage unit's management GUI (such as SANtricity or NaviSphere), the LUNs are reassigned to the storage server as the host. CDP/NSS will either virtualize these LUNs (if they are newly created without existing data) or service-enable them (which preserves existing data). CDP/NSS can then define SAN resources from these LUNs and further provision them to the clients as Service-Enabled Devices.
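As an illustration only (not FalconStor syntax), the four isolated upstream zones described above might be defined on a Brocade switch, assuming aliases named Client_Init1, Client_Init2, IPStor_Tgt1, and IPStor_Tgt2 have already been created for the corresponding WWPNs. The zone and configuration names are hypothetical, and the commands differ on other switch vendors:

zonecreate "Client_Init1_IPStor_Tgt1", "Client_Init1; IPStor_Tgt1"
zonecreate "Client_Init1_IPStor_Tgt2", "Client_Init1; IPStor_Tgt2"
zonecreate "Client_Init2_IPStor_Tgt1", "Client_Init2; IPStor_Tgt1"
zonecreate "Client_Init2_IPStor_Tgt2", "Client_Init2; IPStor_Tgt2"
cfgcreate "Fabric_Cfg", "Client_Init1_IPStor_Tgt1; Client_Init1_IPStor_Tgt2; Client_Init2_IPStor_Tgt1; Client_Init2_IPStor_Tgt2"
cfgenable "Fabric_Cfg"

Downstream zones between the CDP/NSS initiator ports and the storage target ports would be created the same way, one zone per initiator/target pair.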


Switches
For the best performance, if you are using 4 or 8 Gb switches, all of your cards should be 4 or 8 Gb cards; for example, the QLogic 2432 or 2462 4 Gb cards. Check the certification matrix on the FalconStor website to see a complete list of certified cards. NPIV (point-to-point) mode is enabled by default. Therefore, all Fibre Channel switches must support NPIV.

QLogic HBAs
Target mode settings

The table below lists the recommended settings for QLogic HBA target mode (values that differ from the default are described in the Recommendation column). These values are set in the fshba.conf file and override those set through the BIOS settings of the HBA. For initiators, consult the best practice guidelines published by the storage subsystem vendor. If an initiator is to be used by multiple storage brands, the best practice is to select a setting that best satisfies both brands. If this is not possible, consult FalconStor Technical Support for advice, or separate the conflicting storage units onto their own initiator connections.
Name                          Default              Recommendation
frame_size                    2 (2048 byte)        2 (2048 byte)
loop_reset_delay              0                    0
adapter_hard_loop_id          0                    0; set to 1 if using arbitrated loop topology
connection_option             1 (point to point)   1 (point to point); set to 0 if using arbitrated loop topology
hard_loop_id                  0                    0-124; make sure that both the primary target adapter and the secondary standby adapter (the failover pair) are set to the SAME value
fibre_channel_tape_support    0 (disable)          0 (disable)
data_rate                     2 (auto)             Based on the switch capability, set to 0 (1 Gb), 1 (2 Gb), 2 (auto), or 3 (4 Gb)
execution_throttle            255                  255
LUNs_per_target               256                  256
enable_lip_reset              1 (enable)           1 (enable)
enable_lip_full_login         1 (enable)           1 (enable)
enable_target_reset           1 (enable)           1 (enable)
login_retry_count             8                    8
port_down_retry_count         8                    8
link_down_timeout             45                   45
extended_error_logging_flag   0 (no logging)       0 (no logging)
interrupt_delay_timer         0                    0
iocb_allocation               512                  512
enable_64bit_addressing       0 (disable)          0 (disable)
fibrechannelconfirm           0 (disable)          0 (disable)
class2service                 0 (disable)          0 (disable)
acko                          0 (disable)          0 (disable)
responsetimer                 0 (disable)          0 (disable)
fastpost                      0 (disable)          0 (disable)
driverloadrisccode            1 (enable)           1 (enable)
ql2xmaxqdepth                 255                  255 (configurable via the console)
max_srbs                      4096                 4096
ql2xfailover                  0                    0
ql2xlogintimeout              20 seconds           20 seconds
ql2xretrycount                20                   20
ql2xsuspendcount              10                   10
ql2xdevflag                   0                    0
ql2xplogiabsentdevice         0 (no PLOGI)         0 (no PLOGI)
busbusytimeout                60 seconds           60 seconds
displayconfig                 1                    1
retry_gnnft                   10                   10
recoverytime                  10 seconds           10 seconds
failbacktime                  5 seconds            5 seconds
bind                          0 (by Port Name)     0 (by Port Name)
qfull_retry_count             16                   16
qfull_retry_delay             2                    2
ql2xloopupwait                10                   10

Configure Fibre Channel clients


Persistent binding
Persistent binding should be configured for all HBAs that support it. Check with the HBA vendor for specific persistent binding procedures.

Fabric topology
When setting up clients on a Fibre Channel network using a Fabric topology, we recommend that you set the topology that each HBA will use to log into your switch to Point-to-Point Only. If you are using a QLogic HBA, the topology is set through the QLogic BIOS: Configure Settings --> Extended Firmware Settings --> Connection Option: Point-to-Point Only.
Note: For QLogic HBAs, it is recommended that you hard code the link speed of the HBA to be in line with the switch speed.

DynaPath
FalconStor DynaPath must be installed on the client to support CDP/NSS storage failover. Refer to the DynaPath User Guide for details.

Linux
Native Linux DM-Multipath is recommended for Linux systems. If no version of FalconStor DynaPath exists for your Linux kernel, you must use Linux DM-Multipath. Refer to the Linux DM-Multipath Configuration with CDP/NSS Best Practice Guide.

VMware
VMware clients may require configuration modifications to be supported. Refer to Knowledge Base article number 663 for details.

HP-UX 11iv3
The native multipathing in HP-UX 11iv3 can survive CDP/NSS storage failover without DynaPath. However, the "Transient time period" value should be extended if the failover time takes more than 60 seconds. To adjust the "Transient time period" value, follow the procedure below:

1. Run ioscan -m dsf to get the persistent DSF (disk number) for the assigned device.
   # ioscan -m dsf
   Persistent DSF        Legacy DSF(s)
   ========================================
   /dev/rdisk/disk5      /dev/rdsk/c9t0d0
                         /dev/rdsk/c7t0d0
                         /dev/rdsk/c17t0d0
                         /dev/rdsk/c11t0d0

2. Run scsimgr get_info -D /dev/rdisk/disk# | grep "Transient time period" to see the timeout value.
   # scsimgr get_info -D /dev/rdisk/disk5 | grep "Transient time period"
   Transient time period = 60

3. Run scsimgr set_attr -D /dev/rdisk/disk# -a transient_secs=<timeout value> to set the desired timeout value.
   # scsimgr set_attr -D /dev/rdisk/disk5 -a transient_secs=120
   Value of attribute transient_secs set successfully

4. Once the desired value is tested, run scsimgr save_attr -D /dev/rdisk/disk# -a transient_secs=<timeout value> to save the value.
   # scsimgr save_attr -D /dev/rdisk/disk5 -a transient_secs=120
   Value of attribute transient_secs saved successfully
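If you are relying on Linux DM-Multipath rather than DynaPath, you can confirm that the CDP/NSS virtual devices and all of their paths are visible on the client before and after a failover test. This is a generic DM-Multipath command, not a FalconStor-specific one; refer to the Linux DM-Multipath Configuration with CDP/NSS Best Practice Guide for the multipath.conf settings to use:

# multipath -ll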


Enable Fibre Channel target mode


To enable Fibre Channel Target Mode:
1. In the console, highlight the storage server that has the FC HBAs.
2. Right-click on the server and select Options --> Enable FC Target Mode.
An Everyone_FC client will be created under SAN Clients. This is a generic client that you can assign to all (or some) of your SAN resources. It allows any WWPN not already associated with a Fibre Channel client to have read/write non-exclusive access to any SAN resources assigned to Everyone.

Disable Fibre Channel target mode


To disable Fibre Channel Target Mode:
1. Unassign all resources from the Fibre Channel client.
2. Remove the Fibre Channel client.
3. Switch all targets to initiator mode.
4. Disable FC mode by right-clicking on the server and selecting Options --> Disable FC Target Mode.
5. Run the ipstor stop all command to stop the server processes.
6. Power off the server. Optional: remove the FC cards.
7. Run the ipstor configtgt command and select q for no Fibre Channel support.

Verify the Fibre Channel WWPN


The World Wide Port Name (WWPN) must be unique for the Fibre Channel initiator, target, and the client initiator. To verify, right-click on the server and select Verify FC WWPN. If duplicate WWPNs are found, a message displays advising you to check your Fibre Channel configuration to avoid data corruption.


Set QLogic ports to target mode


By default, all QLogic point-to-point ports are set to initiator mode, which means they will initiate requests rather than receive them. Determine which ports you want to use in target mode and set them to become target ports so that they can receive requests from your Fibre Channel Clients. It is recommended that you have at least four Fibre Channel ports per server in initiator mode, one of which is attached to your storage device. You need to switch one of those initiators into target mode so your clients will be able to see the storage server. You will then need to select the equivalent adapter on the Secondary server and switch it to target mode.
Note: If a port is in initiator mode and has devices attached to it, that port cannot be set for target mode.

To set a port:
1. In the FalconStor Management Console, expand Physical Resources.
2. Right-click on an HBA and select Options --> Enable Target Mode. You will get a Loop Up message on your storage server if the port has successfully been placed in target mode.
3. When done, make a note of all of your WWPNs. It may be convenient for you to highlight your server and take a screenshot of the Console.


Set NPIV ports to target mode


With an N_Port ID Virtualization (NPIV) HBA, each port can be both a target and an initiator (dual mode). When using an NPIV HBA, there are two WWPNs: the base port and the alias.
Notes:

- You should not use the NPIV driver if you intend to directly connect a target port to a client host.
- With dual mode, clients need to be zoned to the alias port (called the Target WWPN). If they are zoned to the base port, clients will not see any devices. You will only see the alias port when that port is in target mode.
- NPIV allows multiple N_Port IDs to share a single physical N_Port. This allows an initiator, target, and standby to occupy the same physical port. This type of configuration is not supported without NPIV.
- As a failover setup best practice, it is recommended that you do not put more than one standby WWPN on a single physical port.

Each NPIV port can be both a target and an initiator. To use target mode, you must enable target mode on the port, and the port needs to be in NPIV mode. This was set automatically when you loaded the driver (./ipstor configtgt, select qlogicnpiv). To set target mode:
1. In the Console, expand Physical Resources.
2. Right-click on an NPIV HBA and select Enable Target Mode.
3. Click OK to enable. You will see two WWPNs listed for the port.


Set up your failover configuration


If you will be using the FalconStor Failover option and you have followed all of the steps in this Fibre Channel target mode section, you are now ready to launch the Failover Setup Wizard and begin configuration. Refer to The Failover Option for more details.

HBAs and failover
Asymmetric failover modes are supported with QLogic HBAs.

Failover with multiple switches
When setting up Fibre Channel failover using multiple Fibre Channel switches, we recommend the following:
- If multiple switches are connected via an inter-switch link (ISL), the primary storage server's Target Port and the secondary storage server's standby port can be on different switches.
- If the switches are not connected via ISL, where they can be managed as one fabric, the primary storage server's Target Port and the secondary storage server's Standby Port must be on the same switch.

Failover limitations

When using failover in Fibre Channel environments, it is recommended that you use the same type of Fibre Channel HBAs for all CDP/NSS client hosts. When configuring HA (failover), avoid zoning client initiators to the base WWPN of the FC port(s) on the secondary server that are dedicated to be the standby port(s) in an HA pair.


Add Fibre Channel clients


Client software is only required for Fibre Channel clients running a FalconStor Snapshot Agent or for clients using multiple protocols. If you do not install the Client software, you must manually add the Client in the Console. To do this:
1. In the Console, right-click on SAN Clients and select Add.
2. Select Fibre Channel as the Client protocol.
3. Select WWPN initiators. See Associate World Wide Port Names (WWPN) with clients.
4. Select Volume Set Addressing.
Volume Set Addressing is used primarily for addressing virtual buses, targets, and LUNs. If your storage device uses VSA, you must enable it. Note that Volume Set Addressing is selected by default for HP-UX clients.

5. Enter a name for the SAN Client, select the operating system, and indicate whether or not the client machine is part of a cluster. If the client's machine name is not resolvable, you can enter an IP address and then click Find to discover the machine.
6. Indicate if you want to enable persistent reservation. This option allows clustered SAN Clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.
Note: If you are using AIX SAN Client cluster nodes, this option should be cleared.

7. Confirm all information and click Finish to add this client.


Associate World Wide Port Names (WWPN) with clients


Similar to an IP address, the WWPN uniquely identifies a port in a Fibre Channel environment. Unlike an IP address, the WWPN is vendor assigned and is hardcoded and embedded. Depending upon whether or not you are using a switched Fibre Channel environment, determining the WWPN for each port may be difficult. If you are using a switched Fibre Channel environment, CDP/NSS will query the switch for its Simple Name Server (SNS) database and will display a list of all available WWPNs. You will still have to identify which WWPN is associated with each machine. If you are not using a switched Fibre Channel environment, you can manually determine the WWPN for each of your ports. There are different ways to determine it, depending upon the hardware vendor. You may be able to get the WWPN from the BIOS during bootup or you may have to read it from the physical card. Check with your hardware vendor for their preferred method.

To simplify this process, when you enabled Fibre Channel, an Everyone client was created under SAN Clients. This is a generic client that you can assign to all (or some) of your SAN resources. It allows any WWPN not already associated with a Fibre Channel client to have read/write non-exclusive access to any SAN resources assigned to Everyone. For security purposes, you may want to assign specific WWPNs to specific clients. For the rest, you can use the Everyone client. Do the following for each client for which you want to assign specific virtual devices:
1. Highlight the Fibre Channel Client in the FalconStor Management Console.
2. Right-click on the Client and select Properties.


3. Select the Initiator WWPN(s) belonging to your client. Here are some methods to determine the WWPN of your clients:
   - Most Fibre Channel switches allow administration of the switch through an Ethernet port. These administration applications have utilities that reveal or allow you to change the configuration of each port on the switch, zoning configurations, the WWPNs of connected Fibre Channel cards, and the current status of each connection. You can use this utility to view the WWPN of each Client connected to the switch.
   - When starting up your Client, there is usually a point at which you can access the BIOS of your Fibre Channel card. The WWPN can be found there.
   - The first time a new Client connects to the storage server, the following message appears on the server screen: FSQLtgt: New Client WWPN Found: 21 00 00 e0 8b 43 23 52
4. If necessary, click Add to add WWPNs for the client. A dialog appears if there are no WWPNs in the server's list. This could occur because the client machines were not turned on or because all WWPNs were previously associated with clients.
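On a Linux client host, the initiator WWPNs can also typically be read from sysfs. This is a generic operating system command, not a FalconStor utility, and the output value shown is only an example:

# cat /sys/class/fc_host/host*/port_name
0x2100001b3212abcd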

Assign virtualized resources to Fibre Channel Clients


For security purposes, you can assign specific SAN resources to specific clients. For the rest, you can use the Everyone client. This is a generic client that you can assign to all (or some) of your SAN resources. It allows any WWPN not already associated with a Fibre Channel client to have read/write non-exclusive access to any SAN resources assigned to Everyone. To assign resources, right-click on a specific client or on the Everyone client and select Assign. If a client has multiple ports and you are using Multipath software (such as DynaPath), after you select the virtual device, you will be asked to enter the WWPN mapping. This WWPN mapping is similar to Fibre Channel zoning and allows you to provide multiple paths to the storage server to limit a potential point of network failure. You can select how the client will see the virtual device in the following ways:
One to One - Limits visibility to a single pair of WWPNs. You will need to select the client's Fibre Channel initiator WWPN and the server's Fibre Channel target WWPN.


One to All - You will need to select the client's Fibre Channel initiator WWPN.
All to One - You will need to select the server's Fibre Channel target WWPN.
All to All - Creates multiple data paths. If ports are ever added to the client or server, they will automatically be included in the WWPN mapping.

View new devices


In order to see the new devices, after you have finished configuring your Fibre Channel Clients, you will need to trigger a device rescan or reboot the Client machine, depending upon the requirements of the operating system.

Install and configure DynaPath


During failover, the storage server is temporarily unavailable. Since the failover process can take a minute or so, the Clients need to keep attempting to connect so that when the Server becomes available they can continue normal operations. One way of ensuring that the Clients will retry the connection is to use FalconStor's DynaPath Agent. DynaPath is a load-balancing/path-redundancy application that manages multiple pathways from your Client to the switch that is connected to your storage servers. Should one path fail, DynaPath will tap the other path for all I/O operations. If you are not using the DynaPath agent, you may be able to use other third-party multi-pathing software or you may be able to configure your HBA driver to perform the retries. We recommend that the Clients retry the connection for a minimum of two minutes. If you are using DynaPath, it should be installed on each Fibre Channel Client that will be part of your failover configuration. Refer to your DynaPath User Guide for more details.


Spoof an HBA WWPN


Your FalconStor software contains a unique feature that can spoof initiator port WWPNs. This feature can be used to pre-configure HBAs, making the process of rebuilding a server simpler and less time consuming. It can also be useful when migrating from an existing server to a new one. However, this feature can create a potential problem if not used carefully. If the old HBA is somehow connected back to the same FC fabric, the result will be two HBAs with the same WWPN, which can cause a fabric outage. It is strongly recommended that you take the following measures to minimize the chance of a WWPN conflict:
1. Physically destroy the old HBA if it was replaced due to a defect.
2. Use your HBA vendor's tool to reprogram and swap the WWPNs of the two HBAs.
3. Avoid spoofing. This can be done if you plan extra time for the zoning change.
Notes:

- Each HBA port must be spoofed to a unique WWPN.
- Spoofing and un-spoofing are disabled after failover is configured. You must spoof HBAs and enable target mode before setting up Fibre Channel failover.
- Spoofing can only be performed when QLogic HBAs are in initiator mode. After a QLogic HBA has been spoofed and the HBA driver is restarted, the HBA can then be changed to target mode and have resources assigned through it.
- Since most switch software applications use an "Alias" to represent a WWPN, you only need to change the WWPN of the Alias, and all of the zones are preserved.

To configure HBAs for spoofing:
1. In the FalconStor Management Console, right-click on a specific adapter and select Spoof WWPN.
2. Enter the desired WWPN for the HBA and click OK.
3. Repeat steps 1-2 for each HBA that needs to be spoofed and exit the Console.
4. Reload the HBA driver by typing:
ipstor restart all

5. Log back into your storage server from the console. You will notice the WWPN of the initiator port now has the spoofed WWPN. 6. If desired, switch the spoofed HBA to target mode.


SAN Clients
Storage Area Network (SAN) Clients are the file and application servers that access SAN resources. Since SAN resources appear as locally attached SCSI devices, the applications, such as file services, databases, web and email servers, do not need to be modified to utilize the storage. On the other hand, since the storage is not locally attached, there is some configuration needed to locate and mount the required storage.

Add a client from the FalconStor Management Console


1. In the console, right-click on SAN Clients and select Add.
2. Enter a name for the SAN Client, select the operating system, and indicate whether or not the client machine is part of a cluster. If the client's machine name is not resolvable, you can enter an IP address and then click Find to discover the machine.
3. Determine if you want to limit the amount of space that can be automatically assigned to this client. The quota represents the total allowable space that can be allocated for all of the resources associated with this client. It is only used to restrict certain types of resources (such as Snapshot Resource and CDP Resource) that expand automatically. This prevents them from allocating storage space indefinitely. Instead, they can only expand if the total size of all the resources associated with the client does not exceed the pre-defined quota for that client.
4. Indicate if you want to enable persistent reservation. This option allows clustered SAN Clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.
Note: If you are using AIX SAN Client cluster nodes, this option should be cleared.

5. Select the client's protocol(s). If you select iSCSI, you must indicate if this is a mobile client. You will then be asked to select the initiator that this client uses and add/select users who can authenticate for this client. Refer to Add iSCSI clients for more information. If you select Fibre Channel, you will have to select WWPN initiators. You will then be asked to select Volume Set Addressing. Refer to Add Fibre Channel clients for more information.
6. Confirm all information and click Finish to add this client.


Add a client for FalconStor host applications


If you are using FalconStor client/agent software, such as snapshot agents or HyperTrac, refer to the FalconStor Intelligent Management Agent (IMA) User Guide or the appropriate agent user guide for details regarding adding clients via FalconStor Intelligent Management Agent (IMA). FalconStor client/agent software allows you to add a storage server directly in IMA/SDM or the SAN Client. For example, if you are using HyperTrac, the first time you start HyperTrac, the system scans and imports all storage servers identified by IMA/SDM or the SAN Client. These storage servers are then listed in the HyperTrac console. Alternatively, you can add a storage server directly in IMA/SDM or the SAN Client. Refer to UNIX SAN Client error codes in the Troubleshooting / FAQs section for information about the UNIX SAN Client error codes you may encounter.


Security
CDP/NSS utilizes strict authorization policies to ensure proper access to storage resources on the FalconStor storage network. Since applications and storage resources are now separated, and it is possible to transmit storage traffic over a non-dedicated network, extra measures have been taken to ensure that data is only accessible to those authorized to use it. To accomplish this, CDP/NSS safeguards the areas of potential vulnerability:
- System management - allowing only authorized administrators to modify the configuration of the CDP/NSS storage system.
- Data access - authenticating and authorizing the Clients who access the storage resources.

System management
CDP/NSS protects your system by ensuring that only the proper administrators have access to the system's configuration. This means that the administrator's user name and password are always verified against those defined on the storage server before access to the configuration is granted. While the server verifies each administrator's login, the root user is the only one who can add or delete IPStor administrators. The root user can also change other administrators' passwords and has privileges to the operating system. Therefore, the server's root user is the key to protecting your server, and the root user password should be closely guarded. It should never be revealed to other administrators. As a best practice, IPStor administrator accounts should be limited to trusted administrators who can safely modify the server configuration. Improper modifications of the server configuration can result in lost data if SAN resources are deleted or modified.

Data access
Just as CDP/NSS protects your system configuration by verifying each administrator as they log in, CDP/NSS protects storage resources by ensuring that only the proper computer systems have access to the system's resources. For access by application servers, two things must happen: authentication and authorization.
Authentication is the process of establishing the credentials of a Client and creating a trusted relationship (shared-secret) between the client and server. This prevents other computers from masquerading as the Client and accessing the storage.


Authentication occurs once per Client-to-Server relationship, the first time a server is successfully added to a client. Subsequent access to a server from a client uses the authenticated shared secret to verify the client. Credentials do not need to be re-established unless the software is re-installed. The authentication process uses the authenticated Diffie-Hellman protocol. To eliminate security vulnerabilities, the password is never transmitted over the network, not even in encrypted form.
Authorization is the process of granting storage resources to a Client. This is done through the console by an IPStor administrator or the server's root user. The client will only be able to access those storage resources that have been assigned to it.

Account management
Only the root user can manage users and groups or reset passwords. You will need to add an account for each person who will have administrative rights in CDP/NSS. You will also need to add a user account for clients that will be accessing storage resources from a host-based application (such as FalconStor DiskSafe or FileSafe). To make account management easier, users can be grouped together and handled simultaneously. To manage users and groups, right-click on the server and select Accounts. All existing users and administrators are listed on the Users tab and all existing groups are listed on the Groups tab. The rights of each are summarized in the table below:
Type of Administrator    Create/Delete   Add/Remove Storage   Assigns Rights to   Create/Modify/Delete   Assign Storage
                         Pools           from Pools           IPStor Users        Logical Resources      to Clients
Root                     x               x                    x                   x                      x
IPStor Administrator     x               x                    x                   x                      x
IPStor User                                                                       x                      x

Note: IPStor Users can only modify/delete logical resources that they created.

For additional information regarding user access rights, refer to the Manage accounts section and Manage storage pools and the devices within storage pools.

Security recommendations
In order to maintain a high level of security, a CDP/NSS installation should be configured and used in the following manner:


Storage network topology


For optimal performance, CDP/NSS does not encrypt the actual storage data that is transmitted between the server and clients. Encrypting and decrypting each block of data transferred involves heavy CPU overhead for both the server and clients. Since CDP/NSS transmits data over potentially shared network channels instead of a computer's local bus, the storage data traffic can be exposed to monitoring by other devices on the same network. Therefore, a separate segment should be used for the storage network if a completely secure storage system is required. Only the CDP/NSS clients and storage servers should be on this storage network segment. If the configuration of your storage network does not maintain a totally separate segment for the storage traffic, it is still possible to maintain some level of security by using encryption or secure file systems on the host computers running the CDP/NSS Client. In this case, data written to storage devices is encrypted, and cannot be read unless you have the proper decryption tool. This is entirely transparent to the CDP/NSS storage system; these tools can only be used at the CDP/NSS client as the storage server treats the data as block storage data.

Physical security of machines


Due to the nature of computer security in general, if someone has physical access to a server or client, the security of that machine is compromised. By compromised, we mean that a person could copy a password, decipher CDP/NSS or system credentials, or copy data from that computer. Therefore, we recommend that your servers and clients be maintained in a secure computer room with limited access. This is not necessary for the console, because the console does not leave any shared-secret behind. Therefore, the console can be run from any machine, but that machine should be a "safe", non-compromised machine, specifically one that you are sure does not have a Trojan horse-like program hidden that may be monitoring or recording key strokes. Such a program can collect your password as you type, thereby compromising your systems security. Of course, this is a general computer security concern which is not unique to CDP/NSS. In addition, you should be aware that there is no easy way to detect the presence of such malicious programs, even by using anti-virus software. Unfortunately, many people with programming knowledge are capable of creating these types of malicious programs, which will not have a signature that anti-virus software can identify. Therefore, you should never type in your password, or any password, in an environment you cannot trust 100%.

Disable ports
Disable all unnecessary ports. The only ports required by CDP/NSS are shown in Port Usage:
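As an illustration only, host-based packet filtering on the storage server could be used to block traffic on ports that are not listed in Port Usage. The sketch below assumes iptables is available; the console port value is a placeholder and must be replaced with the actual ports from the Port Usage table, along with any other ports you need (such as SSH):

# CONSOLE_PORT is a placeholder value only; substitute each port listed in Port Usage
CONSOLE_PORT=11576
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport $CONSOLE_PORT -j ACCEPT
iptables -A INPUT -p tcp --dport 3260 -j ACCEPT   # standard iSCSI target port, if iSCSI clients are used
iptables -P INPUT DROP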


Failover
Overview
To support mission-critical computing, CDP/NSS-enabled technology provides high availability for the entire storage network, protecting you from a wide variety of problems, including:
- Connectivity failure
- Storage device path failure
- Storage device failure
- Storage server or device failure

The following illustrates a basic CDP/NSS configuration with potential points of failure and a high availability configuration, where FalconStors high availability options work with redundant hardware to eliminate the points of failure:


The Failover Option

The FalconStor failover option provides high availability for CDP and NSS operations by eliminating the down time that can occur should a storage server (software or hardware) or a storage device fail. There are two modes of failover:
- Shared storage failover - Uses a two-node failover pair to provide node-level redundancy. This model requires a shared storage infrastructure and is typically Fibre Channel based.
- Non-shared storage failover (Cross-mirror failover) - Provides high availability without the need for shared storage. Used with appliances containing internal storage. Mirroring is facilitated over a dedicated, direct IP connection. (Available in a Virtual Appliance environment.)

Best Practice
As a failover setup best practice, it is recommended that you do not put more than one standby WWPN on a single physical port. Both NSS/CDP nodes in a cluster configuration require the same number of physical Fibre Channel target ports to achieve best practice failover configurations.

Primary/Secondary Storage Servers
FalconStor's Primary and Secondary servers are separate, independent storage servers that each have their own assigned clients. The primary storage server is the server that is being monitored by the secondary storage server. In the event the primary fails, the secondary takes over. This is referred to as Active-Passive Failover. The terms Primary and Secondary are purely from the clients' perspective, since these servers may be configured to monitor each other. This is referred to as Mutual Failover. In that case, each server is primary to its own clients and secondary to the other's clients. Each server normally services its own clients. In the event one server fails, the other will take over and serve the failed server's clients.

Failover/Takeover
Failover/takeover is the process that occurs when the secondary server takes over the identity of the primary. In the case of cross-mirroring on a virtual appliance, failover occurs when all disks are swapped to the secondary server. Failover will occur under the following conditions:
- One or more of the storage server processes goes down.
- There is a network connectivity problem, such as a defective NIC or a loose network cable with which this NIC client is associated.
- (Shared storage failover) There is a storage path failure.
- The heartbeat cannot be retrieved.
- There is a power failure.
- One or more Fibre Channel targets are down.

Recovery/Failback
Recovery/Failback is the process that occurs when the secondary server releases the identity of the primary to allow the primary to restore its operation. Once control has returned to the primary server, the secondary server returns to its normal monitoring mode. After recovering from a virtual appliance cross-mirror failure, the secondary server swaps disks back to the primary server after the disks are re-synchronized.


Storage Cluster Interlink

A physical connection between two servers to mirror snapshot and SafeCache metadata between high-availability (HA) pairs. This enables rapid failover and reduces the time required to load snapshot and SafeCache data from the disk. Two Ethernet ports (sci0 and sci1) are reserved for this purpose.
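Assuming the interlink ports are exposed to the operating system as interfaces named sci0 and sci1 (an assumption; the interface names may differ on your appliance), their link state can be checked with standard Linux tools:

# ip link show sci0
# ip link show sci1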

Sync Standby Devices
This menu option is available from the console (Failover --> Sync Standby Devices) and is useful when the Storage Cluster Interlink connection in a failover pair is broken. Select this option to manually synchronize the standby device information on both servers once the Storage Cluster Interlink is reconnected.

Asymmetric mode
(Fibre Channel only) Asymmetric failover requires standby ports on the secondary server in case a target port on your primary server fails.

Swap
For virtual appliances: Swap is the process that occurs with cross-mirroring when data functions are moved from a failed virtual disk on the primary server to the mirrored virtual disk on the secondary server. The disks are swapped back once the problem is resolved.


Shared storage failover sample configuration

This diagram illustrates a shared storage failover configuration. In this example, both servers are monitoring each other. Because both servers are actively serving their own clients, this configuration is referred to as an active-active or mutual failover configuration. When server A fails, server B takes over and serves the clients of server A in addition to its own clients.


Failover requirements
The following are the requirements for setting up a failover configuration:

General failover requirements
- You must have two storage servers.
- The failover pair should be installed with identical Linux operating system versions.
- Version 7.0 and later requires a Storage Cluster Interlink Port for failover setup. This is a physical connection (also used as a hidden heartbeat IP) between two servers. If you wish to disable the Storage Cluster Interlink heartbeat functionality, contact Technical Support. Note: When USEQUORUMHEALTH is disabled and there are no client-associated network interfaces, all network interfaces - including the Storage Cluster Interlink Port - must go down before failover can occur. When the Storage Cluster Interlink heartbeat functionality is disabled, it is no longer treated as a heartbeat IP connection for failover.
- Both servers must reside on the same network segment, because in the event of a failover, the secondary server must be reachable by the clients of the primary server. This network segment must have at least one other device that generates a network ping (such as a router, switch, or server). This allows the secondary server to detect the network in the event of a failure.
- You need to reserve an IP address for each network adapter in your primary failover server. The IP address must be on the same subnet as the secondary server and is used by the secondary server to monitor the primary server's health. In a mutual failover configuration, these IP addresses are used by the servers to monitor each other's health. The health monitoring IP address remains with the server in the event of failure so that the server's health can be continually monitored. Note: The storage server clients and the console cannot use the health monitoring IP address to connect to a server.
- You must use static IP addresses for your failover configuration. It is also recommended that the IP addresses of your servers be defined in a DNS server so they can be resolved (see the illustrative entries following this list).
- If you will be using Fibre Channel target mode or iSCSI target mode, you must enable it on both the primary and secondary servers before creating your failover configuration.
- The first time you set up a failover configuration, the secondary server must not have any replica resources.
- You must have at least one device reserved for a virtual device on each primary server with enough space to hold the configuration repository that will be created. The main repository should be established on a RAID5 or RAID1 file system for ultimate reliability.
- It is strongly recommended that you use some type of power control option for failover servers. If you are using an external hardware power controller for your failover pair, you should set it up before creating your failover configuration. Refer to Power Control options for more information.
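For example, the reserved static addresses of a failover pair might be registered in DNS or, at a minimum, added to /etc/hosts on both servers and on the console host. The host names and addresses below are purely illustrative:

192.168.10.11   nss-primary.example.com     nss-primary
192.168.10.12   nss-secondary.example.com   nss-secondary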

General failover requirements for iSCSI clients

(Windows iSCSI clients) The Microsoft iSCSI initiator has a default retry period of 60 seconds. You must change it to 300 seconds in order to sustain the disk for five minutes during failover so that applications will not be disrupted by temporary network problems. This setting is changed through the registry.

1. Go to Start --> Run and type regedit.
2. Find the following registry key:
   HKEY_LOCAL_MACHINE\system\CurrentControlSet\control\class\4D36E97B-xxxxxxxxx\<iscsi adapter interface>\parameters\
   where <iscsi adapter interface> corresponds to the adapter instance, such as 0000, 0001, and so on.
3. Right-click Parameters and select Export to create a backup of the parameter values.
4. Double-click MaxRequestHoldTime.
5. Pick Decimal and change the Value data to 300.
6. Click OK.
7. Reboot Windows for the change to take effect.

Shared storage failover requirements

- Both servers must have at least one Network Interface Card (NIC) each (on the same subnet). Unlike other clustering software, the heartbeat co-exists on the same NIC as the storage network. The heartbeat does not require, and should NOT be on, a dedicated heartbeat interface and subnet.
- The failover pair must have connections to the same common storage; if storage cannot be seen by both servers, it cannot be accessed from both servers. However, the storage does not have to be represented the same way to both servers. Each server needs at least one path to each commonly shared physical storage device, but there is no maximum and the paths do not need to be equal (for example, server A has two paths while server B has four paths). Make sure to properly configure LUN masking on the storage arrays so both storage server nodes can access the same LUNs.
- Storage devices must be attached in a multi-host SCSI configuration or attached on a Fibre loop or switched fabric. In this configuration, both servers can access the same devices at the same time (both read and write).
- (SCSI only) Termination should be enabled on each adapter, but not on the device, in a shared bus arrangement.
- If you will be using the FalconStor NIC Port Bonding option, you must set it up before creating a failover configuration. You cannot change or remove NIC Port Bonding once failover is set up. If you need to change NIC Port Bonding, you will have to remove failover first.
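One way to sanity-check the shared storage requirement above is to compare the SCSI device list on both servers; both nodes should report the same shared LUNs. For example:

# Run on each storage server and compare the output
cat /proc/scsi/scsi

# Or simply compare the number of direct-access devices seen on each node
grep -c Direct-Access /proc/scsi/scsi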

Cross-mirror failover requirements

Available only for virtual appliances.

- Each server must have identical internal storage.
- Each server must have at least two network ports (one for the required network cable). The network ports must be on the same subnet.
- Only one dedicated cross-mirror IP address is allowed for the mirror. The IP address must be 192.168.n.n.
- Only virtual devices can be mirrored. Service Enabled Devices and system disks cannot be mirrored.
- The number of physical disks on each machine must match and the disks must have matching ACSLs (adapter, channel, SCSI ID, LUN).
- When failover occurs, both servers may have partial storage. To prevent a possible dual mount situation, we strongly recommend that you use a hardware power controller, such as IPMI. Refer to Power Control options for more information.
- Prior to configuration, virtual resources can exist on the primary server as long as the identical ACSL is unassigned or unowned by the secondary server. After configuration, pre-existing virtual resources will not have a mirror. You will need to use the Verify & Repair option to create the mirror.

During failover, the storage server is temporarily unavailable. Since the failover process can take a minute or so, clients need to keep attempting to connect so that when the server becomes available they can continue normal operations. One way of ensuring that clients will retry the connection is to use a FalconStor multi-pathing agent, such as DynaPath. If you are not using DynaPath because there is no corresponding Linux kernel, you may be able to use Linux DM-Multipath or you may be able to configure your HBA driver to perform the retries. It is recommended that clients retry the connection for a minimum of two minutes.

FC-based Asymmetric failover requirements

- Fibre Channel target ports are not required on either server for Asymmetric mode. However, if Fibre Channel is enabled on both servers, the primary server MUST have at least one target port and the secondary server MUST have a standby port. If Fibre Channel is disabled on both servers, neither server needs to have target/standby ports.
- If target ports are configured on a server, you must have at least the same number of initiators (or aliases, depending on the adapter) on the other server.
- Asymmetric failover supports the use of QLogic HBAs.

Pre-flight checklist for failover


Prior to configuring failover, follow the steps below for both the primary and secondary NSS device:
1. Make sure all expected physical LUNs and their paths are detected properly under the Physical Resources node in the FalconStor Management Console. If any physical LUNs or paths are missing, rescan the appropriate adapter to discover all expected devices and paths.
2. Make sure the configuration repository exists. If it does not exist, you can create it using the Enable Configuration Repository option, or the Failover Setup Wizard will prompt you to create it during configuration. Refer to Protect your storage server's configuration for details.
3. Ensure that Service Enabled Devices have been configured for all physical LUNs that are reserved for SED.
4. If any physical LUNs are reserved for SED, but SED devices are not yet configured, change the property of these physical LUNs from "Reserved for Service Enabled Device" to "Unassigned". Without this step, you will have a device configuration mismatch in the Failover configuration wizard and will not be able to proceed.
5. Rescan all existing devices.
6. Make sure unique Storage Cluster Interlink (SCI) IP addresses have been set for sci0 and sci1 on each server. You can verify/modify the IP addresses from the console by right-clicking the server and selecting System Maintenance --> Configure Network. Refer to Network configuration for details.
7. Make sure there is a physical connection between the SCI ports on the HA servers. One network cable should connect the two sci0 ports and another network cable should connect the two sci1 ports.
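To double-check steps 6 and 7, you can ping the partner's SCI addresses from each server once the cables are connected; the addresses shown are examples and should be replaced with the SCI addresses you assigned:

# From the primary server, verify both Storage Cluster Interlink links to the secondary
ping -c 3 10.10.10.2     # partner sci0 address (example)
ping -c 3 10.10.11.2     # partner sci1 address (example)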

Connectivity failure
A connectivity failure can occur due to a NIC, Fibre Channel HBA, cable, or switch/router failure. You can eliminate potential points of failure by providing multiple paths to the storage server with multiple NICs, HBAs, cables, and switches/routers. The client always tries to connect to the server with its original IP address (the one that was originally set in the client when the server was added to the client). You can re-direct traffic to an alternate adapter by specifying alternate IP addresses for the storage server. This can be done in the console (right-click on the server and select Properties --> Server IP Addresses tab).

When you set up multiple IP addresses, the clients will attempt to communicate with the server using an alternate IP address if the original IP address stops responding.
Notes:

- In order for failover to occur when there is a failure, the device driver must promptly report the failure. Make sure you have the latest driver available from the manufacturer.
- In order for the clients to successfully use an alternate IP address, your subnet must be set properly so that the subnet itself can redirect traffic to the proper alternate adapter.
- The client becomes aware of the multiple IP addresses when it initially connects to the server. Therefore, if you add additional IP addresses in the console while the client is running, you must rescan devices (Windows clients) or restart the client (Unix clients) to make the client aware of these IP addresses. In addition, if you recover from a network path failure, you will need to restart the client so that it can use the original IP address.

Default failover behavior


Default failover behavior is described below:

Fibre Channel Target failure: If a Fibre Channel target port links down, the partner server will immediately take over. This is true regardless of the number of target ports on the NSS server. For example, the server can use multiple targets to provide virtual devices to the client. If a target loses connectivity, the client will still have alternate paths to access those devices. However, the default behavior is to fail over. The default behavior can be modified by Technical Support.

Network connection failure and iSCSI clients: By default, CDP/NSS server failover will occur when a network connection goes down and that connection is also associated with the iSCSI target of a client. If multiple subnets are used to connect to the CDP or NSS server, the default behavior can be modified by Technical Support so that failover will not occur until all network connections are down.

Storage device path failure


(Shared storage failover) A storage device path failure can occur due to a cable or switch/router failure. You can eliminate this potential point of failure by providing a multiple path configuration, using multiple Fibre Channel switches, multiple adapters, and/or storage devices with multiple controllers. In a multiple path configuration, all paths to the storage devices are automatically detected. If one path fails, there is an automatic switch to another path.

Note: Fibre Channel switches can demonstrate different behavior in a multiple path configuration. Before using this configuration with CDP or NSS, you must verify that the configuration can work on your server without the CDP or NSS software. To verify:

1. Use the hardware vendor's utility or Linux's cat /proc/scsi/scsi command to see the devices after the driver is loaded.
2. Use the hardware vendor's utility or Linux's hdparm command to access the devices.
3. Unplug the cable from one device and use the utilities listed above to verify that everything is working.
4. Repeat the test by reversing which device is unplugged and verify that everything is still working.
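For example, the operating-system-level checks in steps 1 and 2 might look like the following; the device name /dev/sdb is only an example and should be replaced with the path being tested:

# Step 1: list the SCSI devices detected after the driver is loaded
cat /proc/scsi/scsi

# Step 2: exercise a simple timed read against one of the devices (example device name)
hdparm -t /dev/sdb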

Storage device failure


The FalconStor Mirroring and Cross-mirror failover options provide high availability by minimizing the down time that can occur if a physical disk fails. With mirroring, each time data is written to a designated disk, the same data is also written to another disk. This disk maintains an exact copy of the primary disk. In the event that the primary disk is unable to read/write data when requested to by a SAN Client, the data functions are seamlessly swapped to the mirrored copy disk.

Storage server or device failure


The FalconStor failover option provides high availability by eliminating the down time that can occur should a CDP or NSS appliance (software or hardware) fail. In the failover design, a storage server is configured to monitor another storage server. In the event that the server being monitored fails to fulfill its responsibilities to the clients it is serving, the monitoring server will seamlessly take over its identity so the clients will transparently fail over to the monitoring server.

A unique monitoring system is used to ensure the health of the storage servers. This system includes a self-monitor and an intelligent heartbeat monitor. The self-monitor is part of all CDP/NSS appliances, not just the servers configured for failover, and provides continuous health status of the server. It is part of the process that provides operational status to any interested and authorized parties, including the console and supported network management applications through SNMP. The self-monitor checks all storage server processes and connectivity to the server's storage devices.

In a failover configuration, FalconStor's intelligent heartbeat monitor continuously monitors the primary server through the same network path that the server uses to serve its clients. When the heartbeat is retrieved, the results are evaluated. There are several possibilities:

- All is well and no failover is necessary.
- The self-monitor detects a critical error in the IPStor Server processes that is determined to be fatal, yet the error did not affect the network interface. In this case, the secondary will inform the primary to release its CDP/NSS identity and will take over serving the failed server's clients.
- The self-monitor detects a storage device connectivity failure but cannot determine if the failure is local or applies to the secondary also. In that case, the device error condition will be reported through the heartbeat. The secondary will check to see if it can successfully access the storage. If it can, it attempts to access all devices. If it can successfully access all devices, the secondary initiates a failover. If it cannot successfully access all devices, no failover occurs. If you are using the FalconStor Cross-mirror feature, a swap will occur.

Because the heartbeat uses the same network path that the server uses to serve its clients, if the heartbeat cannot be retrieved and there are iSCSI clients associated with those networks, the secondary server knows that the clients cannot access the server. This is considered a Catastrophic failure because the server or the network connectivity is incapacitated. In this case the secondary will immediately initiate a failover.

Failover restrictions
The following information is important to be aware of when configuring failover:
- JBODs are not recommended for failover. If you use a JBOD as the storage device for a storage server (configured in Fabric Loop), certain downstream failover scenarios, such as SCSI Aliasing, might not function properly. If a Fibre connection on the storage server is broken, the JBOD might hang and not respond to SCSI commands. SCSI Aliasing will attempt to connect using the other Fibre connection; however, since the JBOD is in an unknown state, the storage server cannot reconnect to the JBOD, causing CDP/NSS clients to disconnect from their resources.
- In a pure Fibre Channel environment, network failure will not trigger failover.

Failover setup
You will need to know the IP address(es) of the primary server (and the secondary server if you are configuring a mutual failover scheme). You will also need the health monitoring IP address(es). It is a good idea to gather this information and find available IP addresses before you begin the setup.

1. In the console, right-click on an expanded server and select Failover --> Failover Setup Wizard.
   You will see a screen similar to the following that shows you the status of options on your server. Any options enabled/installed on the primary storage server must also be enabled/installed on the secondary storage server.

2. If you have recently made device changes, rescan the server's physical adapters.

Before a failover configuration can be created, the storage system needs to know the ownership of each physical device for the selected server. Therefore, it is recommended that you allow the wizard to rescan the server's devices. If you have recently used the Rescan option to rescan the selected server's physical adapters, you can skip the server scanning process.

3. Select whether or not you want to use the Cross-mirror feature (available for virtual appliances only).

4. Select the secondary server and determine if the servers will monitor each other.
Shared storage failover

Select if you want both servers to monitor each other.

Cross mirror failover (non-shared storage)

Click Find or manually enter IP address for the secondary server. Both IP addresses must start with 192.168.

5. (Cross-mirror only) Select the disks that will be used for the primary server.

System disks will not be listed. The disks you select will be used as storage by the primary server. The ones that are not selected will be used as storage by the secondary server.

6. (Cross-mirror only) Confirm the disks that will be used for the secondary server.

7. (Cross-mirror only) Confirm the physical device allocation.

8. Follow the wizard to create a configuration repository on this server. The configuration repository maintains a continuously updated version of your storage system configuration. For additional security, after your failover configuration is complete, you can enable mirroring on the configuration repository. It is also recommended that you create a configuration repository even if you have a standalone server. Be sure to use a different physical drive for the mirror.
Note: If you need to recreate the configuration repository for any reason, such as switching to another physical drive, you can use the Reconfigure option. Refer to Recreate the configuration repository for details.

9. Determine if there are any conflicts with the server you have selected.

If physical disks, pre-existing virtual disks, or service enabled disks cannot be seen by both primary and secondary storage servers, you will be alerted. If there are conflicts, a window similar to the following will display:

You will see mismatched devices listed here. For example, if you have a RAID array and one server sees all eight devices and the other server sees only four devices, you will see the devices listed here as mismatched. You must resolve the mismatch before continuing. For example, if the QLogic driver did not load on one server, you will have to load it before going on. Note that you can exclude physical devices from failover consideration, if desired.

10. Determine if you need to rescan this server's physical adapters. If you fixed any mismatched devices in the last step, you will need to rescan before the wizard can continue. If you are re-running the Failover wizard because you made a change to a physical device on one of the servers, you should rescan before continuing. If you had no conflicts and have recently used the Rescan option to rescan the selected server's physical adapters, you can skip the scanning process.
Note: If this is the first time you are setting up a failover configuration, you will get a warning message if there are any Replica resources on the secondary server. You will need to remove them and then restart the failover wizard.

11. If this is a mutual failover configuration, follow the wizard to create a configuration repository on the secondary server.

12. Verify the Storage Cluster Interlink Port IP addresses for failover setup.

The IP address fields are automatically populated with the IP address associated with sci0. If the IP addresses listed are incorrect, you will need to click Cancel to exit the failover setup wizard and modify the IP address. You can verify/modify the IP addresses from the console by right-clicking the server and selecting System Maintenance --> Configure Network. Refer to Network configuration for details.

13. Select at least one subnet that you want to configure from the list. If there are multiple subnets, use the arrows to set the order in which the heartbeat is to be checked.

By re-ordering the subnet list, you can avoid triggering failover due to a failure on eth0. If you are using the Cross-mirror feature, you will not see the 192.168... cross-mirror link that you entered earlier listed here.

14. Indicate if you want to use this network adapter.

This is the window you will see for a non-mutual failover.

Mutual failover configuration.

Select the IP addresses that clients will use to access the storage servers for iSCSI, replication, and console communication.
Notes:

- If you change the Server IP addresses while the console is connected using those IP addresses, the Failover wizard will not be able to successfully create the configuration.
- If you uncheck the Include this Network Adapter for failover box, the wizard will display the next card it finds. You must choose at least one.
- For SAN resources, because failover can occur at any time, you should use only those IP addresses that are configured as part of the failover configuration to connect to the server.

15. Enter the health monitoring IP address you reserved for the selected network adapter.

This is the window you will see for a non-mutual failover.

You have to enter IP addresses for both servers in a mutual failover configuration.

The health monitoring IP address remains with the server in the event of failure so that the server's health can be continually monitored. Therefore, it is recommended that you use static IP addresses. Select health monitoring heartbeat addresses that will be used exclusively by the storage servers to monitor each other's health. These addresses must not be used for any other purpose.

16. If you want to use additional network adapter cards, repeat the steps above.

17. (Asymmetric mode only) For Fibre Channel failover, select the initiator on the secondary server that will function as a standby in case the target port on your primary server fails.

For QLogic HBAs, you will need to select a dedicated standby port for each target port used by clients. You should confirm that the adapter shown is not the initiator on your secondary server that is connected to the storage array, and also that it is not the target adapter on your secondary server. You can only pick a standby port once. The exception to this rule is when you are using NPIV. If you are configuring a mutual failover, you will need to set up the standby adapter for the secondary server as well.

18. Select which Power Control option the primary server is using.

Power Control options force the primary server to release its resources after a failure. Refer to Power Control options for more information.

HP iLO - This option will power down the primary server in addition to forcing the release of the server's resources and IP address. In order to use HP iLO, several packages must be installed on the server and you must have configured the controller's IP address to be accessible from the storage servers. In this dialog, enter the HP iLO port's IP address. Refer to HP iLO for more information.

For Red Hat 5, the following packages are automatically installed on each server (if you are using the EZStart USB key) in order to use HP iLO power control:
perl-IO-Socket-SSL-1.01-1.fc6.noarch.rpm
perl-Net-SSLeay-1.30-4.fc6.x86_64.rpm

RPC100 - This option will power down the primary server in addition to forcing the release of the server's resources and IP address. RPC100 is an external power controller available in both serial and parallel versions. Select the correct port, depending upon which version you are using. Refer to RPC100 for more information.

IPMI - This option will reset the power of the primary server, forcing the release of the server's resources and IP address. In order to use IPMI, you must have created an administrative user via your IPMI configuration tool. The IP address cannot be the virtual IP address that was set for failover. Refer to IPMI for more information.

APC PDU - This option will reset the power of the primary server, forcing the release of the server's resources and IP address. The APC PDU external hardware power controller must be set up before you can use it. In this dialog, enter the IP address of the APC PDU, the community name that was given Write+ access, and the port(s) that the failover partner is physically plugged into on the PDU. Use a space to separate multiple ports. Refer to APC PDU for more information.

For Red Hat 5, you will need to install the following packages on each server in order to use APC PDU:
lm_sensors-2.10.7-9.el5.x86_64.rpm
net-snmp-5.3.2.2-9.el5_5.1.x86_64.rpm
net-snmp-libs-5.3.2.2-9.el5_5.1.i386.rpm
net-snmp-libs-5.3.2.2-9.el5_5.1.x86_64.rpm
net-snmp-utils-5.3.2.2-9.el5_5.1.x86_64.rpm

19. Select which Power Control option the secondary server is using.

20. Confirm all of the information and then click Finish to create the failover configuration.

Once your configuration is complete, each time you connect to either server in the console, you will automatically be connected to the other as well. After configuring cross-mirror failover, you will see all of the virtual machine disks listed in the tree, similar to the following:

These are local physical disks for this server. The V indicates the disk is virtualized for this server and an F indicates a foreign disk. The Q indicates a quorum disk containing the configuration repository. These are remote physical disks for this server.

Notes:

- If the setup fails during the setup configuration stage (for example, the configuration is written to one server but then the second server is unplugged while the configuration is being written to it), use the Remove Failover Configuration option to delete the partially saved configuration. You can then create a new failover configuration.
- Do not change the host name of a server that is part of a failover pair.

After a failover occurs, if a client machine is rebooted while either of the failover servers is powered off, the client must rescan devices once the failover server is powered back on, but before recovery occurs. If this is not done, the client machine will need to be rebooted in order to discover the newly restored paths.

Recreate the configuration repository


To recreate the configuration repository for any reason, such as switching to another physical drive, you can use the Reconfigure option. To do this, follow the steps below:
1. Navigate to Logical Resources --> Configuration Repository.
2. Right-click and select Reconfigure.
3. Follow the instructions in the wizard to select a physical device (10240 MB of space in one contiguous physical disk segment is required).
4. Click Finish to recreate the configuration repository.
5. Repeat these steps on the second node of the failover pair.

Power Control options


At times, a server may become unresponsive but, because of network or internal reasons, it may not release its resources or its IP address, thereby preventing failover from occurring. To allow for a graceful failover, you can use the Power Control options to force the primary server to release its resources after a failure.

Power Control options are used to prevent clusters from competing for access to the same storage. They are triggered when a secondary server fails to communicate with the primary server over both the network and the quorum drive. When this occurs, the secondary server triggers a forceful takeover of the primary server and triggers the selected Power Control option. If, during this forceful takeover, the partner's power control device (i.e. IPMI, HP iLO) cannot be reached, failover will not occur; however, you may issue a manual takeover from the console, if necessary. This default behavior (for version 7.00 and later) also occurs if the failover configuration has been set up with no power control option.

Failure to communicate with the power control device may be caused by one of the following reasons:
- Authentication error (password and/or username is incorrect)
- Network connectivity issue
- Server power cable is unplugged
- Wrong information used for the power control device, such as an incorrect IP address

Power Control is set during failover configuration. To change options, right-click on either failover server and select Failover --> Power Control.

HP iLO

This option powers down the primary server in addition to forcing the release of the server's resources and IP address. HP iLO is available on HP servers with the iLO (Integrated Lights Out) option. In order to use HP iLO, you must have configured the controller's IP address to be accessible from the storage servers. The console will prompt you to enter the HP iLO port's IP address of the server.
Note: The HP iLO power control option depends on the storage server being able to access the HP iLO port through its regular network connection. If the HP iLO port is inaccessible, this option will not function. Each time the power control dialog screen is launched, the username/password fields will be blank. The fields are available for update but the current username and password information is not revealed for security purposes. You can make changes by re-entering your username and password.

RPC100

This option will power down the primary server in addition to forcing the release of the server's resources and IP address. RPC100 is an external power controller available in both serial and parallel versions. The console will prompt you to select the serial or parallel port, depending upon which version of the RPC100 you are using. Note that the RPC100 power controller only controls one power connection. If the storage server has multiple power supplies, a special power cable is needed to connect them all.

SCSI Reserve/Release

(Not available in version 7) This option is not an actual Power Control option, but a storage solution to prevent two storage servers from accessing the same physical storage device simultaneously. Note that this option is only available on those storage devices that support SCSI Reserve & Release. This option will not force a hung storage server to reboot and will not force the hung server to release its IP addresses or bring down its FC targets. The secondary server will simply reserve the primary server's physical resources, thereby preventing the possibility of a double mount. If the primary server is not actually hung and is only temporarily unable to communicate with the secondary server through normal means, the triggering of the SCSI Reserve/Release from the secondary server will trigger a reservation conflict on the primary server. At this point the primary server will release both its IP addresses and FC targets so the secondary can successfully take over. If this occurs, the primary server will need to be rebooted before the reservation conflict can be resolved. The commands ipstor restart and ipstor restart all will NOT resolve the reservation conflict.

IPMI

This option will reset the power of the primary server, forcing the release of the server's resources and IP address. Intelligent Platform Management Interface (IPMI) is a hardware level interface that monitors various hardware functions on a server. If IPMI is provided by your hardware vendor, you must follow the vendor's instructions to configure it and you must create an administrative user via your IPMI configuration tool. The IP address cannot be the virtual IP address that was set for failover. If you are using IPMI, you will see several IPMI options on the server's System Maintenance menu, Monitor, and Filter. Refer to Perform system maintenance for more information.
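The administrative user is created with your vendor's IPMI configuration tool. As an illustration only, if that tool happens to be ipmitool, a typical sequence for creating an administrative user in user slot 3 on LAN channel 1 might look like the following (the slot, channel, and user name are assumptions; follow your vendor's instructions):

# Create an administrative IPMI user (example values)
ipmitool user set name 3 ipstoradmin
ipmitool user set password 3 <password>
ipmitool user priv 3 4 1        # privilege level 4 = ADMINISTRATOR on channel 1
ipmitool user enable 3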

You should check the FalconStor certification matrix for a current list of FalconStor appliances and server hardware that has been certified for use with IPMI.

APC PDU

This option will reset the power of the primary server, forcing the release of the server's resources and IP address. The APC PDU is an external hardware power controller that must be set up before you can use it. To set up the APC PDU power controller:
1. Connect the APC PDU to your network.
2. Via the COM port on the unit, set an IP address that is accessible from the storage servers.
3. Launch the APC PDU user interface from the COM port or the Web.
4. Enable SNMP on the APC PDU. This can be found under Network.
5. Add or edit a Community Name and give it Write+ access. You will use this Community Name as the password for configuration of the power control option. For example, if you want to use the password apc, you have to create a Community Name called apc or change the default Community Name to apc and give it Write+ access.
6. Connect the power plugs of your storage servers to the APC PDU. Be sure to note which outlets are used for each server.
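Before relying on the APC PDU option, you can confirm that SNMP access with the community name works from each storage server. A simple connectivity check with the net-snmp utilities (part of the packages listed earlier) might look like the following; the address and community name are examples:

# Verify SNMP connectivity from the storage server to the APC PDU
snmpwalk -v1 -c apc 10.1.1.200 system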

Check Failover status


You can see the current status of your failover configuration, including all settings, by checking the Failover Information tab for the server.

Failover settings, including which IP addresses are being monitored for failover.

Current status of failover configuration.

The server is highlighted in a specific color indicating the following conditions:
- Red - The server is currently in failover mode and has been taken over by the secondary server.
- Green - The server has taken over the primary server's resources.
- Yellow - The user has suspended failover on this server. The current server will NOT take over the primary server's resources even if it detects an abnormal condition from the primary server.

Failover events are also written to the primary server's Event Log, so you can check there for status and operational information, as well as any errors. You should be aware that when a failover occurs, the console will show the failover partner's Event Log for the server that failed. For troubleshooting issues pertaining to failover, refer to the Failover Troubleshooting section.

Failover Information report


The Failover Information Report can be viewed by double-clicking the server status of the failed server in the General tab of the console.

Failover network failure status report


The Network failure status report can be viewed using the sms command on the failed server when failover has been triggered due to a client-associated NIC link being down.

Recover from failover


When a failed server is restarted, it communicates with the acting primary server and must receive the okay from the acting primary server in order to recover its role as the primary server. If there is a communication problem, such as a network error, and no notification is received, the failed server remains in a 'ready' state but does not recover its role as the primary server. After the communication problem has been resolved, the storage server will then be able to recover normally.

If failover is suspended on the secondary server, or if the failover module is stopped, the primary will not automatically recover until the ipstorsm.sh recovery command is entered. If both failover servers go offline and then only one is brought up, type the ipstorsm.sh recovery command to bring the storage server back online.
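For example, to bring a single server back online after both failover servers went down, you could check its state and then issue the recovery command from a shell on that server:

# Check the failover status; wait until the server reports a ready state
sms

# Then bring the storage server back online
ipstorsm.sh recovery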

Manual recovery
Manual recovery is the process by which the secondary server releases the identity of the primary to allow the primary to restore its operation. Manual recovery can be triggered by selecting the Stop Takeover option from the FalconStor Management Console.

If the primary server is not ready to recover, and you can still communicate with the server, a detailed failover screen displays.

If the primary server is not ready to recover, and you cannot communicate with the server, a warning message displays.

Auto recovery
You can enable auto recovery by changing the Auto Recovery option. With auto recovery, control is returned to the primary server once the primary server has recovered after a failover. Once control has returned to the primary server, the secondary server returns to its normal monitoring mode.

Fix a failed server


If the primary server fails over to the secondary and hardware changes are made to the failed server, the secondary server will not be aware of these changes. When failback occurs, the original configuration parameters will be returned to the primary server. To ensure that both servers become synchronized with the new hardware information, you will need to issue a physical device rescan for the machine whose hardware has changed as soon as the failback occurs.

Recover from a cross-mirror disk failure


For virtual appliances: Whether your cross-mirror disk was brought down for maintenance or because of a failure, you must follow the procedure listed below to properly bring up the cross-mirror appliance. When powering down both servers in an Active-Active cross-mirror configuration for maintenance, the servers must be properly brought up as follows in order to successfully recover from failover.

If the cross-mirror environment is in a healthy state, all resources are in sync, and all storage is local to the server (none have swapped), the procedure is as follows:
1. Stop CDP/NSS on the secondary server and wait for the primary to take over.
2. Power down the secondary server.
3. After the primary has successfully taken over, stop CDP/NSS on the primary server and power it down as well.
Note: This would be considered a graceful way of powering down both servers for maintenance. After maintenance is complete this would be the proper way to bring up the servers and put the servers in a healthy and up state.

4. Power up the primary server.
5. Power up the secondary server.
6. CDP/NSS will automatically start.
7. Verify in /proc/scsi/scsi that both servers can see their remote storage (usually identified by having 50 as the adapter number; for example, the first LUN would be 50:0:0:0). If this is not the case, restart the iSCSI initiator or re-login to the servers' respective targets to see the remote storage.
Restarting the iSCSI initiator: "/etc/init.d/iscsi restart"

Logging into a target:
iscsiadm -m node -p <ipaddress>:3261,0 -T <remote-target-name> -l
Example: "iscsiadm -m node -p 192.168.200.201:3261,0 -T iqn.2000-03.com.falconstor:istor.PMCC2401 -l"
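As a quick check for step 7, you can look for the remote storage entries directly; the adapter number 50 is the example used above and may differ on your appliance:

# Remote (cross-mirrored) LUNs typically appear under adapter number 50
grep scsi50 /proc/scsi/scsi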

8. Once you have verified that both servers can access the remote storage, restart CDP/NSS on both servers. Failure to do so will result in server recovery issues.
9. After CDP/NSS has been restarted, verify that both servers are in a ready state by using the sms -v command. Both servers should now be recovered and in a healthy state.

Re-synchronize Cross mirror


After recovering from a cross mirror failure, the disks will automatically be re-synchronized according to the server properties that have been set up. You can click on the Performance tab to configure the synchronization options. The disks must be manually re-synchronized if a disk is offline for more than 20 minutes. Right-click on the server and select Cross Mirror --> Synchronize to manually re-synchronize the disks.

Remove Cross mirror


You can remove cross mirror failover to enable both servers to act as stand-alone storage servers. To remove cross mirror failover:
1. Restart both servers from the console.
2. Re-login to the servers and manually remove all mirrors from the virtual devices left behind after cross-mirror removal. This can also be done in batch mode by right-clicking SAN resources --> Mirror --> Remove.

Check resources and swap if possible


Swapping takes place when data functions are moved from a failed disk on the primary server to the mirrored disk on the secondary server. Afterwards, the system automatically checks every hour to see if the disks can be swapped back. If the disk has been replaced/repaired and the cross mirror has been synchronized, you can force a swap to occur immediately by selecting Cross Mirror --> Check & Swap. The system verifies that the local mirror disk is usable and that the cross mirror is synchronized. Once verified, the system swaps the disks. You can verify the status after the swap operation by selecting the Layout tab for the SAN resource.

Verify and repair a cross mirror configuration


There may be circumstances in which you need to use the Verify & Repair option. Use the Verify & Repair option for the following situations:
- A physical disk used by the cross mirror has been replaced
- A mirror resource was offline when auto expansion occurred
- You need to create a mirror for virtual resources that existed on the primary server prior to configuration
- You want to view storage exception information that cannot be repaired and requires further assistance

When replacing local or remote storage, if a mirror needs to be swapped first, a swapping request will be sent to the server to trigger the swap. Storage can only be replaced when the damaged segments are part of the mirror, either local or remote. New storage has to be available for this option.
Note: If you have replaced disks, you should perform a rescan on both servers before using the Verify & Repair option.

To use the Verify & Repair option:
1. Log into both cross mirror servers.
2. Right-click on the primary server and select Cross Mirror --> Verify & Repair.
3. Click the button for any issue that needs to be corrected. You will only be able to select a button if that is the scenario where the problem occurred. The other buttons will not be selectable.

Resources

If everything is working correctly, this option will be labeled Resources and will not be selectable. The option will be labeled Incomplete Resources for the following scenarios:
- The mirror resource was offline when auto expansion (i.e. Snapshot resource or CDP journal) occurred but the device is now back online.
- You need to create a mirror for virtual resources that existed on the primary server prior to cross mirror configuration.

1. Right-click on the server and select Cross Mirror --> Verify & Repair.

2. Click the Incomplete Resources button.

3. Select the resource to be repaired.
4. When prompted, confirm that you want to repair this resource.

Remote Storage

If everything is working correctly, this option will be labeled Remote Storage and will not be selectable. The option will be labeled Damaged or Missing Remote Storage when a physical disk being used by cross mirroring on the secondary server has been replaced.
Note: You must suspend failover before replacing the storage.

1. Right-click the primary server and select Cross Mirror --> Verify & Repair.

2. Click the Damaged or Missing Remote Storage button.

3. Select the remote device to be repaired.

Local Storage

If everything is working correctly, this option will be labeled Local Storage and will not be selectable. The option will be labeled Damaged or Missing Local Storage when a physical disk being used by cross mirroring is damaged on the primary server and has been replaced.
Note: You must suspend failover before replacing the storage.

1. Right-click the primary server and select Cross Mirror --> Verify & Repair.

2. Click the Damaged or Missing Local Storage button.

3. Select the local device to be replaced.
4. Confirm that this is the device to replace.

Storage and Complete Resources

If everything is working correctly, this option will be labeled Storage and Complete Resources and will not be selectable. The option will be labeled Resources with Missing segments on both Local and Remote Storage when a virtual device spans multiple physical devices and one physical device is offline on both the primary and secondary server. This situation is very rare and this option is informational only.

1. Right-click on the server and select Cross Mirror --> Verify & Repair.

2. Click the Resources with Missing segments on both Local and Remote Storage button.

You will see a list of failed devices. Because this option is informational only, no action can be taken here.

Modify failover configuration


Make changes to the servers in your failover configuration
The first time you set up your failover configuration, the secondary server cannot have any Replica resources. In order to make any changes to a mutual failover configuration, you must be running the console with write access to both servers. CDP/NSS will automatically log on to the failover pair when you attempt any configuration on the failover set. While it is not required that both servers have the same username and password, the system will try to connect to both servers using the same username and password. If the servers have different usernames/passwords, it will prompt you to enter them before you can continue.

Change physical device

If you make a change to a physical device (such as if you add a network card that will be used for failover), you will need to re-run the Failover wizard. Be sure to scan both servers during the wizard. At that point, the secondary server is permitted to have Replica resources. This makes it easy for you to upgrade your failover configuration.

Change subnet

If you switch IP segments for an existing failover configuration, the following needs to be done:
1. Remove failover from both storage servers.
2. Delete the current failover servers from the FalconStor Management Console.
3. Make network modifications to the storage servers (i.e. change IP segments).
4. Add the storage servers back to the FalconStor Management Console.
5. Configure failover using the new IP segment.

Convert a failover configuration into a mutual failover configuration


Right-click on the server and select Failover --> Setup Mutual Failover to convert your failover configuration into a mutual failover configuration where both servers monitor each other. A configuration repository should be created even if you have a standalone server. The status of the configuration repository is always displayed on the console under the General tab. In the case of a configuration repository failure, the console displays the time of failure along with the last successful update.
Note: If no configuration repository is found on the secondary server, the wizard to set up mutual failover includes the creation of a configuration repository on the secondary server. The configuration repository requires 10 GB of free space.

Exclude physical devices from health checking


You can create a storage exception list that will exclude one or more specific physical devices from being monitored. Devices on this list will not prompt the system to fail over, even if the device stops functioning. This is useful when using less reliable storage (for asynchronous mirroring or local replication), whose temporary loss will not be critical. When removing failover, this list is reset and cleaned up. To exclude devices, right-click on the server and select Failover --> Storage Exception List.

Change your failover intervals


Right-click on the server and select Failover --> View/Update Failover Options to change the intervals (heartbeat, self-checking, and auto recovery) for this configuration.

Note: We recommend keeping the Self-checking Interval and Heartbeat Interval set to the default values. Changing the values can result in a significantly longer failover and recovery process.

The Self-checking Interval determines how often the primary server will check itself. The Heartbeat Interval determines how often the secondary server will check the heartbeat of the primary server. If enabled, Auto Recovery determines how long to wait before returning control to the primary server once the primary server has recovered.

Verify physical devices match


The Check Consistency tool (right-click on the server and select Failover --> Check Consistency) helps verify that both nodes can still see the same LUNs or the same number of LUNs. This is useful when physical storage devices need to be added or removed. After suspending failover and removing/adding storage to both nodes, you would first perform a rescan of the resources on both sides to pick up the changes in configuration. After verifying storage consistency between the two nodes, failover can be resumed without risking a failover trigger.

Start/stop failover or recovery


Force a takeover by a secondary server
On the secondary server, select Failover --> Start Takeover <servername> to initiate a failover to the secondary server. You may want to do this if you are taking your primary server offline, such as when you will be performing maintenance on it. Once failover is complete, a failover message will blink in red at the bottom of the console and you will be disconnected from the primary server.

Manually start a server


If you cannot connect to a server via the virtual IP, you have the option to bring up the server by attempting to log into the server from the FalconStor Management Console. The server must be powered on and have IPStor services running in order to be forced to an up state. You can verify that a server is in a ready state by connecting to the server via SSH using the heartbeat address and running the sms command. When attempting to force a server up from the console, log into the server you are attempting to manually start. Do not attempt to log into the server from the console using the Heartbeat IP address. The Bring up Primary Server window displays if the server is accessible via the heartbeat IP address.

Type YES in the dialog box to bring the server to a ready state and then force the server up via the monitor IP address.
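Before forcing a server up, you can confirm from a shell session that it is powered on and in a ready state; the address below is an example of the server's heartbeat IP address:

# Log in over the heartbeat address and check the failover state
ssh root@10.1.1.51
sms -v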

Manually initiate a recovery to your primary server


Select Failover --> Stop Takeover if your failover configuration was not set up to use the FalconStor Auto Recovery feature and you want to force control to return to your primary server or if you manually forced a takeover and now want to recover to your primary server. Once failback is complete, you will be logged off from the virtual primary server.

Suspend/resume failover
Select Failover --> Suspend Failover to stop a server from monitoring its partner server. In the case of Active-Active failover, you can suspend from either server. However, the server that you suspend from will stop monitoring its partner and will not take over for that partner server in the event of failure. It can still fail over itself. For example, server A and server B are configured for Active-Active failover. If you go to server B and suspend failover, server A will no longer fail over to server B. However, server B can still fail over to server A.

Select Failover --> Resume Failover to restart the monitoring.
Notes: If the cross mirror link goes down, failover will be suspended. Use the Resume Failover option when the cross mirror link comes back up. The disks will automatically be re-synced at the scheduled interval or you can manually synchronize using the cross mirror synchronize option.

If you stop the CDP/NSS processes on the primary server after suspending failover, you must do the following once you restart your storage server:
1. At a Linux command prompt, type sms to see the failover status.
2. When the system is in a ready state, type the following: ipstorsm.sh recovery

Once the connection is repaired, the failover status is not cleared until failover is resumed on both servers.

Remove a failover configuration


Right-click on one of your failover servers and select Failover --> Remove Failover Server to remove the selected server from the failover configuration. In a non-mutual failover configuration, this eliminates the configuration and returns the servers to independent storage servers. If this is a mutual failover configuration and you want to eliminate the failover relationship from both sides, select the Remove Mutual Failover option. If this is a mutual failover configuration, and you do not select the Remove Mutual Failover option, this server (the one you right-clicked on) becomes the secondary server in a non-mutual configuration.

Select if you want to eliminate the failover relationship from both sides.

If everything is checked, this eliminates the failover relationship and removes the health monitoring IP addresses from the servers and restores the Server IP addresses. If you uncheck the IP address(es) for a server, the health monitoring address becomes the Server IP address.
Note: If you are using cross mirror failover, after removal the cross mirror relationship will be gone but the configuration of your iSCSI initiator will remain and the disks will still be presented to both primary and secondary servers.

Power cycle servers in a failover setup


If you need to shut down servers in a failover setup for maintenance purposes, follow the steps below:
1. Suspend failover on each server.
2. Log in to each server using the heartbeat IP via SSH.
3. Stop IPStor services on each server using the "ipstor stop all" command.
4. Power off each server.
5. Perform any required maintenance while the servers are powered off.
6. Once maintenance is complete, power on each server.
7. IPStor services automatically start. If services are not started, manually start services by running the "ipstor start all" command.
8. Verify that both servers are in a ready state using the "sms" command.
9. Log in to each server from the FalconStor Management Console. When prompted to bring each server up via the monitor IP address, type YES to do so.
10. Resume failover on each server.
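The command-line portion of this procedure (steps 2, 3, 7, and 8) might look like the following on each server; the heartbeat address shown is an example:

# Steps 2-3: log in over the heartbeat IP and stop services before powering off
ssh root@10.1.1.51
ipstor stop all

# Steps 7-8: after powering back on, start services if needed and confirm the ready state
ipstor start all
sms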

Mirroring and Failover


(Shared storage failover) If a physical drive contains only a mirrored resource and the physical drive fails, the server will not fail over. If the physical drive contained a primary mirrored resource, the mirror will swap roles so that its mirrored copy becomes the primary disk. If the physical drive contained a mirrored copy, nothing will happen to the mirror because there is no need for the mirror to swap. If there are other virtual devices on the same physical drive and the other virtual devices are not mirrored resources, the server will fail over. Swapping will only occur if all of the virtual devices on the physical drive contain mirrored resources.

TimeMark/CDP and Failover


Clients may not be able to access TimeViews during failover.

Throttle and Failover


Setting up throttle on a failover pair requires the following additional considerations:
- The failover pair must have matching target site names. (This does not apply to the target server name.)
- The failover pair can have different throttle settings, even if they are replicating to the same server.
- During failover, the throttle values of the two partners combine and are used on the "up" server to maintain throttle settings. In other words, from the software perspective, each server is still maintaining its throttle. From the hardware perspective, the "up" server runs at the combined throttle level of itself and its partner.
- The failover pair's throttle levels may combine to equal over 100%. Example: 80% + 80% = 160%. Note: This percentage is relative to the link type. This value is the maximum speed allowed, not the instantaneous speed.
- If one of the throttle levels is set to no limit, then in the failover state, both servers' throttle levels become no limit.
- It is highly recommended that you avoid the use of different link types. Using different link types may cause unexpected results in network traffic while in a failover state.

HotZone and Failover


Using HotZone with failover improves failover performance as disk read operations are faster and more efficient. Failover with HotZone on local storage further improves performance since it is mapped locally. Local Storage prepared disks can only be used for HotZone using Read Cache. They cannot be used to create a virtual device, mirror, snapshot resource, SafeCache, CDP journal, or replica, or to join a storage pool.

For failover with HotZone created on local storage, failover must be set up first. The local storage cannot be created on a standalone server. For additional information regarding HotZone, refer to HotZone.

Enable HotZone using local storage with failover


Local Storage must be prepared from an individual physical disk, instead of using the Physical Devices Preparation Wizard, to ensure proper mapping of physical disks to the partner server.

1. Right-click on the physical device and select Properties. The Disk Preparation screen displays.
2. Select Reserved for Local Storage from the drop-down menu. Local Storage is only available when devices are detected on both servers. The devices do not need to be the same size as long as the preparation is initiated from the smaller device. For example, if server A has a 1 GB disk and server B has a 2 GB disk, Local Storage can only be prepared/initiated from server A.
3. Right-click on SAN Resources and select HotZone --> Enable. The Enable HotZone Resources for SAN Resources wizard launches.


4. On the Storage Option screen, select the Allocate from Local Storage option to allocate space from the high performance disks.

Note: If you need to remove the failover setup, it is recommended that you unassign the physical disks so they can be reused as virtual devices or SED devices after failover has been removed.


Performance
FalconStor offers several options that can dramatically increase the performance of your SAN:
SafeCache - Allows the storage server to use high-speed storage devices as a staging area for write operations, thereby improving overall performance.
HotZone - Offers two methods to improve performance: Read Cache and Prefetch.

SafeCache
The FalconStor SafeCache option improves the overall performance of CDP/NSS-managed disks (virtual and/or service-enabled) by making use of high-speed storage devices, such as RAM disk, NVRAM, or solid-state disk (SSD), as a persistent (non-volatile) read/write cache. In a centralized storage environment where a large set of database servers share a smaller set of storage devices, data tends to be randomly accessed. Even with a RAID controller that uses cache memory to increase performance and availability, hard disk storage often cannot keep up with application servers' I/O requests. SafeCache, working in conjunction with high-speed devices (RAM disk, NVRAM, or SSDs) to front slower physical disks, can significantly improve performance. Because these high-speed devices do not suffer a random-access penalty, SafeCache can write data blocks sequentially to the cache and then move (flush) them to the data disk (random write) as a separate process once the writes have been acknowledged, effectively accelerating the performance of the slower disks. The SafeCache default throttle speed is 10,240 KB/s, which can be adjusted depending on your client I/O pattern. Regardless of the type of high-speed storage device being used as persistent cache (RAM disk, NVRAM, or SSD), the persistent cache can be mirrored for added protection using the FalconStor Mirroring option. In addition, SSDs and NVRAM have a built-in power supply to minimize potential downtime.


SafeCache is fully compatible with the NSS failover option, which allows one server to automatically fail over to another without any data loss and without any cache write-coherency problems. It is highly recommended that you use a solid-state disk as the SafeCache device.

Configure SafeCache
To set up SafeCache for a SAN Resource you must create a cache resource. You can create a cache resource for a single SAN resource or you can use the batch feature to create cache resources for multiple SAN resources. To enable SafeCache:
1. Navigate to Logical Resources --> SAN Resources and right-click on a SAN resource.
2. Select SafeCache --> Enable. The Create Cache Resource wizard displays to guide you through creating the cache resource and allocating space for the storage.
Note: If Cache is enabled, up to 256 unflushed TimeMarks are supported. Once the Cache has 256 unflushed TimeMarks, new TimeMarks cannot be created.

Create a cache resource


1. For a single SAN resource, right-click on a SAN Resource and select SafeCache --> Enable.


For multiple SAN resources, right-click on the SAN Resources object and select SafeCache --> Enable.
2. Select how you want to create the cache resource.

Note that the cache resource cannot be expanded. Therefore, you should allocate enough space for your cache resource, taking into account future growth. If you outgrow your cache resource, you will need to disable it and then recreate it.
Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express automatically creates the cache resource using the criteria you select:
Select different drive - CDP/NSS will look for space on another hard disk.
Select drives from different adapter/channel - CDP/NSS will look for space on another hard disk only if it is on a separate adapter/channel.
Select any available drive - CDP/NSS will look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that appears as a single physical device.


If you select Custom, you will see the following windows:


Select either an entirely unallocated or partially unallocated disk. Only one disk can be selected at a time from this dialog. To create a cache resource from multiple physical disks, you will need to add the disks one at a time. After selecting the parameters for the first disk, you will have the option to add more disks.

Indicate how much space to allocate from this disk.

Click Add More if you need to add more space to this cache resource. If you select to add more disks, you will go back to the physical device selection screen where you can select another disk.


3. Configure when and how the cache should be flushed.

These parameters can be used to further enhance performance.
Flush cache when data reaches n% of threshold - Specify what percentage of the cache resource can be used before the cache is flushed. The default value is 50%.
Flush cache after n milliseconds of inactivity - Specify how many milliseconds of inactivity should pass before the cache is flushed even if the threshold above is not met. The default value is 0 milliseconds.
Flush cache up to the speed of - Specify the flush speed (the number of KB/s to flush at a time). The default value is 256,000 KB/s.
Skip Duplicate Write Commands - This option prevents the system from writing more than once to the same block during the cache flush. When the cache flushes data to the underlying virtual device, if there is more than one write to the same block, all except the most recent write are skipped. Leave this option unchecked if you are using asynchronous mirroring through a WAN or an unreliable network.
4. Confirm that all information is correct and then click Finish to create the cache resource. You can now mirror your cache resource by highlighting the SAN resource and selecting SafeCache --> Mirror --> Add.
Note: If you take a snapshot manually (via the Console or the command line) of a SafeCache-enabled resource, the snapshot will not be created until the cache has been flushed. If failover should occur before the cache is empty, the snapshot will be inserted into the cache. The snapshot will be created after the snapshot marker has flushed.
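To make the flush policy above concrete, the following is a minimal sketch of the decision it describes: flush when usage crosses the threshold or when the cache has been idle long enough, with the amount flushed per pass capped by the configured speed. The function names are hypothetical and the defaults mirror the documented values; this is an illustration, not the product's implementation.

# Illustrative sketch of the SafeCache flush policy described above (not product code).
# Defaults mirror the documented values: 50% threshold, 0 ms inactivity, 256,000 KB/s cap.
def should_flush(usage_pct, idle_ms, threshold_pct=50, inactivity_ms=0):
    """Flush when usage reaches the threshold, or when the cache has been idle
    for at least the configured number of milliseconds."""
    return usage_pct >= threshold_pct or idle_ms >= inactivity_ms

def flush_amount_kb(pending_kb, elapsed_s, max_speed_kb_s=256_000):
    """Cap how much data is flushed in one pass by the configured flush speed."""
    return min(pending_kb, int(max_speed_kb_s * elapsed_s))

# With the documented default of 0 ms, any idle period qualifies; a larger value
# delays idle-triggered flushes:
print(should_flush(usage_pct=30, idle_ms=200, inactivity_ms=500))  # False
print(should_flush(usage_pct=62, idle_ms=200, inactivity_ms=500))  # True (threshold reached)
print(flush_amount_kb(pending_kb=1_000_000, elapsed_s=1))          # 256000 KB this pass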

Global Cache
Global SafeCache can be viewed from the FalconStor Management Console by selecting the Global SafeCache node under Logical Resources. You can choose to create a global or private cache resource. A global cache allows you to share the cache with up to 128 resources. To create a global cache, select Use Global Cache Resource in the Create Cache Resource Wizard.

Notes:

Global Cache can be enabled in batch mode for multiple resources by navigating to Logical Resources --> Global SafeCache and selecting Enable. Otherwise, Global Cache must be enabled for each device one at a time.
Each server can only have one Global Cache.
If the Global Cache is suspended, resumed, or its properties are changed on one virtual device, the change also affects the rest of the members.
Disabling the Global Cache on a device only removes the Global Cache from that specific device.
Importing the Global Cache from one server to another server is not supported.


SafeCache for groups


If you want to preserve the write order across SAN resources, you should create a group and enable SafeCache for the group. This is useful for large databases that span over multiple devices. In such situations, the entire group of devices is acting as one huge device that contains the database. When changes are made to the database, it may involve different places on different devices, and the write order needs to be preserved over the group of devices in order to preserve database integrity. Refer to Groups for more information about creating a group.

Check the status of your SafeCache resource


You can see the current status of your cache resource by checking the SafeCache tab for a cached resource. Unlike a snapshot resource that continues to grow, the cache resource is cleared out after data blocks are moved to the data disk. Therefore, you can see the Usage Percentage decrease, even return to 0% if there is no write activity. For troubleshooting issues pertaining to SafeCache operations, refer to the SafeCache Troubleshooting section.

Configure SafeCache properties


You can update the parameters that control how and when data will get flushed from the cache resource to the CDP/NSS-managed disk. To update these parameters:
1. Right-click on a SAN resource that has SafeCache enabled and select SafeCache --> Properties.
2. Type a new value for each parameter you want to change. Refer to the SafeCache configuration section for more details about these parameters.

Disable a SafeCache resource


The SafeCache --> Disable option causes the cache to be flushed, and once completely flushed, removes the cache resource. Because there is no dynamic free space expansion when the cache resource is full, you can use this option to disable your current cache resource and then manually create a larger one. If you want to temporarily suspend the SafeCache, use the SafeCache --> Suspend option instead. You will then need to use the SafeCache --> Resume option to begin using the SafeCache again.


HotZone
The FalconStor HotZone option offers two methods to improve performance, Read Cache and Prefetch.

Read Cache
Read Cache is an intelligent, policy-driven, disk-based staging mechanism that automatically remaps "hot" (frequently used) areas of disks to high-speed storage devices, such as RAM disks, NVRAM, or Solid State Disks (SSDs). This results in enhanced read performance for the applications accessing the storage. It also allows you to manage your storage network with a minimal number of high-speed storage devices by leveraging their performance capabilities.

When you configure the Read Cache method, you must divide your virtual or Service-Enabled disk into zones of equal size. The HotZone storage is then automatically created on the specified high-speed disk. This HotZone storage is divided into zones equal in size to the zones on the virtual or service-enabled disk (e.g.,32 MB), and is provisioned to the disk. Reads/writes to each zone are monitored on the virtual or service-enabled disk. Based on the statistics collected, the application determines the most frequently accessed zones and re-maps the data from these hot disk segments to the HotZone storage (located on the high-speed disk) resulting in enhanced read performance for the application accessing the storage. Using the continually collected statistics, if it is determined that the corresponding hot disk segment is no longer hot, the data from the high performance disk is moved back to its original zone on the virtual or service-enabled disk.

Prefetch
Prefetch enables pre-fetching of data for clients. This allows clients to read ahead consecutively, which can result in improved performance because the data is ready from the anticipatory read as soon as the next request is received from the client. This will reduce the latency of the command and improve the sequential read benchmarks in most cases.

Prefetch may not be helpful if the client is already submitting sequential reads with multiple outstanding commands. However, the stop-and-wait case (with one read outstanding) can often be improved dramatically by enabling Prefetch.
Prefetch does not affect writing, or random reading.

Applications that copy large files (e.g., video streaming) and applications that back up files are examples of applications that read sequentially and might benefit from Prefetch.


Configure HotZone
1. Right-click on a SAN resource and select HotZone --> Enable.
For multiple SAN resources, right-click on the SAN Resources object and select HotZone --> Enable.
2. Select the HotZone method to use.

3. (Prefetch only) Set Prefetch properties.


These properties control how the prefetching (read ahead) is done. While you may need to adjust the default settings to enhance performance, FalconStor has determined that the defaults shown here are best suited for most disks/applications.
Maximum prefetch chains - Number of locations on the disk to read from.
Maximum read ahead - The maximum per chain. This can override the Read ahead option.
Read ahead - How much should be read ahead at a time. No matter how this is set, you can never read more than the Maximum read ahead setting allows.
Chain Timeout - Specify how long the system should wait before freeing up a chain.
4. (Read Cache only) Select the storage pool or physical device(s) from which to create this HotZone.
5. (Read Cache only) Select how you want to create the HotZone.

Note that the HotZone cannot be expanded. Therefore, you should allocate enough space for your SAN resource, taking into account future growth. If you outgrow your HotZone, you will need to disable it and then recreate it.
Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express automatically creates the HotZone storage using the criteria you select:
Select different drive - CDP/NSS will look for space on another hard disk.


Select drives from different adapter/channel - CDP/NSS will look for space on another hard disk only if it is on a separate adapter/channel.
Select any available drive - CDP/NSS will look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that appears as a single physical device.

6. (Read Cache only) Select the disk to use for the HotZone storage.

If you selected Custom, you can piece together space from one or more disks.


7. (Read Cache only) Enter configuration information about the zones.

Size of each zone - Indicate how large each zone should be. Reads/writes to each zone on the disk are monitored. Based on the statistics collected, the application determines the most frequently accessed zones and remaps the data from these hot zones to the HotZone storage. Check with your application server to determine how much data is read/written at one time; the block size used by the application should ideally match the size of each zone.
Minimum stay time - Indicate the minimum amount of time data should remain in the HotZone before being moved back to its original zone once it is determined that the zone is no longer hot.


8. (Read Cache only) Enter configuration information about zone access.

Access type - Indicate whether the zone should be monitored for reads, writes, or both.
Access intensity - Indicate how to determine whether a zone is hot, using the number of I/Os performed or the amount of data transferred (read/write) as the determining factor for each zone.

9. Confirm that all information is correct and then click Finish to enable HotZone.
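The Read Cache mechanism configured above is essentially a counting and ranking problem: divide the disk into zones, track accesses, and remap the hottest zones to the high-speed storage. The sketch below illustrates that idea under simple assumptions (a plain access count per zone and a fixed number of HotZone slots); the actual product policy also weighs access type, intensity, and minimum stay time, and this is not its implementation.

# Illustrative sketch of the Read Cache "hot zone" idea described above (not product code).
def zone_count(disk_size_mb, zone_size_mb=32):
    """Number of zones on the disk for a given zone size (e.g. 32 MB)."""
    return (disk_size_mb + zone_size_mb - 1) // zone_size_mb

def pick_hot_zones(access_counts, hotzone_slots):
    """Return the indices of the most frequently accessed zones that would be
    remapped to the high-speed HotZone storage."""
    ranked = sorted(range(len(access_counts)), key=lambda z: access_counts[z], reverse=True)
    return ranked[:hotzone_slots]

counts = [5, 120, 7, 98, 3, 250]   # accesses observed per zone during the interval
print(zone_count(10_240))          # 320 zones on a 10 GB disk with 32 MB zones
print(pick_hot_zones(counts, 2))   # [5, 1]: the two hottest zones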

Check the status of HotZone


You can see the current status of your HotZone by checking the HotZone tab for a configured resource.


Note that if you manually suspend HotZone from the Console when the device configured with the HotZone option is running normally, the Suspended field will display Yes. You can also see statistics about the zone by checking the HotZone Statistics tab:

The information displayed is initially for the current interval (hour, day, week, or month). You can go backward (and then forward) to see any particular interval. You can also view multiple intervals by moving backward to a previous interval and then clicking the Play button to see everything from that point to the present interval. Click the Detail View button to see more detail, where the information is presented more granularly, for smaller amounts of the disk.
If HotZone is being used in conjunction with Fibre Channel or iSCSI failover and a failover has occurred, the HotZone statistics will not be displayed while in a failover state. This is because the server that took over does not contain the failed server's HotZone statistics. As a result, the Console will display empty statistics for the primary server while the secondary has taken over. Once the failed server is restored, the statistics will display properly. This does not affect the functionality of the HotZone option while in a failover state.


Configure HotZone properties


You can configure HotZone properties by right-clicking on the storage server and selecting HotZone. If HotZone has already been enabled, you can select the properties option to configure the Zone and Access policies if the HotZone was set up using the Read Cache method. Alternatively, you will be able to set the Prefetch Properties if your HotZone has been set up using the Prefetch method. For additional information on these parameters, see Configure HotZone.

Disable HotZone
The HotZone --> Disable option permanently stops HotZone for the specific SAN resource. Because there is no dynamic free space expansion when the HotZone is full, you can use this option to disable your current HotZone and then manually create a larger one. If you want to temporarily suspend HotZone, use the HotZone --> Suspend option instead. You will then need to use the HotZone --> Resume option to begin using HotZone again.


Mirroring
Mirroring provides high availability by minimizing the down time that can occur if a physical disk fails. The mirror can be defined with disks that are not necessarily identical to each other in terms of vendor, type, or even interface (SCSI, FC, iSCSI). With mirroring, the primary disk is the disk that is used to read/write data for a SAN Client and the mirrored copy is a copy of the primary. Both disks are attached to a single storage server and are considered a mirrored pair. If the primary disk fails, the disks swap roles so that the mirrored copy becomes the primary disk. There are two Mirroring options, Synchronous Mirroring and Asynchronous Mirroring.

Synchronous mirroring
FalconStor's Synchronous Mirroring option offers the ability to define a synchronous mirror for any CDP/NSS-managed disk (virtualized or service-enabled). In the Synchronous Mirroring design, each time data is written to a designated disk, the same data is simultaneously written to another disk. This disk maintains an exact copy of the primary disk. In the event that the primary disk is unable to read/write data when requested to by a SAN Client, CDP/NSS seamlessly swaps data functions to the mirrored copy disk.


Asynchronous mirroring
FalconStor's Asynchronous Mirroring option offers the ability to define a near real-time mirror for any CDP/NSS-managed disk (virtual or service-enabled) over long distances between data centers. When you configure an asynchronous mirror, you create a dedicated cache resource and associate it with a CDP/NSS-managed disk. Once the mirror is created, the primary and secondary disks are synchronized if the Start initial synchronization when mirror is added option is enabled in global settings. This process does not involve the application server. After the synchronization is complete, all write requests from the associated application server are sequentially delivered to the dedicated cache resource. This data is then committed to both the primary and its mirror as a separate background process. For added protection, the cache resource can also be mirrored.

[Figure: Asynchronous mirroring data flow between the primary site and the remote site. Data blocks are written sequentially to the cache resource (staging area) to provide enhanced write performance, and writes are acknowledged from the cache. For read operations, the cache resource is checked first in case a newly written block has not yet been moved to the data disk. Blocks are then moved to the primary disk and the mirror disk (random write) as a secondary operation, after the writes have been acknowledged from the cache resource.]


Mirror requirements
The following are the requirements for setting up a mirroring configuration:
The mirrored devices must be composed of one or more hard disks.
The mirrored devices must both be accessible from the same storage server.
The mirrored devices must be the same size. If you try to expand the primary disk, CDP/NSS will also expand the mirrored copy to the same size.
A mirror of a Thin Provisioned disk is another Thin Provisioned disk.

Enable mirroring
You can enable mirroring for a single SAN resource or you can use the batch feature to enable mirroring for multiple SAN resources. You can also enable mirroring for an existing snapshot resource, cache resource, or incoming replica resource.
Note: For asynchronous mirroring, if you want to preserve the write order of data that is being mirrored asynchronously, you should create a group for your SAN resources and enable SafeCache for the group. This is useful for large databases that span over multiple devices. In such situations, the entire group of devices is acting as one huge device that contains the database. When changes are made to the database, it may involve different places on different devices, and the write order needs to be preserved over the group of devices in order to preserve database integrity. Refer to Groups for more information about creating a group.

1. For a single SAN resource, right-click on the resource and select Mirror --> Add.
For multiple SAN resources, right-click on the SAN Resources object and select Mirror --> Add.
For an existing snapshot resource or cache resource, right-click on the SAN resource and select Snapshot Resource or Cache Resource --> Mirror --> Add.


2. (SAN resources only) Select the type of mirrored copy you are creating.

3. Select the storage pool or physical device(s) from which to create the mirror.


4. Select how you want to create this mirror.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express automatically creates the Mirrored Copy using the criteria you select:
Select different drive - Look for space on another hard disk.
Select drives from different adapter/channel - Look for space on another hard disk only if it is on a separate adapter/channel.
Select any available drive - Look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that looks like a single physical device.


If you select Custom, you will see the following windows:


Select either an entirely unallocated or partially unallocated disk. Only one disk can be selected at a time from this dialog. To create a mirrored disk from multiple physical disks, you will need to add the disks one at a time. After selecting the parameters for the first disk, you will have the option to add more disks.

Indicate how much space to allocate from this disk.

Click Add More if you need to add more space to this mirrored disk. If you select to add more disks, you will go back to the physical device selection screen where you can select another disk.


5. (SAN resources only) Indicate if you want to use synchronous or asynchronous mirroring.

If a cache resource already exists, mirroring will automatically be set to asynchronous mode. If no cache resource exists, you can use either synchronous or asynchronous mode. However, if you select asynchronous mode, you will need to create a cache resource. The wizard will guide you through creating it. If you select synchronous mode for a resource without a cache and later create a cache, the mirror will switch to asynchronous mode.
Note: If you are enabling asynchronous mirroring for multiple resources that are being used by the same application (for example, your Oracle database spans three disks), to ensure write order consistency you must first create a group. You must enable SafeCache for this group and add all of the related resources to it before enabling asynchronous mirroring for each resource. By doing this, all of the resources will share the same read/write cache and will be flushed at the same time, thereby guaranteeing the consistency of the data.


6. Determine if you want to monitor the mirroring process.

If you select to monitor the mirroring process, the I/O performance is evaluated to decide if I/O to the mirror disk is lagging beyond an acceptable limit. If it is, mirroring will be suspended so it does not impact the primary storage.
Note: Mirror monitoring settings are retained when a mirror is enabled on the same device.

Monitor mirroring process every n seconds - Specify how frequently the system should check the lag time (the delay between I/O to the primary disk and the mirror). Checking more or less frequently will not impact system performance. On systems with very low I/O, a higher number may help get a more accurate representation.
Maximum lag time for mirror I/O - Specify an acceptable lag time.
Suspend mirroring when the failure threshold reaches n% - Specify what percentage of I/O may exceed the maximum lag time before mirroring is suspended. For example, you set the percentage to 10% and the maximum lag time to 15 milliseconds. During the test period, 100 I/Os occurred and 20 of them took longer than 15 milliseconds to update the mirror disk. With a 20% failure rate, mirroring would be suspended (a small sketch of this rule follows step 8 below).
Note: If a mirror becomes out of sync because of a disk failure or an I/O error (rather than having too much lag time), the mirror will not be suspended. Because the mirror is still active, re-synchronization will be attempted based on the global mirroring properties that are set for the server. Refer to Set global mirroring options for more information.


7. If mirroring is suspended, specify when re-synchronization should be attempted.

Re-synchronization can be started based on time (every n minutes/hours; the default is every five minutes) and/or I/O activity (when I/O is less than n KB/MB). If you select both, the time interval will be applied first, before the I/O activity level. If you do not select either, the mirror will stay suspended until you manually synchronize it. If you select one or both re-synchronization methods, you must also specify how many times the system should retry the re-synchronization if it fails to complete. When the system initiates re-synchronization, it does not check lag time, and mirroring will not be suspended if there is too much lag time. If you manually resume mirroring, the system will monitor the process during synchronization and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit. (The sketch after step 8 also illustrates this trigger logic.)
Note: If CDP/NSS is restarted or the server experiences a failover while attempting to re-synchronize, the mirror will remain suspended.

8. Confirm that all information is correct and then click Finish to create the mirroring configuration.
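As referenced in steps 6 and 7 above, both the suspend decision and the re-synchronization trigger reduce to simple checks. The sketch below is illustrative only: the function names are hypothetical, the defaults mirror the values discussed above (15 ms maximum lag, 10% failure threshold, five-minute interval), and the interpretation that both re-sync conditions must hold when both are selected is an assumption based on the wording above.

# Illustrative sketch of the mirror monitoring and re-synchronization rules above.
# Not product code; names and the "both conditions must hold" reading are assumptions.

def should_suspend(lag_times_ms, max_lag_ms=15, failure_threshold_pct=10):
    """Suspend mirroring when the share of I/Os exceeding the maximum lag time
    reaches the failure threshold (step 6)."""
    if not lag_times_ms:
        return False
    slow = sum(1 for lag in lag_times_ms if lag > max_lag_ms)
    return 100.0 * slow / len(lag_times_ms) >= failure_threshold_pct

def should_retry_resync(minutes_suspended, io_rate_kb_s,
                        interval_minutes=5, io_limit_kb_s=None):
    """Attempt re-synchronization based on elapsed time and/or I/O activity (step 7).
    None means the corresponding condition is not selected."""
    if interval_minutes is None and io_limit_kb_s is None:
        return False   # neither condition selected: stay suspended until manual sync
    time_ok = interval_minutes is None or minutes_suspended >= interval_minutes
    io_ok = io_limit_kb_s is None or io_rate_kb_s < io_limit_kb_s
    return time_ok and io_ok

# Worked example from step 6: 20 of 100 I/Os slower than 15 ms -> 20% >= 10% -> suspend.
print(should_suspend([5] * 80 + [30] * 20))                        # True
print(should_retry_resync(minutes_suspended=6, io_rate_kb_s=100))  # True (5-minute interval passed)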


Create cache resource


The cache resource wizard will be launched automatically when you configure Asynchronous Mirroring but you do not have a cache resource. You can also create a cache resource by right-clicking on a SAN resource and selecting SafeCache --> Enable. For multiple SAN resources, right-click on the SAN Resources object and select SafeCache --> Add.
1. Select how you want to create the cache resource.

Note that the cache resource cannot be expanded. Therefore, you should allocate enough space for your SAN resource, taking into account future growth. If you outgrow your cache resource, you will need to disable it and then recreate it.
Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express automatically creates the cache resource using the criteria you select:
Select different drive - Look for space on another hard disk.
Select drives from different adapter/channel - Look for space on another hard disk only if it is on a separate adapter/channel.
Select any available drive - Look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that looks like a single physical device.

2. Confirm that all information is correct and then click Finish to create the cache resource. You can now mirror your cache resource by highlighting the SAN resource and selecting SafeCache --> Mirror --> Add.


Check mirroring status


You can see the current status of your mirroring configuration by checking the General tab for a mirrored resource.
Synchronized - Both disks are synchronized. This is the normal state.
Not synchronized - A failure in one of the disks has occurred or synchronization has not yet started. If there is a failure in the Primary Disk, the Primary Disk is swapped with the Mirrored Copy. If synchronization is occurring, you will see a progress bar along with the percentage completed.

Note: In order to update the mirror synchronization status, refresh the Console screen (View --> Refresh).

Swap the primary disk with the mirrored copy


Right-click on the SAN resource and select Mirror --> Swap to reverse the roles of the primary disk and the mirrored copy. You will need to do this if you are going to perform maintenance on the primary disk or if you need to remove the primary disk.

Promote the mirrored copy to become an independent virtual drive


Right-click on the mirrored drive and select Mirror --> Promote to break the mirrored pair and convert the mirrored copy into an independent virtual drive. The new virtual drive will have all of the properties of a regular virtual drive.


This feature is useful as a safety net when you perform major system maintenance or upgrades. Simply promote the mirrored copy and you can perform maintenance on the primary disk without worrying about anything going wrong. If there is a problem, you can use the newly promoted virtual drive to serve your clients.
Notes:

Before promoting a mirrored drive, all clients should first detach or unmount from the drive. Promoting a drive while clients are attached or mounted may cause the file system on the promoted drive to become corrupt.
If you are copying files over in Windows to a SAN resource that has a mirror, you need to wait for the cache to flush before promoting the mirrored drive on the SAN resource. If you do not wait for the cache to flush, you may see errors in the files.
If you are using asynchronous mirroring, you can promote the mirror only when the SafeCache option is suspended and there is no data in the cache resource that needs to be flushed.
When you promote the mirror of a replica resource, the replication configuration is maintained. Depending upon the replication schedule, when you promote the mirror of a replica resource, the mirrored copy may not be an identical image of the replication source. In addition, the mirrored copy may contain corrupt data or an incomplete image if the last replication was not successful or if replication is currently occurring. Therefore, it is best to make sure that the last replication was successful and that replication is not occurring when you promote the mirrored copy.


Recover from a mirroring hardware failure


Replace a failed disk
If one of the mirrored disks has failed and needs to be replaced:
1. Right-click on the SAN resource and select Mirror --> Remove to remove the mirroring configuration.
2. Physically replace the failed disk. The failed disk is always the mirrored copy because if the Primary Disk fails, the primary disk is swapped with the mirrored copy.
Important: To replace the disk without having to reboot your storage server, refer to Expand the primary disk.
3. Run the Create SAN Resource Mirror Wizard to create a new mirroring configuration.
Fix a minor disk failure
If one of the mirrored disks has a minor failure, such as a power loss:
1. Fix the problem (turn the power back on, plug the drive in, etc.).
2. Right-click on the SAN resource and select Mirror --> Synchronize. This re-synchronizes the disks and restarts the mirroring.

Replace a disk that is part of an active mirror configuration


If you need to replace a disk that is part of an active mirror configuration:
1. If you need to replace the Primary Disk, right-click on the SAN resource and select Mirror --> Swap to reverse the roles of the disks and make it a Mirrored Copy.
2. Select Mirror --> Remove to cancel mirroring.
3. Replace the disk.
Important: To replace the disk without having to reboot your storage server, refer to Expand the primary disk.
4. Run the Create SAN Resource Mirror Wizard to create a new mirroring configuration.


Expand the primary disk


The mirrored devices must be the same size. If you want to enlarge the primary disk, you will need to enlarge the mirrored copy to the same size. When you use the Expand SAN Resource Wizard, it will automatically launch the Create SAN Resource Mirror Wizard so that you can enlarge the Mirrored Copy as well.
Notes:

As you expand the primary disk, the wizard only shows half the available disk space as available because it reserves an equal amount of space for the mirrored drive. On a Thin Provisioned disk, if the mirror is offline, it will be removed when storage is being added automatically. If this occurs, you must recreate the mirror.

Manually synchronize a mirror


The Synchronize option re-synchronizes a mirror and restarts the mirroring process once it is synchronized. This is useful if one of the mirrored disks has a minor failure, such as a power loss.
1. Fix the problem (turn the power back on, plug the drive in, etc.).
2. Right-click on the resource and select Mirror --> Synchronize.
During the synchronization, the system will monitor the process and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.
Note: If your mirror disk is offline, storage cannot be added to the thin disk manually.


Set mirror throttle


The default throttle speed is 10,240 KB/s, which can be adjusted depending on your client IO pattern. To set the mirror throughput speed/throttle for mirror synchronization, select Mirror --> Throttle.

Select the Enable Mirror Throttle checkbox and enter the throughput speed for mirror synchronization. This option is disabled by default. If this option is disabled for an individual device, the global settings will be followed. Refer to Set global mirroring options. The synchronization speed can go up to the specified value, but the actual throughput depends upon the storage environment.
Note: The mirror throttle settings are retained when the mirror is enabled on the same device.

The throughput speed can also be set for multiple devices (in batch mode) by right-clicking on Logical Resources in the console and selecting Set Mirror Throttle.
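The interplay between the per-device throttle and the global setting can be summarized in a few lines. The sketch below is illustrative only; the function name and the clamping behavior are assumptions, while the range and defaults follow the values documented in this chapter (128-1048576 KB/s, 0 to disable, 10,240 KB/s global default).

# Illustrative sketch of how a per-device mirror throttle falls back to the global
# setting, using the documented range (128-1048576 KB/s, 0 = disabled/unlimited).
def effective_mirror_throttle(device_enabled, device_kb_s, global_kb_s=10_240):
    value = device_kb_s if device_enabled else global_kb_s
    if value == 0:
        return None                         # throttle disabled: throughput is unlimited
    return max(128, min(value, 1_048_576))  # keep the value within the documented range

print(effective_mirror_throttle(False, 0))       # 10240 (global default applies)
print(effective_mirror_throttle(True, 50_000))   # 50000
print(effective_mirror_throttle(True, 0))        # None (unlimited)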


Set alternative read mirror


To set the alternative read mirror for mirror synchronization, select Mirror --> Alternative read mirror. Enable this option to have the I/O alternatively read from both the primary resource and the mirror. The alternative read mirror can also be set in batch mode by right-clicking on Logical Resources in the console and selecting Set Alternative Read Mirror.

Set mirror resynchronization priority


To set the resynchronization priority for pending mirror synchronization, select Mirror --> Priority. The Mirror resynchronization priority screen displays, allowing you to prioritize the order that device/group will begin mirroring if scheduled to start at the same time. This option can be set for a single resource or a single group via the Mirror submenu.


The resynchronization priority can also be set in batch mode by right-clicking on Logical Resources in the console and selecting Set Mirror Priority.


Rebuild a mirror
The Rebuild option rebuilds a mirror from beginning to end and starts the mirroring process once it is synchronized. The rebuild feature is useful if the mirror disk you want to synchronize is from a different storage server. A rebuild might be necessary if your disaster recovery site has been servicing clients due to some type of issue, such as a storm or power outage, at your primary data center. Once the problem is resolved, the mirror is out of sync. Because the mirror disk is located on a different storage server in a remote location, the local storage server must rebuild the mirror from beginning to end. Before you rebuild a mirror, you must stop all client activity. After rebuilding the mirror, swap the mirror so that the primary data center can service clients again.
To rebuild the mirror, right-click on a resource and select Mirror --> Rebuild. You can see the current settings by checking the Mirror Synchronization Status field on the General tab of the resource.

Suspend/resume mirroring
You can suspend mirroring for an individual resource or for multiple resources. When you manually suspend a mirror, the system will not attempt to re-synchronize, even if you have a re-synchronization policy. You will have to resume the mirror in order to synchronize. When mirroring is resumed, if the mirror is not synchronized, a synchronization will be triggered immediately. During the synchronization, the system will monitor the process and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.
To suspend/resume mirroring for an individual resource:
1. Right-click on a resource and select Mirror --> Suspend (or Resume). You can see the current settings by checking the Mirror Synchronization Status field on the General tab of the resource.
To suspend/resume mirroring for multiple resources:
1. Right-click on the SAN Resources object and select Mirror --> Suspend (or Resume).
2. Select the appropriate resources.
3. If the resource is in a group, select the checkbox to include all of the group members enabled with mirroring.


Change mirroring configuration options


Set global mirroring options
You can set global mirroring options that affect system performance during mirroring. While the default settings should be optimal for most configurations, you can adjust the settings for special situations. To set global mirroring properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Throttle [n] KB/s (Range 128 - 1048576, 0 to disable) - The throttle parameter allows you to set the maximum allowable mirror synchronization speed, thereby minimizing potential impact to performance for your devices. This option is set at 10 MB per second by default. If disabled, throughput is unlimited. Note: Actual throughput depends upon your storage environment.
Select the Start initial synchronization when mirror is added check box to have the mirror synchronize when it is added. By default, the mirror will not automatically synchronize when added. If this option is not selected, the mirror will not sync until the next synchronization interval or until a manual synchronization operation is performed. This option is not applicable for Near-line recovery and thin disk relocation.
Synchronize Out-of-Sync Mirrors - Indicate how often the system should check and attempt to re-synchronize active out-of-sync mirrors. The default is every five minutes and up to two mirrors at each interval. These settings are also used for the initial synchronization during creation or loading of the mirror. Manual synchronizations can be performed at any time and are not included in the number of mirrors at each interval set here. Enter the retry value to indicate how often synchronization should be retried if it fails to complete. The default is to retry 20 times. These settings will only be used for active mirrors. If a mirror is suspended because the lag time exceeds the acceptable limit, the re-synchronization policy set for that mirror will apply instead.
Indicate whether or not to include replica mirrors in the re-synchronization process by selecting the Include replica mirrors in the automatic synchronization process checkbox. This is unchecked by default.
Change properties for a specific resource
You can change the following mirroring configuration for a resource:
Policy for monitoring the mirroring process
Conditions for re-synchronization

To change the configuration:
1. Right-click on the primary disk and select Mirror --> Properties.
2. Make the appropriate changes and click OK.


Remove a mirror configuration


Right-click on the SAN resource and select Mirror --> Remove to delete the mirrored copy and cancel mirroring. You will not be able to access the mirrored copy afterwards.

Mirroring and failover


If mirroring is in progress during failover/recovery, mirroring will restart from where it left off once the failover/recovery is complete. If the mirror is synchronized but there is a Fibre disconnection between the server and storage, the mirror may become unsynchronized. It will re-synchronize automatically after failover/recovery. A synchronized mirror will always remain synchronized during a recovery process.


Snapshot Resource
TimeMark snapshots allow you to create point-in-time delta snapshot copies of data volumes. The concept of performing a snapshot is similar to taking a picture. When we take a photograph, we are capturing a moment in time and transferring this moment in time to a photographic medium, even while changes are occurring to the object we focused our picture on. Similarly, a snapshot of an entire device allows us to capture data at any given moment in time and move it to either tape or another storage medium, while allowing data to be written to the device.
The basic function of the snapshot engine is to allow images to be created of data volumes (virtual drives) using minimal storage space. The snapshot initially uses no disk space. As new data is written to the source volume, the old data blocks are moved to a temporary snapshot storage area. By combining the snapshot storage with the source volume, the data can be recreated exactly as it appeared at the time the snapshot was taken. For added protection, a Snapshot Resource can also be mirrored. A trigger is an event that notifies the application when it is time to perform a snapshot of a virtual device. FalconStor's Replication, TimeMark/CDP, Snapshot Copy, and ZeroImpact Backup options all trigger snapshots.

Create a Snapshot Resource (Updated April 2012)


Each SAN resource can have one Snapshot Resource. The Snapshot Resource supports up to 64 TB and is shared by all of the FalconStor options that use Snapshot (Replication, TimeMark/CDP, Snapshot Copy, and ZeroImpact backup). Each snapshot initially uses no disk space. As new data is written to the source volume, the old data blocks are moved to the Snapshot Resource. Therefore, it is not necessary to have 100% of the size of the SAN resource reserved as a Snapshot Resource. The amount of space initially reserved for each Snapshot Resource is calculated as follows:
Size of SAN Resource                        Reserved for Snapshot Resource
Less than 500 MB                            100%
500 MB or more but less than 2 GB           50%
2 GB or more                                20%

Using the table above, if you create a 10 GB SAN resource, your initial Snapshot Resource will be 2 GB, but you can set the Snapshot Resource to expand automatically, as needed. If you create a SAN resource that is less than 500 MB, the amount of space reserved for the Snapshot Resource will be 100% of the virtual drive size. This is because a smaller-sized volume can overfill quickly, leaving no time for the auto-expansion to take effect. By reserving a Snapshot Resource equal to 100% of the SAN resource, the snapshot is able to free up enough space so normal write operations can continue.
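The initial reservation rule in the table lends itself to a one-line calculation. The sketch below simply encodes the documented tiers and reproduces the 10 GB example; it is illustrative only and the function name is hypothetical.

# Illustrative sketch of the documented initial Snapshot Resource reservation tiers.
def initial_snapshot_reservation_mb(san_resource_mb):
    if san_resource_mb < 500:
        pct = 100          # less than 500 MB
    elif san_resource_mb < 2048:
        pct = 50           # 500 MB or more but less than 2 GB
    else:
        pct = 20           # 2 GB or more
    return san_resource_mb * pct / 100

print(initial_snapshot_reservation_mb(10 * 1024))  # 2048.0 MB (2 GB) for a 10 GB resource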


If you do not create a Snapshot Resource for your SAN resource, when you configure Replication, TimeMark/CDP, Snapshot Copy, or backup, the Create Snapshot Resource wizard will launch first, allowing you to create it. You can create a Snapshot Resource for a single SAN resource or you can use the batch feature to create snapshot resources for multiple SAN resources:
1. For a single SAN resource, right-click on the resource and select Snapshot Resource --> Create.
For multiple SAN resources, right-click on the SAN Resources object and select Snapshot Resource --> Create.
2. Select the storage pool or physical device that should be used to create this Snapshot Resource.


3. Select how you want to create this Snapshot Resource.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically creates a Snapshot Resource using an available device:
Select different drive - The storage server will look for space on another hard disk.
Select drives from different adapter/channel - The storage server will look for space on another hard disk only if it is on a separate adapter/channel.
Select any available drive - The storage server will look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that looks like a single physical device to your storage server.


If you select Custom, you will see the following windows:

Select either an entirely unallocated or partially unallocated device.

Indicate how much space to allocate from this device.

Click Add More if you need to add another physical disk to this Snapshot Resource. You will go back to the physical device selection screen where you can select another disk.

4. Verify the physical devices you have selected.


5. Determine whether the storage server should expand your Snapshot Resource if it runs low and how it should be expanded.

Specify a threshold as a percentage of the space used. The threshold is used to determine if more space is needed for the Snapshot Resource. The default is 50%.
If you want your storage server to automatically expand the Snapshot Resource when space is running low, set the threshold level and make sure the option Automatically allocate more space for the Snapshot Resource is selected. The default expansion size is 20%. Make sure not to set this expansion increment too low; otherwise the snapshot resource may go offline if the snapshot expansion cannot be completed in time. However, if you have a very large snapshot resource, you can set this value to a small percentage.
Then, determine the amount of space to be allocated for each expansion. You can set this to be a specific size (in MB) or a percentage of the size of the Snapshot Resource. There is no limit to the number of times a Snapshot Resource can be expanded.
Once the low space threshold is triggered, the system will attempt to expand the resource by allocating additional space. The time required to accomplish this may be in milliseconds or even seconds, depending on how busy the system is. If expansion fails, depending on the snapshot policy set, you will experience either client I/O failure (once the snapshot resource is full) or deletion of earlier TimeMarks so that the Snapshot Resource does not run out of space. To prevent this from happening, we recommend that you allow enough time for expansion after the low space threshold is reached. We recommend that your safety margin be at least five seconds.

This means that from the time the low space threshold is reached, while data is being written to the drive at maximum throughput, it should take a minimum of five seconds to fill up the rest of the drive. Therefore, if the maximum throughput is 50 MB/s, the threshold should be set for when the available space is below 250 MB. If the throughput is lower, the allowance can be lowered accordingly (a small sketch of this calculation follows step 8 below). The Maximum size allowed for the Snapshot Resource can be set to limit automatic expansion. Specify 0 for no limit.
Note: If you do not select automatic expansion, old TimeMarks will be deleted to prevent the Snapshot Resource from running out of space.

6. Configure what your storage server should do under different error conditions.

The default is to Always maintain write operations. However, if you are setting the Snapshot Resource policy on a near-line mirror or replica, the default is to Preserve all TimeMarks.
If you select Always maintain write operations, the system will delete the earliest TimeMark once the threshold for your Snapshot Resource is reached and the resource cannot be expanded. If there is a failure due to the Snapshot Resource not being accessible or a memory error, you will lose all TimeMarks.
If you select Preserve all TimeMarks, the system will prevent any new writes to the primary device and its Snapshot Resource once an error is detected, regardless of whether the error is due to the Snapshot Resource running out of space, a Snapshot Resource disk error, or a memory error. As a result, clients can experience write errors. This option is useful when you want to preserve backups (for example, for a DiskSafe client or for a replica site), but would not be desirable for an in-band client.

If you select Preserve recent TimeMarks, the system will delete the earliest TimeMark once the threshold for your Snapshot Resource is reached and the resource cannot be expanded. If the errors were due to a Snapshot Resource disk error or memory error, all new writes to the primary device and its Snapshot Resource are blocked. The client will experience write error behavior similar to the Preserve all TimeMarks option.
If you select Enable MicroScan, the data block will be analyzed and only the changed data will be copied.
Refer to the Snapshot Resource policy behavior table for additional information regarding Snapshot Resource Policy settings and the associated behavior for error conditions.
Note: For Always maintain write operations and Preserve recent TimeMarks, the earliest TimeMark will be deleted for all resources in a group when any one member of the group cannot write to the Snapshot Resource due to lack of space. All TimeMarks may be deleted to free up necessary space in the Snapshot Resource.

7. Determine if you want to use Snapshot Notification.

Snapshot Notification works with the Snapshot Agents to initiate a snapshot request to a SAN client. When used, the system notifies the client to quiet activity on the disk before a snapshot is taken. Using Snapshot Notification guarantees that you will get a transactionally consistent image of your data.
8. Confirm that all information is correct and then click Finish. You will now see a new Snapshot tab for this SAN resource.
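As referenced in step 5, the low-space threshold can be sized from the maximum write throughput and the recommended safety margin. The sketch below reproduces that arithmetic (50 MB/s and five seconds give 250 MB); it is an aid for choosing a value, not part of the product, and the function name is hypothetical.

# Illustrative sketch of the safety-margin calculation from step 5: leave at least
# enough free space below the threshold to absorb `margin_s` seconds of writes at
# the maximum expected throughput while auto-expansion takes place.
def low_space_threshold_mb(max_throughput_mb_s, margin_s=5):
    return max_throughput_mb_s * margin_s

print(low_space_threshold_mb(50))   # 250 MB of free space at a 50 MB/s maximum throughput
print(low_space_threshold_mb(20))   # 100 MB when the maximum throughput is lower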


Snapshot Resource policy behavior


The following table summarizes the system behavior under boundary or error conditions based on different Snapshot Resource policies.
Condition 1: The Snapshot Resource threshold has been reached and the resource cannot be expanded, or the expansion policy is not configured.
Condition 2: Snapshot Resource failure, with the exception of a full disk (i.e., a recoverable storage error), or a system error (i.e., out of memory).

Preserve all TimeMarks
Condition 1 - When the Snapshot Resource is full, any new I/O to the primary resource that requires writing to the Snapshot Resource will be blocked.
Condition 2 - All new I/O to the primary resource that requires writing to the Snapshot Resource will fail. All TimeMarks will be kept.

Preserve recent TimeMarks
Condition 1 - When the Snapshot Resource reaches the threshold, the system starts deleting the earliest TimeMarks one-by-one (regardless of priority or whether they are in use) until the available space falls below the threshold or there are no more TimeMarks to be deleted. Even the last TimeMark can be deleted.
Condition 2 - All new I/O to the primary resource that requires writing to the Snapshot Resource will fail. All TimeMarks will be kept.

Always maintain write operations
Condition 1 - When the Snapshot Resource reaches the threshold, the system starts deleting the earliest TimeMarks one-by-one (regardless of priority or whether they are in use) until the available space falls below the threshold or there are no more TimeMarks to be deleted. Even the last TimeMark can be deleted.
Condition 2 - New I/O to the primary resource is allowed, but the Snapshot Resource will go offline and all TimeMarks will be lost.


Check status of a Snapshot Resource


You can see how much of your Snapshot Resource is currently being used and your expansion methods by checking the Snapshot tab for a SAN resource.

Because Snapshot Resources record block-level changes, not file-level changes, you may not see the Usage Percentage decrease when you delete files. This is because deleted files still exist on the disk.
The Usage Percentage bar colors indicate usage in relation to the threshold level:
Green - The available sectors are greater than 120% of the threshold (in sectors).
Blue - The available sectors are less than 120% of the threshold (in sectors) but still greater than the threshold (in sectors).
Red - The available sectors are less than the threshold (in sectors).
Note that Snapshot Resources will be marked offline if the physical resource from which they were created is disconnected from a single server in a failover set prior to failing over to the secondary server.
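The color coding above is a simple comparison against the threshold. The following sketch is purely illustrative of that rule, expressed in sectors as in the text; the function name is hypothetical.

# Illustrative sketch of the Usage Percentage bar color rule described above.
def usage_bar_color(available_sectors, threshold_sectors):
    if available_sectors > 1.2 * threshold_sectors:
        return "green"   # comfortably above the threshold
    if available_sectors > threshold_sectors:
        return "blue"    # within 120% of the threshold
    return "red"         # at or below the threshold

print(usage_bar_color(available_sectors=5000, threshold_sectors=2000))  # green
print(usage_bar_color(available_sectors=2300, threshold_sectors=2000))  # blue
print(usage_bar_color(available_sectors=1500, threshold_sectors=2000))  # red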


Protect your Snapshot Resources


If the physical disk that contains a snapshot resource fails, you will still be able to access your SAN resource, but the snapshot data already in the Snapshot Resource will become invalid. This means that you will not be able to roll back to a point-intime image of your data. However, you can protect your snapshot resources by using the Mirroring option. With Mirroring, each time data is written to the Snapshot Resource, the same data is also written to another disk which maintains an exact copy of the Snapshot Resource. If the primary Snapshot Resource disk fails, the storage server seamlessly swaps to the mirrored copy. To mirror a Snapshot Resource, right-click on the SAN resource and select Snapshot Resource --> Mirror --> Add. Refer to the Mirroring section for more information.

Options for Snapshot Resources


When you right-click on a logical resource that has a Snapshot Resource, you will see a Snapshot Resource menu with the following options:

Reinitialize: Allows you to refresh your Snapshot Resource and start over. You only need to reinitialize your Snapshot Resource if you are not mirroring it and it has gone offline but is now back online.

Expand: Allows you to manually expand the size of your Snapshot Resource.

Shrink: Allows you to reduce the size of your Snapshot Resource. This is useful if your Snapshot Resource does not need all of the space currently allocated to it. When you select the Shrink option, the system calculates, based on current usage, the maximum amount of space by which the Snapshot Resource can be shrunk. The amount of disk space saved by this operation is calculated from the last block where data is written; if there are gaps between blocks of data, the gaps are not included in the amount of space saved.

Note: Be sure to stop all I/O to the source resource before starting this operation. If I/O occurs during the shrinking process, the space used for the Snapshot Resource may increase and the operation may fail.

Delete: Allows you to delete the Snapshot Resource for this logical resource.

Properties: Allows you to change the Snapshot Resource automatic expansion policy and snapshot notification policies.

Mirror: Allows you to protect your Snapshot Resource by creating a mirror of it.

Reclaim: Allows you to free available space in the snapshot resource. Enable the reclamation policy to automatically free up space when a TimeMark Snapshot is deleted. Once the snapshot is deleted, space is reclaimed at the next scheduled reclamation.

Snapshot Resource shrink and reclamation policies


The reclamation policy allows you to save space by reclaiming previously used storage areas. In the regular course of running your business, TimeMarks are added and deleted. However, the amount of space used up by the deleted TimeMark does not automatically return to the available resource pool until the space is reclaimed. Space can be reclaimed automatically by setting a schedule or manually. For manual reclamation, you can select a TimeMark to be reclaimed one at a time. For scheduled reclamation, you can reclaim all the deleted TimeMarks on that device. Scheduling allows you to set the reclamation policy to automatically free up space when a TimeMark Snapshot is deleted.

Enable Reclamation Policy


The global reclamation policy is enabled by default and scheduled to run at 12:00 a.m. every seven days, automatically removing obsolete TimeView data and conserving space. You can also enable the reclamation option for an individual SAN resource. While setting the reclamation policy for automatic reclamation works to conserve space in most instances, there are some cases where you may need to manually reclaim space. For example, if you delete a TimeMark Snapshot other than the first or the last one, space will not automatically be available. In this case, you can manually reclaim the space by right-clicking on the SAN resource in the FalconStor Management Console and selecting Snapshot Resource --> Reclaim --> Start.

Highlight the TimeMark(s) to start the reclamation process and click OK.
Notes:

- If auto-expansion occurs on the Snapshot Resource while the reclamation process is in progress, the reclamation operation will not succeed. The auto-expansion will be skipped as well.
- Delete TimeMark and Rollback TimeMark operations are not supported during reclamation. You must stop reclamation before attempting either operation. You can stop a reclaim process by right-clicking on the SAN resource in the FalconStor Management Console and selecting Snapshot Resource --> Reclaim --> Stop.

To enable a reclamation policy for a particular SAN resource:
1. Right-click on the SAN resource in the FalconStor Management Console and select Snapshot Resource --> Reclaim --> Enable. The Enable Reclamation Policy screen displays.

2. Enter the following reclamation policy parameters:
Set the Reclaim threshold - Reclaim space from deleted TimeMarks if there is at least 2 MB of data to be reclaimed. The default is 2 MB; however, you can set your own threshold (in MB or as a percentage) for the minimum amount of space to be reclaimed per TimeMark.
Set the Reclaim schedule - Enter the date and time to start the reclamation schedule, along with the repeat interval.

Set the maximum processing time for reclamation - Specify the maximum time for the reclamation process. Once this threshold is reached, the reclamation process will stop. Specify 0 to set an unlimited processing time. It is recommended that you schedule lengthy reclamation processing during non-peak operation periods.
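As a rough illustration of how these three parameters interact, the sketch below walks a list of deleted TimeMarks, skips any whose reclaimable space is below the threshold, and stops once the maximum processing time elapses. The data structures and the function itself are hypothetical; the actual reclamation engine runs inside the storage server.

```python
import time

def run_reclamation(deleted_timemarks, threshold_mb=2, max_minutes=0):
    """Sketch of a reclamation pass (illustrative only).

    deleted_timemarks: list of dicts like {"id": ..., "reclaimable_mb": ...}
    threshold_mb: minimum space per TimeMark worth reclaiming (default 2 MB)
    max_minutes: stop after this many minutes; 0 means unlimited processing time
    """
    started = time.monotonic()
    reclaimed_mb = 0
    for tm in deleted_timemarks:
        if max_minutes and (time.monotonic() - started) > max_minutes * 60:
            break  # maximum processing time reached; remaining TimeMarks wait for the next run
        if tm["reclaimable_mb"] < threshold_mb:
            continue  # below the reclaim threshold; not worth processing
        reclaimed_mb += tm["reclaimable_mb"]  # placeholder for the actual space-reclaim work
    return reclaimed_mb
```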

Global reclamation policy and retention schedule


You can set and/or edit the global reclamation policy and the TimeMark retention schedule via server properties by right-clicking on the server and selecting Properties --> TimeMark Maintenance tab.

Note: If reclamation is in progress and failover occurs, the reclamation will fail gracefully. After failover, the global reclamation policy uses the setting from the primary server. For example, if the global reclamation schedule has been disabled on the primary server but is enabled on the secondary server (its failover partner), after failover the global reclamation schedule will not be triggered on the devices owned by the primary server.

Select a time to start the TimeMark Retention schedule, or accept the 10:00 pm daily default. This means the policy will run every day at 10:00 pm, deleting the TimeMarks from the previous day according to the per-device retention policy, if specified. The retention policy excludes TimeMarks created after 12:00 am on the current day. Once the reclamation policy has been configured, at-a-glance information regarding reclamation settings can be obtained from the FalconStor Management Console --> Snapshot Resource tab.

Disable Reclamation
To disable the reclamation policy, right-click on the SAN resource in the FalconStor Management Console and select Snapshot Resource --> Reclaim --> Disable.
Note: If the global reclamation schedule is disabled on the primary server but enabled on the secondary server (its failover partner), after failover no global reclamation schedule will be triggered on the devices owned by the primary server.

Check reclamation status


You can check the status of a reclaim process by highlighting the appropriate node under SAN Resources in the console.

Shrink Policy
Just as you can set your snapshot resource to automatically expand when it requires more space, you can also set it to "shrink" when it can reclaim unused space. Setting the shrink policy for your snapshot resources is another way to conserve space.

The shrink policy allows you to shrink the size of a Snapshot Resource after each successful scheduled reclamation. The shrink policy can be set for multiple SAN resources as well as for individual resources. In order to set a shrink policy, a global or individual reclamation policy must be enabled for the SAN resource.

The amount of shrinkage depends upon the minimum amount of disk space you set to trigger the shrink policy. When the shrink policy is triggered, the system calculates the maximum amount of space by which the snapshot resource can be shrunk. The amount of disk space saved by this operation is calculated from the last block where data is written. When the amount of space to be gained is equal to or greater than the value entered, shrinkage occurs. The snapshot resource can shrink down to the minimum size you set for the resource.

To set the shrink policy:
1. Right-click on SAN Resources and select Snapshot Resource --> Properties.
2. Click the Advanced button.

3. Set the minimum amount of disk space and the minimum snapshot resource size that will trigger the shrink policy. When the amount of space to be reclaimed is equal to or greater than the minimum disk space specified here, and the minimum Snapshot Resource size has been reached, the shrink policy is triggered. By default, the Enable this Snapshot Resource to Shrink option is disabled and the minimum Amount of Disk Space to Trigger Policy is set to 1 GB.
4. Set the minimum Snapshot Resource size. Enter the amount of space to keep; the Snapshot Resource will remain equal to or greater than this size. The minimum Snapshot Resource size is 1 GB by default.
Once the shrink policy has been enabled, at-a-glance information regarding shrink policy settings can be obtained from the FalconStor Management Console --> Snapshot Resource tab.
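The trigger conditions above can be thought of as two checks performed after each successful scheduled reclamation. The following is a minimal sketch under that assumption; the function and parameter names are illustrative, not part of the product:

```python
def plan_shrink(current_size_gb, reclaimable_gb, trigger_gb=1.0, min_size_gb=1.0):
    """Sketch of the shrink decision made after a successful scheduled reclamation.

    Returns the new Snapshot Resource size in GB, or None if the policy is not triggered.
    """
    # The policy triggers only when enough space can be gained...
    if reclaimable_gb < trigger_gb:
        return None
    # ...and the resource never shrinks below the configured minimum size.
    new_size_gb = max(current_size_gb - reclaimable_gb, min_size_gb)
    if new_size_gb >= current_size_gb:
        return None  # nothing would actually be gained
    return new_size_gb
```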

Shrink a snapshot resource


1. Highlight the Replication node in the navigation tree.
2. Right-click on the replica resource that needs shrinking and select Snapshot Resource --> Shrink. The shrink option will be unavailable if the TimeMark option is not enabled on the replica resource.
3. Enter the amount of space to be reclaimed from the current snapshot space and enter YES to confirm. By default, the maximum amount of space that can be reclaimed within the snapshot resource is calculated for you. If there are no TimeMarks on the replica resource, the size is automatically calculated as 50% of the actual snapshot resource space.

Use Snapshot to copy a SAN resource


FalconStor's Snapshot Copy option allows you to create a duplicate, independent point-in-time copy of a SAN resource without impacting application servers. The entire resource is copied to another drive, overwriting any data on the target drive. The source must have a Snapshot Resource in order to create a Snapshot Copy. If it does not have one, you will be prompted to create one. Refer to Create a Snapshot Resource (Updated April 2012) for more information.
Note: We recommend that if a Snapshot Copy is being taken of a large database without the use of a FalconStor Snapshot Agent, the database should reside on a journaling file system (JFS). Otherwise, under heavy I/O, there is a slight possibility that the file system could be changed, resulting in the need to run a file system check (fsck) in order to repair the file system.

1. Right-click on the SAN resource that you want to copy and select Copy.

2. Select how you want to create the target resource.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express automatically creates the target for you from available hard disk segments.
Select Existing lets you select an existing resource. There are several restrictions as to what you can select:
- The target must be the same type as the source.
- The target must be the same size as or larger than the source.
Note: All data on the target will be overwritten.

If you select Custom, you will see the following windows:


Only one disk can be selected at a time from this dialog. To create a target resource from multiple physical disks, you will need to add the disks one at a time. After selecting the parameters for the first disk, you will have the option to add more disks. You will need to do this if the first disk does not have enough space.

Indicate how much space to allocate from this disk.

Click Add More if you need to add another physical disk to this target resource. You will go back to the physical device selection screen where you can select another disk.

If you selected Select Existing in step 2, you will see the following window from which you can select an existing resource:

3. Enter a name for the target resource.

The name is not case sensitive.
4. Confirm that all information is correct and then click Finish to perform the Snapshot Copy.
Note: If a failover or recovery occurs when snapshot copy is taking place, the snapshot copy will fail. You must resubmit the snapshot copy afterwards.
5. Assign the snapshot copy to a client.


Note: If you attempt to assign a snapshot copy of a virtual disk multiple times to the same Windows SAN Client, the snapshot copy will fail to import. This is because the import of the foreign disk uses the same disk group name as that of the current computer's disk group. This is a problem with Dynamic Disks; Basic Disks will not have this issue.

Check Snapshot Copy status


You can see the current status of your Snapshot Copy by checking the General tab of either the virtual drive you copied from or the one you copied to.

Snapshot Copy events are also written to the server's Event Log, so you can check there for status information, as well as any errors.

Groups
The Group feature allows virtual drives and service-enabled drives to be grouped together. Groups can be created for different reasons: for CDP purposes, for snapshot synchronization, for organizational purposes, or for caching using the SafeCache option.

Snapshot synchronization builds on FalconStor's snapshot technology, which ensures point-in-time consistency for data recovery purposes. Snapshots for all resources in a group are taken at the same time whenever a snapshot is triggered. Working in conjunction with the database-aware Snapshot Agents, groups ensure transactional integrity for database or messaging files that reside on multiple disks.

You can create up to 64 groups. When you create a group, you can configure TimeMark/CDP, Backup, Replication, and SafeCache (and, indirectly, asynchronous mirroring) for the entire group. All members of the group are configured the same way.

Create a group
To create a group from the FalconStor Management Console: 1. Expand the Logical Resources object, right-click on Groups and select New.

Depending upon which options you enable, the subsequent screens will let you set group policies for those options. Refer to the appropriate section(s) (Replication, ZeroImpact Backup, TimeMarks and CDP, or SafeCache) for details on configuration. Note that you cannot enable CDP and SafeCache for the same group.
2. Indicate if you would like to add SAN resources to this group.

Refer to the following sections for limitations as to which SAN resources can/cannot join a group.

Groups with TimeMark/CDP enabled


The following notes affect groups configured for TimeMark/CDP:
- You cannot add a resource to a group configured for either TimeMark or CDP if the resource is already configured for CDP.
- You cannot add a resource to a group configured for CDP if the resource is already configured for SafeCache.
- CDP can only be enabled for an existing group if members of the group do not have CDP or SafeCache enabled.
- TimeMark can be enabled for an existing group if members of the group have TimeMark enabled.
- The group will have only one CDP journal. You will not see a CDP tab for the individual resources.
- If you want to remove a resource from a group with CDP enabled, you must first suspend the CDP journal for the entire group and wait until it finishes flushing.
- If a member of a group has its own TimeMark that needs to be updated, it must leave the group, make the TimeMark updates individually, and then rejoin the group.

Groups with SafeCache enabled


The following notes affect groups configured for SafeCache:
- You cannot add a resource to a group configured for SafeCache if the resource is already configured for SafeCache.
- SafeCache can only be enabled for an existing group if members of the group do not have CDP or SafeCache enabled.
- The group will have only one SafeCache resource. You will not see a SafeCache tab for the individual resources.
- If you want to remove a resource from a group with SafeCache enabled, you must first suspend SafeCache for the entire group.

Groups with replication enabled


The following notes affect groups configured for replication:
- When you create a group on the primary server, the target server gets a group also.
- When you add resources to a group configured for replication, you can select any resource that is already configured for replication on the target server or any resource that does not have replication configured at all. You cannot select a resource if it is configured for replication to a different server.
- If a watermark policy is used for replication, the configured retry delay value affects each group member individually rather than the group as a whole. For example, if replication starts for the group and a group member fails during the replication process, the retry delay value will take effect. In the meantime, if another resource in the group reaches its watermark, a group replication will be triggered for all group members and the retry delay becomes irrelevant.
- If you are using continuous replication, the group will have only one Continuous Replication Resource.
- If a group is configured for continuous replication, you cannot add a resource to the group if the resource has continuous replication enabled. Similarly, continuous replication can only be enabled for an existing group if members of the group do not have continuous replication enabled.
- If you add a resource to a group that is configured for continuous replication, the system switches to periodic replication mode until the next regularly-scheduled replication takes place.

Grant access to a group


By default, only the root user and IPStor administrators can manage SAN resources, groups, or clients. While IPStor users can add new groups, if you want a CDP/NSS user to manage an existing group, you must grant that user access. To do this:
1. Right-click on a group and select Access Control.
2. Select which user can manage this group. Each group can only be assigned to one IPStor user. This user will have rights to perform any function on this group, including assigning, joining, and configuring storage services.

Add resources to a group


Each group can comprise multiple SAN resources. Each resource can join only one group, and you cannot have both types of resources in the same group.
Note: There is a limit of 128 resources per group. If the group is enabled for replication, the recommended limit is 50.

There are several ways to add resources to a group. After you create a group, you will be prompted to add resources. At any time afterwards, you can:
1. Right-click on any group and select Join. You can also right-click on any SAN resource and select Group --> Join.
2. Select the type of resources that will join this group. If this is a group with existing members, you will see a list of members instead.

3. Determine if you want to use Express Mode. If you select Express Mode, you will be able to select multiple resources to join this group at one time. After you finish selecting resources, they will automatically be synchronized with the options and settings configured for the group. If you do not select Express Mode, you will need to select resources one-by-one. For each resource, you will be taken through the applicable Replication and/or Backup wizard(s) and you will have to manually configure each option. (TimeMark is always configured automatically.)
4. Select resources to join this group.

If you started the wizard from a SAN resource instead of from a group, you will see the following window and you will select a group, instead of a resource:

When you click Next, you will see the options that must be activated. You will be taken through the applicable Replication and/or Backup wizard(s) so you can manually configure each option. (TimeMark is always configured automatically.)
5. Confirm all information and click Finish to add the resource(s) to the group.
Each resource will now have a tab for each configured option, except CDP and SafeCache, which share a CDP journal or SafeCache resource as a group. By default, group members are not automatically assigned to clients. You must still remember to assign your group members to the appropriate client(s).

Remove resources from a group


Note that if you want to remove a resource from a group with CDP or SafeCache enabled, you must first suspend the CDP journal for the group and wait for it to finish flushing, or suspend SafeCache. To suspend the CDP journal, right-click on the group and select TimeMark/CDP --> CDP Journal --> Suspend. Afterwards, you will need to resume the CDP journal. To suspend SafeCache, right-click on the group and select SafeCache --> Suspend.
To remove resources from a group:
1. Right-click on any group and select Leave.

2. Select resources to leave this group.

For groups enabled with Backup or Replication, leaving the group does not disable Backup or Replication for the resource.

TimeMarks and CDP


Overview
FalconStor's TimeMark and CDP options protect your mission-critical data, enabling you to recover data from a previous point in time.
TimeMarks are point-in-time images of any SAN virtual drive. Using FalconStor's Snapshot technology, TimeMarks track multiple virtual images of the same disk, marked by "time". If you need to retrieve a deleted file or "undo" data corruption, you can recreate/restore the file instantly based on any of the existing TimeMarks.

While the TimeMark option allows you to track changes to specific points in time, with Continuous Data Protection (CDP) you can roll back data to any point-in-time.
TimeMark/CDP guards against soft errors: non-catastrophic data loss, including the accidental deletion of files and software/virus issues leading to data corruption. TimeMark/CDP protects where high availability configurations cannot, since in creating a redundant set of data, high availability configurations also duplicate soft errors by default. TimeMark/CDP protects data from your slip-ups, from the butterfingers of employees, from unforeseen glitches during backup, and from the malicious intent of viruses.

The TimeMark/CDP option also provides an "undo button" for data processing. Traditionally, when an administrator performed operations on a data set, a full backup was required before each dangerous step as a safety net. If the step resulted in undesirable effects, the administrator needed to restore the data set and start the process all over again. With FalconStor's TimeMark/CDP option, you can easily roll back (restore) a drive to its original state.

FalconStor's TimeView feature is an extension of the TimeMark/CDP option and allows you to mount a virtual drive as of a specific point in time. Deleted files can be retrieved from the drive, or the drive can be assigned to multiple application servers for concurrent, independent processing, all while the original data set is still actively being accessed/updated by the primary application server. This is useful for "what if" scenarios, such as testing a new payroll application on your actual, but not live, data.

Configure TimeMark properties by right-clicking on the TimeMark/CDP option and selecting Properties.

Enable TimeMark
You will need a Snapshot Resource for the logical resource you are going to configure. If you do not have one, you will create it through the wizard. Refer to Create a Snapshot Resource (Updated April 2012) for more information.
1. Right-click on a SAN resource, incoming replica resource, or a Group and select TimeMark/CDP --> Enable. For multiple SAN resources, right-click on the SAN Resources object and select TimeMark/CDP --> Enable. The Enable TimeMark/CDP Wizard launches.
2. Indicate if you want to enable CDP. Select the checkbox to enable CDP.

CDP enhances the benefits of using TimeMark by recording all changes made to data, allowing you to recover to any point in time.
Note: If you enable CDP on the replica, it is recommended that you perform replication synchronization. CDP journaling will not begin until the next successful replication. You can wait until the next scheduled replication synchronization or manually trigger synchronization. To manually trigger replication synchronization, right-click on the primary server and select Replication --> Synchronization.

3. (CDP only) Select the storage pool or physical device that should be used to create the CDP journal.

4. (CDP only) Select how you want to create the CDP journal.

The minimum size required for the journal is 1 GB, which is the default size.
Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically creates a CDP journal using an available device:
- Select different drive - look for space on another hard disk.
- Select drives from different adapter/channel - look for space on another hard disk only if it is on a separate adapter/channel.
- Select any available drive - look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that looks like a single physical device.
Note: The CDP Journal performance level is set to Moderate by default. You can modify this setting (to Aggressive) by right-clicking on the SAN resource and selecting TimeMark/CDP --> CDP Journal --> Performance.

5. Determine how often TimeMarks should be created.

The number selected in the Maximum number of TimeMarks allowed enforces the number of TimeMarks that can be created. The default number is based upon your license.

6. Select the Retention Policy.

You can select one of the following three retention policies:
- Keep the maximum number of TimeMarks. This policy keeps all TimeMarks created for the device.
- Keep the ___ most recent TimeMarks. This policy keeps the latest TimeMarks, up to the value specified. The maximum number can be up to the value set in the previous screen (Maximum number of TimeMarks allowed). The default value is 8.
- Keep TimeMarks based on the following rules: This policy gives you the flexibility to retain TimeMarks according to your scheduling needs. If nothing is checked, the default is to keep all TimeMarks for the past 1 day. Refer to TimeMark retention policy for more details on configuring each rule.
Note: Make sure the Maximum number of TimeMarks allowed is set to a large enough number to retain the desired number of TimeMarks. If the number set is too low, your earlier TimeMarks will be deleted before the retention policy can take effect.

7. Select the Trigger replication after TimeMark is taken checkbox if TimeMark and Replication are both enabled for this device/group in order to have replication triggered automatically after each TimeMark event.

If TimeMark is enabled for a group, replication must also be enabled at the group level. You should manually suspend the replication schedule when using this option to avoid a scheduling conflict.
8. Confirm that all information is correct and then click Finish to enable TimeMark/CDP.
You now have a TimeMark tab for this resource or group. If you enabled CDP, you also have a separate CDP tab. If you are using CDP, the TimeMarks will be points within the CDP journal.
In order for a TimeMark to be created, you must select the Create an initial TimeMark on... policy. Otherwise, you will have enabled TimeMark but not created any TimeMarks; you will then need to create them manually using TimeMark/CDP --> Create. If you are configuring TimeMark for an incoming replica resource, you cannot select the Create an initial TimeMark on... policy. Instead, a TimeMark is created after each scheduled replication job finishes.
Depending upon the version of your system, the maximum number of TimeMarks that can be maintained is 1000. The maximum does not include the snapshot images that are associated with TimeView resources. Once the maximum is reached, the earliest TimeMarks will be deleted depending upon priority. Low priority TimeMarks are deleted first, followed by Medium, High, and then Critical. When a TimeMark is deleted, journal data is merged together with a previous TimeMark (or a newer TimeMark, if no previous TimeMarks exist).
Note:

- If CDP is enabled, only 256 TimeMarks are supported. This is because CDP can only allow 256 snapshot markers, regardless of whether they are flushed or not.
- A temporary TimeMark does not count toward the maximum TimeMark count.
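The priority-based deletion order described above can be reasoned about as a sort by priority and then by age. The following is a minimal sketch, assuming hypothetical TimeMark records; it is not the server's actual implementation:

```python
PRIORITY_ORDER = {"Low": 0, "Medium": 1, "High": 2, "Critical": 3}

def next_timemark_to_delete(timemarks, max_allowed):
    """Return the TimeMark that would be removed once max_allowed is exceeded.

    timemarks: list of dicts like {"id": ..., "priority": "Low", "created": datetime}
    """
    if len(timemarks) <= max_allowed:
        return None  # under the limit; nothing needs to be deleted
    # Lowest priority goes first; within the same priority, the earliest TimeMark goes first.
    return min(timemarks, key=lambda tm: (PRIORITY_ORDER[tm["priority"]], tm["created"]))
```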

The first TimeMark that is created when CDP is used will have a Medium priority. Subsequent TimeMarks will have a Medium priority by default, but can be changed manually. Refer to Add a comment or change priority of an existing TimeMark for more information.
Note:

- A TimeView cannot be created from the CDP journal if the TimeMark already has TimeView data or is a VSS TimeMark.
- When a TimeView is created from the CDP journal, it is recommended that you change the default 32 MB setting to a larger size to accommodate the large amount of data.

Snapshot Notification works with FalconStor Snapshot Agents to initiate a snapshot request to a SAN client. When used, the system notifies the client to quiet activity on the disk before a snapshot is taken. Using snapshot notification guarantees that you will get a transactionally consistent image of your data.

This might take some time if the client is busy. You can speed up processing by skipping snapshot notification if you know that the client will not be updating data when a TimeMark is taken. Use the Trigger snapshot notification for every n scheduled TimeMark(s) option to select which TimeMarks should use snapshot notification.
Note: Once you have successfully enabled CDP on the replica, perform Replication synchronization.

Check TimeMark status


You can see a list of TimeMarks for this virtual drive, along with your TimeMark policies, by clicking the TimeMark tab.

TimeMarks displayed in orange are pending, meaning there is unflushed data in the CDP journal. Unflushed TimeMarks cannot be selected for rollback or TimeView. To re-order the list of TimeMarks, click on a column heading to sort the list.

The Quiescent column indicates whether or not snapshot notification occurred when the TimeMark was created. When a device is assigned to a client, the initial value is set to No. A Yes in the Quiescent column indicates there is an available agent on the client to handle the snapshot notification, and the snapshot notification was successful. If a device is assigned to multiple clients, such as nodes of a cluster, the Quiescent column displays Yes only if the snapshot notification is successful on all clients; if there is a failure on one of the clients, the column displays No. However, in the case of a VSS cluster, the Quiescent column displays Yes with VSS when the entire VSS process has successfully completed on the active node and the snapshot has been created.

If you are looking at this tab for a replica resource, the status is carried over from the primary resource. For example, if the TimeMark created on the primary virtual device used snapshot notification, Quiescent will be set to Yes for the replica.

The TimeView Data column indicates whether TimeView data or a TimeView resource exists on the TimeMark. The Status column indicates the TimeMark state.
Note: A vdev expanded TimeMark is created automatically when a source device with CDP is expanded.
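For a device assigned to multiple clients, the Quiescent value described above behaves like a logical AND across all of the clients. A tiny sketch under that assumption (the inputs and function are hypothetical):

```python
def quiescent_status(notification_results):
    """notification_results: per-client booleans, True if snapshot notification succeeded."""
    # The column shows Yes only when every assigned client quiesced successfully.
    return "Yes" if notification_results and all(notification_results.values()) else "No"

# Example: a two-node cluster where one node failed to quiesce.
print(quiescent_status({"node1": True, "node2": False}))  # -> "No"
```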

Right-click on the virtual drive and select Refresh to update the TimeMark Used Size and other information on this tab. To see how much space TimeMark is using, check the Snapshot Resource tab.

Check CDP journal status


You can see the current size and status of your CDP journal by checking the CDP tab.

Protect your CDP journal


This section applies only to CDP. You can protect your CDP journal by using FalconStor's Mirroring option. With Mirroring, each time data is written to the journal, the same data is also written to another disk that maintains an exact copy of the journal. If the primary journal disk fails, CDP seamlessly swaps to the mirrored copy. To mirror a journal, right-click on the SAN resource and select TimeMark/CDP --> CDP Journal --> Mirror --> Add.

Add a tag to the CDP journal


You can manually add a tag to the CDP journal. The tag will be used to notate the journal when the next I/O occurs. Adding a tag with a meaningful comment is useful for marking special situations, such as system maintenance or software upgrades. With these tags, it is easy to find the point just prior to when the system maintenance or software upgrade began, making rollback easy and accurate. 1. Highlight a SAN resource and select TimeMark/CDP --> CDP Journal --> Add tag.

2. Type in a tag and click OK.

Add a comment or change priority of an existing TimeMark


You can add a comment to an existing TimeMark to make it easy to identify later. For example, you might add a known good recovery point, such as an application checkpoint to identify a TimeMark for easy recovery. You can also change the priority of a TimeMark. Priority eases long term management of TimeMarks by allowing you to designate importance, aiding in the preservation of critical point-in-time images.

Priority affects how TimeMarks will be deleted once the maximum number of TimeMarks to keep has been reached. Low priority TimeMarks are deleted first, followed by Medium, High, and then Critical.
Note: Groups with TimeMark/CDP enabled: If a member of a group has its own TimeMark that needs to be updated, it must leave the group, make the TimeMark updates individually, and then rejoin the group.

1. Right-click on the TimeMarked SAN resource that you want to update and select TimeMark/CDP --> Update.

2. Click in the Comment or Priority field to make/change entries. 3. Click Update when done.

Manually create a TimeMark


1. To create a TimeMark that is not scheduled, select TimeMark/CDP --> Create.

2. If desired, add a comment for the TimeMark that will make it easily identifiable later if you need to locate it. 3. Set the priority for this TimeMark.

Once the maximum number of TimeMarks allowed has been reached, the earliest TimeMarks will be deleted depending upon priority. Low priority TimeMarks are deleted first, followed by Medium, High, and then Critical.
4. Indicate if you want to use Snapshot Notification for this TimeMark.
Snapshot Notification works with FalconStor Snapshot Agents to initiate a snapshot request to a SAN client. When used, the system notifies the client to quiet activity on the disk before a snapshot is taken. Using snapshot notification guarantees that you will get a transactionally consistent image of your data.

This might take some time if the client is busy. You can speed up processing by skipping snapshot notification if you know that the client will not be updating data when this TimeMark is taken. The use of this option overrides the Snapshot Notification setting in the snapshot policy.

Copy a TimeMark
The Copy feature works similarly to FalconStor's Snapshot Copy option. It allows you to take a TimeMark image of a drive (for example, how your drive looked at 9:00 this morning) and copy the entire drive image to another virtual drive or SAN resource. The virtual drive or SAN resource can then be assigned to clients for use and configured for FalconStor storage services.
1. Right-click on the TimeMarked SAN resource that you want to copy and select TimeMark/CDP --> Copy.
Note: Do not initiate a TimeMark Copy while replication is in progress. Doing so will result in the failure of both processes.

2. Select the TimeMark image that you want to copy.

To copy the TimeMark and TimeView data, select the Copy the TimeMark and TimeView data checkbox at the bottom left of the screen.

This option is only available if there is TimeView data available. It is not available if the TimeView data is in use/mounted or if there is no TimeView. In that case, you will only be able to create a copy of the disk image at the time of the timestamp (without new data that has been written to the TimeView). To capture the new data in this case, see the example below.

For example, if you have assigned a TimeView to a disaster recovery (DR) host and have started writing new data to the TimeView, when you use TimeMark Copy you will have a copy of the point in time without the "new" data that was written to the TimeView. In order to create a full disk copy that includes the data in the TimeView, you will need to unassign the TimeView from the DR host, delete the TimeView, and select the keep the TimeView data persistent option. Afterwards, TimeMark Copy will include the new data. You can recreate the TimeView again with the new data and assign it back to the DR host. To revert back to the original TimeMark, you must delete the TimeView again, but do not select the keep the TimeView data persistent option. This will remove the new data from the TimeMark.

3. Select how you want to create the target resource.
Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express automatically creates the target for you from available hard disk segments. You will only have to select the storage pool or physical device that should be used to create the copy.
Select Existing lets you select an existing resource. There are several restrictions as to what you can select:
- The target must be the same type as the source.
- The target must be the same size as, or larger than, the source.
- The target cannot have any Clients assigned or attached.
Note: All data on the target will be overwritten.

4. Enter a name for the target resource.
5. Confirm that all information is correct and then click Finish to perform the TimeMark Copy.
You can see the current status of your TimeMark Copy by checking the General tab of either virtual drive. You can also check the server's Event Log for status information.

Recover data using the TimeView feature


TimeView allows you to mount a virtual drive as of a specific point-in-time, based on your existing TimeMarks or your CDP journal.

Use TimeView if you need to restore individual files from a drive but you do not want to roll back the entire drive to a previous point in time. Simply use TimeView to mount the virtual drive and then copy the files you need back to your original virtual drive.

TimeView also enables you to perform "what if" scenarios, such as testing a new payroll application on your actual, but not live, data. After mounting the virtual drive, it can be assigned to an application server for independent processing without affecting the original data set. A TimeView cannot be configured for any of FalconStor's storage services.

Why should you use TimeView instead of Copy? Unlike Copy, which creates a new virtual drive and requires disk space equal to or larger than the original disk, a TimeView requires very little disk space to mount. It is also quicker to create a TimeView than to copy data to a new virtual drive.
Note: Clients may not be able to access TimeViews during failover.

1. Highlight a SAN resource and select TimeMark/CDP --> TimeView.

The Create TimeView Wizard displays.

Move the slider to select any point in time. You can also type in the date and time down to the millisecond and microsecond.

Zoom In to see greater detail for the selected time period.

Click to select a CDP journal tag that was manually added.

If this resource has CDP enabled, the top section contains a graph with marks that represent TimeMarks. The graph is a relative reflection of the data changing between TimeMarks within the available journal range. The vertical y axis represents data usage per TimeMark; the height of each mark represents the Used Size of each TimeMark. The horizontal x axis represents time. Each mark on the graph indicates a single TimeMark. You will not see TimeMarks that have no data.

Because the graph is a relative reflection of data, and the differences in data usage can be very large, the proportional height of each TimeMark might not be very obvious. For example, if you have one TimeMark with a size of 500 MB followed by several much smaller TimeMarks, the 500 MB TimeMark will be much more visible. Similarly, if the maximum number of TimeMarks has been reached and older TimeMarks have been deleted to make way for newer ones, journal data is merged together with a previous TimeMark (or a newer TimeMark, if no previous exist). Therefore, it is possible that you will see one large TimeMark containing all of the merged data. Also, since the length of the x axis can reflect a range as small as one hour or as large as 30 days, the location of an actual data point is approximate. Zooming in and using the Search button will allow you to get a more accurate location of a particular data point.

If CDP is enabled, you can use the visual slider to create a TimeView from any point in the CDP journal or you can create a TimeView from a scheduled TimeMark.

You can also click the Select Tag button to select a CDP journal tag that was manually added or was automatically added by CDP after a rollback occurred. Note that you will only see the tags for which there was subsequent I/O. If CDP is not enabled, you will only be able to create a TimeView from a scheduled TimeMark.
2. To create a TimeView from a scheduled TimeMark, select Create TimeView from TimeMark Snapshots, highlight the correct TimeMark, and click OK. If this is a replica server, the timestamp of a TimeMark is the timestamp of the source (not the replica's local time).
3. To create a TimeView from the CDP journal, use the slider or type in an approximate time. For example, if you are trying to find a deleted file, select a time prior to when the file was deleted. If this was an active file, aim for a time just prior to when the file was deleted so that you can recover the most up-to-date version. If you are positive that the time you selected is correct, you can click OK to create a TimeView. If you are unsure of the exact time, you can zoom into an approximate time period to see greater detail, such as seconds, milliseconds, and even microseconds.
4. If you need to see greater detail, click Zoom In.

You can see the I/O that occurred during this five minute time frame displayed in seconds.
If you zoomed in and don't see what you are looking for, you can click the Scroll button. It will move forwards or backwards by five minutes within the period of this TimeMark. You can also click the Search button to locate data or a period with limited or no I/O.

At any point, if you know what time you want to select, you can click OK to return to the main dialog so that you can click OK to create a TimeView. Otherwise, you can zoom in further to see greater detail, such as milliseconds and microseconds. You can then use the slider to select a time just before the file was deleted.

It is best to select a quiet time without I/O to get the most stable version of the file.

5. After you have selected the correct point in time, click OK to return to the main dialog and then click OK to create a TimeView.
6. Select the physical resource for SAN TimeView Resource.

7. Select a method for TimeView Creation.

Notes:

- The TimeView only uses physical space when I/O is written to the TimeView device. New write I/O may trigger expansion to allocate more physical space for the TimeView when no more space is available. Read I/O does not require additional physical space.
- The maximum size to which a TimeView device can be allocated is 5% more than the primary device. For example: maximum TimeView size = 1.05 x primary device size. The allocated size is checked against both policy and user triggers to expand when necessary.
- The formula for allocating the initial size of the physical space for the TimeView is as follows:
  - If the primary device size is less than 5 GB, the initial TimeView size = primary device size x 1.05 (the maximum TimeView size).
  - If the primary device size is greater than 5 GB, the initial TimeView size = 5 GB.
  - If creating a TimeView from a VSS TimeMark, the initial TimeView size = 32 MB (as shown in the screen above).
- For best performance, it is recommended that you do not lower the default initial size of the TimeView if you intend to write to the TimeView device (i.e. when using HyperTrac). Once the TimeView is deleted, the space becomes available. TimeViews cannot be shrunk once the space is allocated.
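The sizing rules in the notes above reduce to a small formula. The sketch below restates them in Python purely for illustration (values in GB, with the VSS case handled separately; the function names are not part of the product):

```python
def initial_timeview_size_gb(primary_size_gb, from_vss_timemark=False):
    """Initial physical allocation for a TimeView, per the sizing rules above (sketch)."""
    if from_vss_timemark:
        return 32 / 1024               # 32 MB when created from a VSS TimeMark
    if primary_size_gb < 5:
        return primary_size_gb * 1.05  # small devices start at their maximum TimeView size
    return 5.0                         # larger devices start at 5 GB

def max_timeview_size_gb(primary_size_gb):
    return primary_size_gb * 1.05      # a TimeView can grow to 5% more than the primary device
```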

8. Enter a name for the TimeView and click OK to finish. The Set TimeView Storage Policy screen displays.

9. Verify and create the TimeView resource.

10. Assign the TimeView to a client. The client can now recover any files needed.

Remap a TimeView
With TimeViews, you can mount a virtual drive as of a specific point-in-time, based on your existing TimeMarks or your CDP journal. If you are finished with one TimeView but need to create another for the same virtual device, you can remap the TimeView to another point-in-time. When remapping, a new TimeView is created and all of the client connections are retained. To remap a TimeView, follow the steps below:
Note: It is recommended that you disable the TimeView from the client (via the Device Manager on Windows machines) before remapping it.

1. Right-click on an existing TimeView and select Remap. You must have at least one additional TimeMark available. 2. Select a TimeMark or a point in the CDP journal. 3. Enter a name for the new TimeView and click Finish.

Delete a TimeView
Deleting a TimeView also involves deleting the SAN resource. To delete a TimeView: 1. Right-click on the TimeView and select Delete.

2. Select whether you want to Keep the TimeView data to be persistent when recreated with the same TimeMark. This option allows you to save the TimeView data on the TimeMark and restore the data when it is recreated with the same TimeMark.
3. Type Yes in the box and click OK to confirm the deletion.
Remove TimeView Data


Obsolete TimeView data is automatically removed for all devices after a successful scheduled reclamation. Use the Remove TimeView Data option to manually delete obsolete data. You may want to use this option after you have deleted a TimeMark and you want to clean up TimeView data. This option can be triggered in batch mode, by right-clicking on the Logical Resources node in the FalconStor Management Console and selecting Remove TimeView Data. To remove TimeView data on an individual device, right-click on the SAN resource and select Remove TimeView Data.

The first option allows you to remove all TimeView data from the selected virtual device(s). The second option allows you to select specific TimeView data for deletion.

Set TimeView Policy


TimeView uses its own storage, separate from the snapshot resource. The TimeView Storage Policy can be set during TimeView creation. After a TimeView is created, the storage (auto-expansion) policy can be modified from the properties option. To set the TimeView storage policy: 1. Right-click on a TimeView device and select Properties.

2. Select the storage policy to be used when space starts to run low.
- Specify the threshold as a percentage of the space used (1 - 99%). The default is the same value as the snapshot resource threshold. Once the specified threshold is met, automatic expansion is triggered.
- Automatically allocate more space for the TimeView device. Check this option to allow the system to allocate additional space (according to the following settings) once the threshold is met.
- Enter the percentage to Increment space by. The default is the same value as the snapshot resource threshold.
- Enter the maximum size (in MB) allowed for the TimeView device. This is the maximum size limit used by automatic expansion. The default is 0, which means the maximum TimeView size.
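A rough sketch of how these settings interact when write I/O consumes the allocated space is shown below. The names, units, and function are illustrative only; the real policy is enforced by the storage server.

```python
def expand_timeview(allocated_mb, used_mb, threshold_pct, increment_pct, max_mb):
    """Return the new allocation if automatic expansion would trigger, else the current one.

    max_mb of 0 means no explicit limit (bounded only by the maximum TimeView size).
    """
    if used_mb < allocated_mb * threshold_pct / 100:
        return allocated_mb                           # still below the threshold; no expansion
    new_allocation = allocated_mb * (1 + increment_pct / 100)
    if max_mb:
        new_allocation = min(new_allocation, max_mb)  # never exceed the configured maximum
    return new_allocation
```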

Rollback or roll forward a drive


Rollback restores your drive to a specific point in time, based on your existing TimeMarks, TimeViews, or your CDP journal. After rollback, your drive will look exactly like it did at that point in time.

After rolling a drive back, TimeMarks made after that point in time will be deleted, but all of the CDP journal data will still be available if CDP is enabled. Therefore it is possible to perform another rollback and select a journal date ahead of the previous time, essentially rolling forward.

Group rollback allows you to roll back up to 32 (the default) disks to a TimeMark or CDP data point. To perform a group rollback, right-click on the group and select Rollback. TimeMarks that are common to all devices in a group will display in the wizard.

1. Unassign the Client(s) from the virtual drive before rollback. For non-Windows Clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin.
Note: To avoid the need to reboot a Windows 2003 client, unassign the SAN resource from the client now and then reassign it just before re-attaching your client using the FalconStor Management Console.

2. Right-click on the virtual drive and select TimeMark/CDP --> Rollback.

To enable preservation of all timestamps, check Preserve all TimeMarks with more recent timestamps.

Do not initiate a TimeMark rollback to a raw device while data is currently being written to the raw device. The rollback will fail because the device will fail to open.

If you have already created a TimeView from the CDP journal and want to roll back your virtual device to that point in time, right-click on the TimeView and select Rollback to.

3. Select a specific point in time or select the TimeMark to which you want to roll back. If CDP is enabled and you have previously rolled back this drive, you can select a future journal date. If you selected a TimeView in the previous step, you will not have to select a point in time or a TimeMark.
4. Confirm that you want to continue. A TimeMark is taken automatically at the point of the rollback and a tag is added to the journal. The TimeMark will have the description !!XX-- POST CDP ROLLBACK --XX!! This way, if you later need to create a TimeView, it will contain data from the new TimeMark forward to the TimeView time. This means you will see the disk as it looked immediately after rollback, plus any data written to the disk after the rollback occurred, until the time of the TimeView. It is recommended that you remove the POST CDP ROLLBACK TimeMark after a successful rollback because it counts towards the TimeMark count for that member.
5. When done, re-attach your Client(s).
Note: If DynaPath is running on a Windows client, reboot the machine after rollback.

Change your TimeMark/CDP policies


You can change your TimeMark schedule, and enable/disable CDP on single devices.
Note: You cannot enable/disable CDP by updating TimeMark properties in batch mode.

To change a policy: 1. Right-click on the virtual drive and select TimeMark/CDP --> TimeMark --> Properties.

2. Make the appropriate changes and click OK.


Note: If you uncheck the Enable Continuous Data Protection box, this will disable CDP and will delete the CDP journal. It will not delete TimeMarks. If you want to disable TimeMark and CDP, refer to the Disable TimeMark and CDP section below.

In addition, you can update TimeMark properties in batch mode. To update multiple SAN resources:
1. Right-click on the SAN resources object and select Properties. The Update TimeMark Properties screen displays.
2. Select all of the resources you want to update and click Next.
3. Make the desired policy changes and click OK.

TimeMark retention policy


When defining the rule-based policy, you can specify the offset of the moment to keep, i.e. "Use the TimeMark closest to ___". For example, for daily TimeMarks, you are asked to specify which hour of the day to use for the TimeMark. For weekly TimeMarks, you are asked which day of the week to keep. If you set an offset for which there is no TimeMark, the closest one to that time is taken. The default offset values correspond to typical usage, based on the fact that the older the information, the less valuable it is. For instance, you can take TimeMarks every 20 minutes, but keep only those snapshots taken at minute 00 of each hour for the last 24 hours.

This feature allows you to save a pre-determined number of TimeMarks and delete the rest. The TimeMarks that are preserved are the result of the pruning process. This method allows you to keep only meaningful snapshots after each retention run. The retention policy is scheduled to start once a day by default. To modify this setting, refer to the Global reclamation policy and retention schedule section.

To set the retention policy, right-click on the SAN resource and select TimeMark/CDP --> Properties. Then select the TimeMark Retention tab. A rule-based policy can have the following combinations of rules. Each rule is described below:
Keep all TimeMarks for the past [value] [unit]
This rule is required; it retains all TimeMarks within the specified period. The range is 1-168 hours or 1-365 days. The default value is 1 day.

Hourly from the past [value] Days / Use the TimeMark closest to [minute] as the hourly TimeMark
Set these rules to retain the hourly TimeMark that is closest to the selected minute for the past specified number of days. The ranges are 1-365 days and 0-59 minutes. The default values are 1 day and minute 0.
Daily from the past [value] Days / Use the TimeMark closest to [hour] as the daily TimeMark
Set this rule to retain one daily TimeMark that is closest to the selected hour of the day for the past specified number of days. The ranges are 1-730 days and 0-23 hours. The default values are 1 day and the 23rd hour.

Weekly from the past [value] Weeks / Use the TimeMark closest to [day of the week] as the weekly TimeMark
Set this rule to retain one weekly TimeMark that is closest to the selected day of the week for the past specified number of weeks. The ranges are 1-110 weeks and Monday-Sunday. The default values are 1 week and Friday.

Monthly from the past [value] Months / Use the TimeMark closest to [day of the month] as the monthly TimeMark
Set this rule to retain one monthly TimeMark that is closest to the selected day of the month for the past specified number of months. The ranges are 1-120 months and days 1-31. The default values are 1 month and the 31st day (the last day of the month).
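To make the interaction of these rules concrete, here is a simplified sketch of the pruning pass. It keeps everything inside the "all" window, then keeps the TimeMark closest to the requested offset within each hourly bucket; the daily, weekly, and monthly rules work the same way and are omitted for brevity. It is illustrative only and ignores critical TimeMarks and mounted TimeViews, which the product excludes from pruning.

```python
from datetime import datetime, timedelta

def prune(timemarks, now, keep_all_days=1, hourly_days=4, hourly_minute=0):
    """Return the subset of TimeMarks to keep under a simplified rule-based policy.

    timemarks: list of datetime objects, one per TimeMark.
    """
    keep = set()
    # Rule 1: keep everything inside the "all" window.
    keep.update(tm for tm in timemarks if tm >= now - timedelta(days=keep_all_days))
    # Rule 2: for each hour in the hourly window, keep the TimeMark closest to the offset minute.
    start = now - timedelta(days=hourly_days)
    buckets = {}
    for tm in timemarks:
        if start <= tm < now:
            hour = tm.replace(minute=0, second=0, microsecond=0)
            target = hour.replace(minute=hourly_minute)
            best = buckets.get(hour)
            if best is None or abs(tm - target) < abs(best - target):
                buckets[hour] = tm
    keep.update(buckets.values())
    return sorted(keep)
```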

Retention policy example

The following example illustrates how the retention policy values are interpreted. TimeMarks are created every 15 minutes, starting at midnight on November 1, 2010 (Monday). Manual, critical TimeMarks were created at 12:02 on November 22, 2010 and at 15:37 on December 24, 2010. The rule-based retention policy is deployed on November 1, 2010 as follows:
- Keep all TimeMarks for the past 1 day.
- Keep hourly TimeMarks for 4 days, closest to minute 0.
- Keep daily TimeMarks for 7 days, closest to the 23rd hour.
- Keep weekly TimeMarks for 4 weeks, closest to Friday.
- Keep monthly TimeMarks for 4 months, closest to the 31st day.
None of the TimeMarks are currently mounted.

After the retention policy runs at 1:00 AM on December 30, 2010 (Thursday), the results are:
- Keep the two (2) critical TimeMarks; these are excluded from deletion by the critical rule.
- Keep all TimeMarks on December 30, 2010; these are excluded by the starting point rule.
- Keep all TimeMarks on December 29, 2010; these are protected by the "all" retention rule.
- Keep TimeMarks at 00:00, 01:00, 02:00 ... 23:00 on December 26, 27, and 28; these are protected by the "hourly" retention rule. (The TimeMarks for December 29, 2010 are already protected by the "all" retention rule.)

- Keep TimeMarks at 23:00 on December 23, 24, and 25; these are protected by the "daily" retention rule. (The TimeMarks on December 26, 2010 through December 29, 2010 are already protected by the "hourly" retention rule.)
- Keep TimeMarks at 23:00 on December 3, 10, and 17; these are protected by the "weekly" retention rule. (The TimeMark on December 24, 2010 is already protected by the "daily" retention rule.)
- Keep the TimeMark at 23:00 on November 30 and the TimeMark at 00:00 on November 1; these are protected by the "monthly" retention rule. (Since no TimeMarks were created in October, the November 1 TimeMark is protected because it is the closest to October 31.)

Assuming processing completes within 40 minutes (by 1:40 AM on December 30, 2010), there should be 184 snapshots remaining:
- 2 critical TimeMarks, one each on November 22, 2010 and December 24, 2010 (critical)
- 6 TimeMarks today, December 30, 2010 (excluded)
- 96 TimeMarks on December 29, 2010 (all)
- 24 TimeMarks each on December 26, 27, and 28 (hourly)
- 1 TimeMark each on December 23, 24, and 25 (daily)
- 1 TimeMark each on December 3, 10, and 17 (weekly)
- 1 TimeMark each on November 1, 2010 and November 30, 2010 (monthly)

Delete TimeViews in batch mode


You can delete multiple TimeViews for a device in a single batch operation. To do this, select the SAN device whose TimeViews you want to delete.

Suspend/resume CDP
[For CDP only] You can suspend/resume CDP for an individual resource. If the resource is in a group, you can suspend/resume CDP at the group level. Suspending CDP does not delete the CDP journal and it does not delete any TimeMarks. When CDP is resumed, data resumes going to the journal. To suspend/resume CDP, right-click on the resource or group and select TimeMark/ CDP --> CDP Journal --> Suspend (or Resume).

Delete TimeMarks
The Delete option lets you delete one or more TimeMark images for a virtual drive. Depending upon which TimeMark(s) you delete, this may or may not free up space in your Snapshot Resource. A general rule is that you will only free up Snapshot Resource space if the earliest TimeMark is deleted. If other TimeMarks are deleted, you will need to run reclamation to free up space. Refer to Snapshot Resource shrink and reclamation policies.
1. Right-click on the virtual drive and select TimeMark/CDP --> Delete.
2. Highlight one or more TimeMarks and click Delete.
3. Type yes to confirm and click OK to finish.

Disable TimeMark and CDP


If you ever need to disable TimeMark and CDP, you can select TimeMark/CDP --> Disable. In addition to disabling TimeMark and CDP, this will delete the CDP journal and all existing TimeMarks. For multiple SAN resources, right-click on the SAN Resources object and select TimeMark/CDP --> Disable. If you only want to disable CDP and delete the CDP resource, refer to the Change your TimeMark/CDP policies section.

Replication and TimeMark/CDP


The timestamp of a TimeMark on a replica is the timestamp of the source. You cannot manually create any TimeMarks on the replica, even if you enable TimeMark/CDP on the replica. If you are using TimeMark with CDP, you must use Continuous Mode replication (not Delta Mode).

NIC Port Bonding


NIC Port Bonding is a load-balancing/path-redundancy feature available for Linux. This feature enables you to configure your storage server to load-balance network traffic across two or more network connections, creating redundant data paths throughout the network. NIC Port Bonding offers a new level of data accessibility and improved performance for storage systems by eliminating the point of failure represented by a single input/output (I/O) path between servers and storage systems, and permits I/O to be distributed across multiple paths. NIC Port Bonding allows you to group up to eight network interfaces into a single group. NIC Port Bonding supports the following scenarios:
- 2 port bond
- 4 port bond
- 8 port bond
- Dual 2 port bond
- Dual 4 port bond

You can think of this group as a single virtual adapter that is actually made up of multiple physical adapters. To the system and the network, it appears as a single interface with one IP address, but throughput is increased by a factor equal to the number of adapters in the group. NIC Port Bonding also detects faults anywhere from the NIC out into the network and provides dynamic failover in the event of a failure. You can define a virtual network interface (NIC) which sends and receives traffic to/from multiple physical NICs. All interfaces that are part of a bond have SLAVE and MASTER definitions.

Enable NIC Port Bonding


To enable NIC Port Bonding with fewer than four NICs:
1. Right-click on the server.
2. Select System Maintenance --> Bond NIC Port. The NIC Port Bonding screen displays.
3. Enter the IP Address and Netmask for the bonded interfaces, eth0 and eth1, and click OK. A bonding interface bond0 with slaves eth0 and eth1 is created.

To enable NIC Port Bonding with four or more NICs:
1. Right-click on the server.
2. Select System Maintenance --> Bond NIC Port. The NIC Port Bonding screen displays.
3. Select the number of bonded teams you are setting up. You can bond the ethernet interfaces in one group, in two groups, or you can bond just the first two interfaces into one group. For one team containing four to eight NICs, enter the IP Address and Netmask of the master and click OK.

For two teams, enter the IP Address and Netmask of each Master and click OK.

For one team containing only eth0 and eth1, enter the IP Address and Netmask of the master and click OK.

NIC Port Bonding can be configured to use round robin load-balancing, so the first frame is sent on eth0, the second on eth1, the third on eth0 and so on. The bonding choices are shown below:

Bonding choices:
No Bonding
Eth0/Eth1 - (1 group), 2 port
Eth0/Eth1/Eth2/Eth3 - (1 group), 4 port
Eth0/Eth1/Eth2/Eth3/Eth4/Eth5/Eth6/Eth7 - (1 group), 8 port
Eth0/Eth2, Eth1/Eth3 - (2 groups), 4 port
Eth0/Eth2/Eth4/Eth6, Eth1/Eth3/Eth5/Eth7 - (2 groups), 8 port

Mode=0 (Sequential) - transmits data in round-robin fashion; mode=0 is the default mode option. There is no switch configuration involved.
Mode=4 (Link Aggregation) - transmits data in a more dedicated, tuned mode where the NIC ports work together with switches. This mode requires an LACP (802.3ad) capable switch.
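For a conceptual picture of what mode=0 does, the short Python sketch below cycles outgoing frames across the slave interfaces of a bond. It is purely illustrative and does not represent the Linux bonding driver or any FalconStor code; the names are hypothetical.

    from itertools import cycle

    def round_robin_dispatch(frames, slaves):
        """Illustrative mode=0 behavior: assign each outgoing frame to the next
        slave NIC in turn, so load is spread evenly across the bond."""
        assignment = []
        next_slave = cycle(slaves)
        for frame in frames:
            assignment.append((frame, next(next_slave)))
        return assignment

    # Frames 0 and 2 go out on eth0, frames 1 and 3 on eth1, and so on.
    print(round_robin_dispatch(range(4), ["eth0", "eth1"]))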

Remove NIC Port Bonding


To remove NIC Port Bonding, right-click on the server, select System Maintenance, and click Yes to confirm the NIC Port Bonding removal.

Change IP address
During the bonding process, you will have the option to select a new IP address.

Replication
Overview
Replication is the process by which a SAN resource maintains a copy of itself either locally or at a remote site. The data is copied, distributed, and then synchronized to ensure consistency between the redundant resources. The SAN resource being replicated is known as the primary disk. Changed data is transmitted from the primary to the replica disk so that they stay synchronized. Under normal operation, clients do not have access to the replica disk. If a disaster occurs and the replica is needed, the administrator can promote the replica to become a SAN resource so that clients can access it.

Replica disks can be configured for CDP or NSS storage services, including backup, mirroring, or TimeMark/CDP, which can be useful for viewing the contents of the disk or recovering files. Replication can be set to occur continuously or at set intervals (based on a schedule or watermark). For performance purposes and added protection, data can be compressed or encrypted during replication.

Remote replication
Remote replication allows fast data synchronization of storage volumes from one CDP or NSS appliance to another over the IP network. With remote replication, the replica disk is located on a separate CDP or NSS appliance, called the target server.

Local replication

Local replication allows fast data synchronization of storage volumes within one CDP or NSS appliance. It can be used within metropolitan area Fibre Channel SANs, or with IP-based Fibre Channel extenders.

With local replication, the replica disk is connected to the CDP or NSS appliance via a gateway using edge routers or protocol converters. Because there is only one CDP or NSS appliance, the primary and target servers are the same server.

How replication works


Replication works by transmitting changed data from the primary disk to the replica disk so that the disks are synchronized. How frequently replication takes place depends on several factors.

Delta replication
With standard delta replication, a snapshot is taken of the primary disk at prescribed intervals based on the criteria you set (schedule and/or watermark value).

Continuous replication
With FalconStor's Continuous Replication, data from the primary disk is continuously replicated to a secondary disk unless the system determines it is not practical or possible, such as when there is insufficient bandwidth. In these situations the system automatically switches to delta replication. After the next regularly scheduled replication takes place, the system automatically switches back to continuous replication.

For continuous replication to occur, a Continuous Replication Resource is used to stage the data being replicated from the primary disk. Similar to a cache, as soon as data comes into the Continuous Replication Resource, it is written to the replica disk. The Continuous Replication Resource is created during the replication configuration.

There are several events that will cause continuous replication to switch back to delta replication, including when:
- The Continuous Replication Resource is full due to insufficient bandwidth
- The CDP or NSS appliance is restarted
- Failover occurs
- You perform the Replication --> Scan option
- You add a resource to a group configured for continuous replication
- The Continuous Replication Resource is offline
- The target server IP address is changed
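The mode switching described above can be summarized as a small state machine: continuous mode is used whenever it is viable, and the system drops back to delta mode until the next scheduled replication completes. The Python sketch below is only a conceptual model of that behavior; the event names and functions are hypothetical and are not part of the product.

    # Conceptual model of continuous/delta mode switching (illustrative only).
    FALLBACK_EVENTS = {
        "crr_full",              # Continuous Replication Resource full (insufficient bandwidth)
        "appliance_restarted",
        "failover",
        "manual_scan",
        "resource_added_to_group",
        "crr_offline",
        "target_ip_changed",
    }

    class ReplicationMode:
        def __init__(self):
            self.mode = "continuous"

        def handle_event(self, event):
            # Any fallback event forces delta mode.
            if event in FALLBACK_EVENTS:
                self.mode = "delta"

        def scheduled_replication_completed(self):
            # After the next regularly scheduled (delta) replication succeeds,
            # continuous replication resumes automatically.
            self.mode = "continuous"

    r = ReplicationMode()
    r.handle_event("crr_full")
    print(r.mode)                        # delta
    r.scheduled_replication_completed()
    print(r.mode)                        # continuous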

Configure Replication
Requirements
The following are the requirements for setting up a replication configuration:
- (Remote replication) You must have two storage servers.
- (Remote replication) You must have write access to both servers.
- You must have enough space on the target server for the replica and for the Snapshot Resource.
- Both clocks should be synchronized so that the timestamps match.
- In order to replicate to a disk with Thin Provisioning, the size of the SAN resource must be equal to or greater than 10GB (the minimum permissible size of a thin disk).

Setup (updated February 2012)


You can enable replication for a single SAN resource or you can use the batch feature to enable replication for multiple SAN resources. You need Snapshot Resources for the primary and replica disks. If you do not have them, you can create them through the wizard. Refer to Create a Snapshot Resource (Updated April 2012) for more information.
1. For a single SAN resource, right-click on the resource and select Replication --> Enable. For multiple SAN resources, right-click on the SAN Resources object and select Replication --> Enable.
The Enable Replication for SAN resources wizard launches. Each primary disk can only have one replica disk. If you do not have a Snapshot Resource, the wizard will take you through the process of creating one.

2. Select the server that will contain the replica.

For local replication, select the Local Server. For remote replication, select any server but the Local Server. If the server you want does not appear on the list, click the Add button.
3. (Remote replication only) Confirm/enter the target server's IP address.

4. Specify if you want to use Continuous Replication mode or Delta mode.

Note: When using Continuous Data Replication, you can enable CDP on the replica to get a more granular recovery point from the replica.
Continuous Mode - Select if you want to use FalconStor's Continuous Replication. After the replication wizard completes, you will be prompted to create a Continuous Replication Resource for the primary disk.

The TimeMark options listed below for continuous mode are primarily used for devices assigned to a VSS-enabled client to maintain TimeMark synchronization on both the primary and replica disks.
- Create Primary TimeMark - By default, temporary TimeMarks created by replication are deleted once replication is complete. This option allows you to preserve the temporary TimeMark created/used by replication on the primary server, whether it is triggered by the replication schedule or a manual synchronization.
- Synchronize Replica TimeMark - By default, only TimeMarks that are created by a replication operation are synchronized to the replica server. This option allows you to synchronize all TimeMarks on the primary server to the replica server, regardless of how the TimeMark was triggered.
Both of the above options need to be selected when replicating VSS TimeMarks in CDR mode to ensure synchronous VSS TimeMarks on both the primary and replica. This is necessary because VSS TimeMarks contain additional VSS TimeView data that will not be replicated over to the replica server without these options selected.
Delta Mode - Select if you want replication to occur at set intervals (based on schedule or watermark).

The TimeMark options for delta mode are as follows:


- Use existing TimeMark - Determines whether to use the most current TimeMark on the primary server when replication begins or whether the replication process should create a TimeMark specifically for the replication. Using an existing TimeMark reduces the usage of your Snapshot Resource; however, the data being replicated may not be the most current. When configuring replication for DiskSafe devices, selecting the Use existing TimeMark option will not trigger the initial replication sync until a new TimeMark is created, even if there is an existing TimeMark.
- Preserve Replication TimeMark - If you did not select the Use Existing TimeMark option, a temporary TimeMark is created when replication begins. This TimeMark is then deleted after the replication has completed. Select Preserve Replication TimeMark to create a permanent TimeMark that will not be deleted when replication has completed (if the TimeMark option is enabled). This is a convenient way to keep all of the replication TimeMarks without setting up a separate TimeMark schedule.

Notes about using an existing TimeMark:
- While using an existing TimeMark reduces the usage of your Snapshot Resource, the data being replicated may not be the most current. For example, your replication is scheduled to start at 11:15 and your most recent TimeMark was created at 11:00. If you have selected Use Existing TimeMark, the replication will occur with the 11:00 data, even though additional changes may have occurred between 11:00 and 11:15. Therefore, if you select Use Existing TimeMark, you must coordinate your TimeMark schedule with your replication schedule.
- Even if you select Use Existing TimeMark, a new TimeMark will be created under the following conditions:
  - The first time replication occurs.
  - Each existing TimeMark will only be used once. If replication occurs multiple times between the creation of TimeMarks, the TimeMark will be used once; a new TimeMark will be created for subsequent replications until the next TimeMark is created.
  - The most recent TimeMark has been deleted, but older TimeMarks exist.
  - After a manual rescan.

5. Configure how often, and under what circumstances, replication should occur.

An initial replication for individual resources begins immediately upon setting the replication policy. Then replication occurs according to the specified policy. You must select at least one policy but you can have multiple. You must specify a policy even if you are using continuous replication so that if the system switches to delta replication, it can automatically switch back to continuous replication after the next regularly-scheduled replication takes place. Any number of continuous replication jobs can run concurrently. However, by default, 20 delta replication jobs can run, per server, at any given time. If there are more than 20, the highest priority disks begin replication first while the remaining disks wait in the queue in the order of their priority. As soon as one of the jobs finishes, the disk with the next highest priority in the queue begins.
Note: Contact Technical Support for information about changing this value, but note that additional replication jobs will increase the load and bandwidth usage of your servers and network and may be limited by individual hardware specifications.
Start replication when the amount of new data reaches - If you enter a watermark value, when the value is reached, a snapshot will be taken and replication of that data will begin. If additional data (more than the watermark value) is written to the disk after the snapshot, that data will not be replicated until the next replication. If a replication that was triggered by a watermark fails, the replication will be re-started based on the retry value you enter, assuming the system detects any write activity to the primary disk at that time. Future watermark-triggered replications will not start until after a successful replication occurs.

If you are using continuous replication and have set a watermark value, make sure that it is a value that can actually be reached; otherwise snapshots will rarely be taken. Continuous replication does not take snapshots, but you will need a recent, valid snapshot if you ever need to rollback the replica to an earlier TimeMark during promotion. If you are using SafeCache, replication is triggered when the watermark value of data is moved from the cache resource to the disk.
Start initial replication on mm/dd/yyyy at hh:mm and then every n hours/minutes thereafter - Indicate when replication should begin and how often it should be repeated.

If a replication is already occurring when the next time interval is reached, the new replication request will be ignored.
Note: If you are using the FalconStor Snapshot Agent for Microsoft Exchange 5.5, the time between each replication should be longer than the time it takes to stop and then re-start the database.
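To make the interplay of the watermark and interval policies concrete, the Python sketch below models the trigger decision: replicate when accumulated new data crosses the watermark or when the scheduled interval elapses, and skip the request if a replication is already running. It is an illustrative model only; the names and values are hypothetical, not product code.

    from datetime import datetime, timedelta

    class DeltaReplicationPolicy:
        def __init__(self, watermark_mb=None, interval=None, start=None):
            self.watermark_mb = watermark_mb      # e.g. 512 (MB of new data)
            self.interval = interval              # e.g. timedelta(hours=4)
            self.next_run = start                 # first scheduled replication
            self.in_progress = False

        def should_start(self, new_data_mb, now):
            """Return True if a delta replication should be triggered now."""
            if self.in_progress:
                return False                      # concurrent request is ignored
            if self.watermark_mb is not None and new_data_mb >= self.watermark_mb:
                return True
            if self.next_run is not None and now >= self.next_run:
                self.next_run = now + self.interval
                return True
            return False

    policy = DeltaReplicationPolicy(watermark_mb=512,
                                    interval=timedelta(hours=4),
                                    start=datetime(2012, 2, 1, 2, 0))
    print(policy.should_start(new_data_mb=600, now=datetime(2012, 2, 1, 1, 0)))  # True (watermark)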

6. Indicate which options you want to use for this device.

The Compress Data option provides enhanced throughput during replication by compressing the data stream. This leverages machines with multi-processors by using more than one thread for processing data compression/decompression during replication. By default, two (2) threads are used. The number can be increased to eight (8).

This reduces the size of the transmission, thereby maximizing network bandwidth.
Note: Compression requires 64K of contiguous memory. If the memory in the storage server is very fragmented, it will fail to allocate 64K. When this happens, replication will fail.

The Encrypt Data option provides an additional layer of security during replication by securing data transmission over the network. Initial key distribution is accomplished using the authenticated Diffie-Hellman exchange protocol. Subsequent session keys are derived from the master shared secret, making it very secure.
Enable MicroScan - MicroScan analyzes each replication block on-the-fly during replication and transmits only the changed sections of the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. If the global MicroScan option is turned on, it overrides the MicroScan setting for an individual virtual device. Also, if the virtual devices are in a group configured for replication, the group policy always overrides the individual device's policy. This option is selected by default.
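The idea behind MicroScan can be illustrated with a simple section-by-section comparison: a changed block is split into fixed-size sections, each section is compared (for example by checksum) against what the target already holds, and only the sections that differ are queued for transmission. The Python sketch below is a conceptual illustration under those assumptions, not FalconStor's actual on-wire algorithm; the section size and names are hypothetical.

    import hashlib

    SECTION_SIZE = 512  # hypothetical section size within a replication block

    def changed_sections(primary_block: bytes, replica_block: bytes):
        """Return (offset, data) pairs for the sections that differ between the
        primary and replica copies of a block; only these would be transmitted."""
        to_send = []
        for offset in range(0, len(primary_block), SECTION_SIZE):
            src = primary_block[offset:offset + SECTION_SIZE]
            dst = replica_block[offset:offset + SECTION_SIZE]
            if hashlib.sha1(src).digest() != hashlib.sha1(dst).digest():
                to_send.append((offset, src))
        return to_send

    # A 4 KB block where only one 512-byte section was updated: a MicroScan-style
    # comparison sends 512 bytes instead of the whole 4 KB block.
    old = bytearray(4096)
    new = bytearray(old)
    new[1024:1536] = b"x" * 512
    print([off for off, _ in changed_sections(bytes(new), bytes(old))])  # [1024]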

7. Select how you want to create the replica disk on the target server.

Custom - lets you select which physical device(s) to use and lets you designate how much space to allocate from each. You can select a larger SED disk as a replica; only in this case can a larger physical replica device be selected. However, the logical size will match the primary size and the extra space on the SED disk will not be available for any other purpose.
Express - automatically creates the replica for you from available hard disk segments. You will only have to select the storage pool or physical device that should be used to create the replica resource. This is the default setting.
Select Existing - lets you select an existing resource. There are several restrictions as to what you can select:
- The target must be the same size as the primary.
- The target can have Clients assigned to it but they cannot be connected during the replication configuration.
Note: All data on the target will be overwritten.

If you select Custom, you will see the following windows:


Indicate the type of replica disk you are creating.

Select the storage pool or device to use to create the replica resource.

Only one disk can be selected at a time from this dialog. To create a replica disk from multiple physical disks, you will need to add the disks one at a time. After selecting the first disk, you will have the option to add more disks. You will need to do this if the first disk does not have enough space.

Indicate how much space to allocate from this disk.

Click Add More if you need to add another physical disk to this replica disk. You will go back to the physical device selection screen where you can select another disk.

8. Enter a name for the replica disk.

The name is not case sensitive.
9. Confirm that all information is correct and then click Finish to create the replication configuration.
Note: Once you create your replication configuration, you should not change the hostname of the source (primary) server. If you do, you will need to recreate your replication configuration.

When will replication begin?
If you have configured replication for an individual resource, the system will begin synchronizing the disks immediately after the configuration is complete, provided the disk is attached to a client and is receiving I/O activity.

Replication for a group
If you have configured replication for a group, synchronization will not start until one of the replication policies (time or watermark) is triggered. If replication fails for one group member, it is skipped and replication continues for the rest of the group. After successful replication, group members will have a TimeMark created on their replica. In order for the group members that were skipped to have the same TimeMark on their replicas, you will need to remove them from the group, use the same TimeMark to replicate again, and then re-join the group.

If you configured continuous replication
If you are using continuous replication, you will be prompted to create a Continuous Replication Resource for the primary disk and a Snapshot Resource for the replica disk. If you are not using continuous replication, the wizard will only ask you to create a Snapshot Resource on the replica. Because old data blocks are moved to the Snapshot Resource as new data is written to the replica, the Snapshot Resource should be large enough to handle the amount of changed data that will be replicated. Since it is not always possible to know how much changed data will be replicated, it is a good idea to enable expansion on the target server's Snapshot Resource. You then need to decide what to do if your Snapshot Resource runs out of space (reaches the maximum allowable size or does not have expansion enabled). The default is to preserve all TimeMarks. This option stops writing data to the source SAN resource if there is no more space available or there is a disk failure, in order to preserve all TimeMarks.

Protect your replica resource
For added protection, you can mirror or TimeMark an incoming replica resource by highlighting the replica resource and right-clicking on it.

Create a Continuous Replication Resource


This is needed only if you are using continuous replication.
1. Select the storage pool or physical device that should be used to create this Continuous Replication Resource.

2. Select how you want to create this Continuous Replication Resource.

Custom - lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express - lets you designate how much space to allocate and then automatically creates the resource using an available device.
Note: The Continuous Replication Resource maximum size is 1 TB and cannot be expanded. Therefore, you should allocate enough space for the resource. By default, the size will be 256 MB or 5% of the size of your primary disk (or 5% of the total size of all members of this group), whichever is larger. If the primary disk regularly experiences a large number of writes, or if the connection to the target server is slow, you may want to increase the size, because if the Continuous Replication Resource becomes full, the system switches to delta replication mode until the next regularly-scheduled replication takes place. If you outgrow your resource, you will need to disable continuous replication and then re-enable it.
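Based on the default described above (256 MB or 5% of the primary disk or group, whichever is larger, capped at the 1 TB maximum), a quick sizing calculation might look like the Python sketch below. It is only a planning aid reflecting the defaults stated in this guide, not a FalconStor tool.

    MIN_SIZE_MB = 256            # default floor stated in this guide
    MAX_SIZE_MB = 1024 * 1024    # 1 TB cap; the resource cannot be expanded

    def default_crr_size_mb(primary_sizes_mb):
        """Default Continuous Replication Resource size for a disk or a group:
        5% of the total primary size, but at least 256 MB and at most 1 TB."""
        total = sum(primary_sizes_mb)
        size = max(MIN_SIZE_MB, int(total * 0.05))
        return min(size, MAX_SIZE_MB)

    # A 100 GB primary disk -> 5 GB resource; a 2 GB disk falls back to 256 MB.
    print(default_crr_size_mb([100 * 1024]))   # 5120 MB
    print(default_crr_size_mb([2 * 1024]))     # 256 MB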

3. Verify the physical devices you have selected, confirm that all information is correct, and then click Finish. On the Replication tab, you will notice that the Replication Mode is set to Delta. Replication must be initiated once before it switches to continuous mode. You can either wait for the first scheduled replication to occur or you can right-click on your SAN resource and select Replication --> Synchronize to force replication to occur.

Check replication status


There are several ways to check replication status:
- The Replication tab on the primary disk displays information about a specific resource.
- The Incoming and Outgoing objects under the Replication object display information about all replications to or from a specific server.
- The Event Log displays a list of replication information and errors.
- The Delta Replication Status Report provides a centralized view for displaying real-time replication status for all drives enabled for replication.

Replication tab
The following are examples of what you will see by checking the Replication tab for a primary disk:
With Continuous Replication enabled

With Delta Replication

All times shown on the Replication tab are based on the primary server's clock.
- Accumulated Delta Data is the amount of changed data. Note that this value will not display accurate results after a replication has failed. The information will only be accurate after a successful replication.
- Replication Status / Last Successful Sync / Average Throughput - You will only see these fields if you are connected to the target server.
- Transmitted Data Size is based on the actual size transmitted after compression or with MicroScan performed.
- Delta Sent represents the amount of data sent (or processed) based on the uncompressed size.

If compression and MicroScan are not enabled, the Transmitted Data Size will be the same as Delta Sent and the Current/Average Transmitted Data Throughput will be the same as Instantaneous/Average Throughput. If compression or MicroScan is enabled and the data can be compressed or blocks of data have not changed and will not be sent, the Transmitted Data Size is going to be different from Delta Sent and both Current/Average Transmitted Data Throughput will be based on the actual size of data (compressed or Micro-scanned) sent over the network.
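As a small illustration of how these counters relate, the Python sketch below derives a data-reduction ratio and the two throughput figures from Delta Sent and Transmitted Data Size. The field names are taken from the Replication tab, but the calculation itself is illustrative arithmetic, not a documented formula.

    def replication_stats(delta_sent_mb, transmitted_mb, elapsed_sec):
        """Derive a reduction ratio and throughput figures from the two counters
        shown on the Replication tab (illustrative arithmetic only)."""
        reduction = delta_sent_mb / transmitted_mb if transmitted_mb else 1.0
        return {
            "reduction_ratio": round(reduction, 2),             # e.g. 2.5 means 2.5:1
            "average_throughput_mb_s": round(delta_sent_mb / elapsed_sec, 2),
            "avg_transmitted_throughput_mb_s": round(transmitted_mb / elapsed_sec, 2),
        }

    # 1000 MB of changed data compressed/MicroScanned down to 400 MB in 100 seconds.
    print(replication_stats(1000, 400, 100))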

Event Log
Replication events are also written to the primary server's Event Log, so you can check there for status and operational information, as well as any errors.

Replication object
The Incoming and Outgoing objects under the Replication object display information about each server that replicates to this server or receives replicated data from this server. If the server's icon is white, the partner server is "connected" or "logged in". If the icon is yellow, the partner server is "not connected" or "not logged in".

Delta Replication Status Report


The Delta Replication Status Report can be run from the Reports object. It provides a centralized view for displaying real-time replication status for all drives enabled for replication. It can be generated for an individual drive, multiple drives, a source server or a target server, for any range of dates. This report is useful for administrators managing multiple servers that either replicate data or are the recipients of replicated data. This report only provides statistics for delta replication activity. Continuous Replication statistics are not available from the report but can be monitored in real time within the FalconStor Management Console. The report can display information about existing replication configurations only, or it can include information about replication configurations that have been deleted or promoted (you must select to view all replication activities in the database).

Configure Replication performance


Set global replication options
You can set global replication options that affect system performance during replication. While the default settings should be optimal for most configurations, you can adjust the settings for special situations. To set global replication properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Click the Configure Throttle button to configure target site(s)/server(s) to limit the maximum replication speed, thus minimizing potential impact to network traffic.
Enable MicroScan - MicroScan analyzes each replication block on-the-fly during replication and transmits only the changed sections on the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. This global MicroScan option overrides the MicroScan setting for each individual virtual device.

Tune replication parameters

You can run a test to discover maximum bandwidth and latency for remote replication within your network.
1. Right-click on a server under Replication --> Outgoing and select Replication Parameters.
2. Click the Test button to see information regarding the bandwidth and latency of your network.
While this option allows you to measure the bandwidth and latency of the network between the two servers (replication source and target), it is not a tool to test the connectivity of the network. Therefore, if there is a network connection issue or connection failure, the Test button will not work (and should not be used for testing the network connection between the servers).

Assign clients to the replica disk


You can assign Clients to the replica disk in preparation for promotion or reversal. Clients will not be able to connect to the replica disk and the Client's operating system will not see the replica disk until after the promotion or reversal. After the replica disk is promoted or a reversal is performed, you can restart the SAN Client to see the new information and connect to the promoted disk.
To assign Clients:
1. Right-click on an incoming replica resource under the Replication object and select Assign.
2. Select the Client to be assigned. If the Client you want to assign does not appear in the list, you will need to exit the wizard and add the client by right-clicking on SAN Client and selecting Add.
3. Confirm all of the information and then click Finish to assign the Client.

Switch clients to the replica disk when the primary disk fails
Because the replica disk is used for disaster recovery purposes, clients do not have access to the replica. If a disaster occurs and the replica is needed, the administrator can promote the replica to become the primary disk so that clients can access it. The Promote option promotes the replica disk to a usable resource. Doing so breaks the replication configuration. Once a replica disk is promoted, it cannot revert back to a replica disk. You must have a valid replica disk in order to promote it. For example, if a problem occurred (such as a transmission problem or the replica disk failing) during the first and only replication, the replicated data would be compromised and therefore could not be promoted to a primary disk. If a problem occurred during a subsequent replication, the data from the Snapshot resource will be used to recreate the replica from its last good state.
Notes:

- You cannot promote a replica disk while a replication is in progress.
- If you are using continuous replication, you should not promote a replica disk while write activity is occurring on the replica.
- If you just need to recover a few files from the replica, you can use the TimeMark/TimeView option instead of promoting the replica. Refer to Use TimeMark/TimeView to recover files from your replica for more information.

To promote a replica:
1. In the Console, right-click on an incoming replica resource under the Replication object and select Replication --> Promote.
If replication is not in a normal status, you will be prompted to roll back the replica to the last TimeMark. When this occurs, the wizard will not continue with the promotion and you will have to check the Event Log to make sure the rollback completes successfully. Once you have confirmed that it has completed successfully, you need to re-select Replication --> Promote to continue.
2. Confirm the promotion and click OK.
3. Assign the appropriate clients to this resource.
4. Rescan devices or restart the client to see the promoted resource.
Note: Once the rollback process is triggered, it cannot be cancelled. If the process is interrupted, the promote replica process must be restarted from the beginning.

Recreate your original replication configuration


Your original primary disk became unusable due to a disaster and you have promoted the replica disk to a primary disk so that it can service your clients. You have now fixed, rebuilt, or replaced your original primary disk. Do the following to recreate your original replication configuration:
1. From the current primary disk, run the Replication Setup wizard and create a configuration that replicates from the current resource to the original primary server. Make sure a successful replication has been performed to synchronize the data after the configuration is completed. If you select the Scan option, you must wait for it to complete before running another scan or replication.
2. Assign the appropriate clients to the new replica resource.
3. Detach all clients from the current primary disk. For Unix clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin.
4. Right-click on the appropriate primary resource or replica resource and select Replication --> Reversal to switch the roles of the disks. Afterwards, the replica disk becomes the new primary disk while the original primary disk becomes the new replica disk. The existing replication configuration is maintained but clients will be disconnected from the former primary disk. For more information, refer to Reverse a replication configuration.

Use TimeMark/TimeView to recover files from your replica


While the main purpose of replication is for disaster recovery purposes, the TimeMark feature allows you to access individual files on your replica without needing to promote the replica. This can be useful when you need to recover a file that was deleted from the primary disk. You can simply create a TimeView of the replica, assign it to a client, and copy back the needed file. Using TimeMark with a replica is also useful for what if scenarios, such as testing a new application on your actual, but not live, data. In addition, using HyperTrac Backup with Replication and TimeMark allows you to back up your replica at your disaster recovery site without impacting any application servers. For more information about using TimeMark and HyperTrac, refer to your HyperTrac Backup Accelerator User Guide.

Change your replication configuration options


You can change the following for your replication configuration:
- Static IP address of a remote target server
- Policies that trigger replication (watermark or schedule)
- Replication protocol
- Use of compression, encryption, or MicroScan
- Replication mode

To change the configuration: 1. Right-click on the primary disk and select Replication --> Properties.

The Replication Setup Options screen displays.

2. Select the appropriate tab to make the desired changes:
- The Target Server Parameters tab allows you to modify the host name or IP address of the target server.
- The Replication Policy tab allows you to modify the policies that trigger replication.
- The Replication Protocol tab allows you to modify the replication protocol.
- The Throughput Control tab allows you to enable throughput control.
- The Data Transmission Options tab allows you to select the following options: Compress Data, Encrypt Data, Enable MicroScan.
- The Replication Transfer Mode and TimeMark tab allows you to modify the Continuous mode and TimeMark options for the replication.
3. Make the appropriate changes and click OK.
Notes:

- If you are using continuous replication and you enable or disable encryption, the change will take effect after the next delta replication.
- If you are using continuous replication and you change the IP address of your target server, replication will switch to delta replication mode until the next regularly-scheduled replication takes place.

Suspend/resume replication schedule


You can suspend future replications from automatically being triggered by your replication policies (watermark, interval, time) for an individual virtual device. Once suspended, all of the device's replication policies will be put on hold, preventing any future policy-triggered replication from starting. This will not stop a replication that is currently in progress, and you can still manually start the replication process while the schedule is suspended. When replication is resumed, replication will start at the normally scheduled interval based on the device's replication policies. To suspend/resume replication, right-click on the primary disk and select Replication --> Suspend (or Resume). You can see the current settings by checking the Replication Schedule field on the Replication tab of the primary disk.

Stop a replication in progress


You can stop a replication that is currently in progress. To stop a replication, right-click on the primary disk and select Replication --> Stop.

Manually start the replication process


To force a replication that is not scheduled, select Replication --> Synchronize.
Note: If replication is already occurring, this request will fail.

Set the replication throttle


Configuring the throttle allows you to limit the amount of bandwidth replication will use. This is useful when the WAN is shared among many applications and you do not want replication traffic to dominate the link. Setting this parameter affects the server-to-server relationship, which includes remote delta and remote continuous replication. Throttle does not affect local replication. Throttle configuration involves three factors:
- Percentage - the amount of throttle relative to the selected Link-Type. Leaving the Throttle field set to 0 (zero) means the throttle is disabled. If you change the field to 100 percent, the maximum bandwidth available with the selected link type will be used. Besides 0, valid input is 1-100%.
- Link-Type - the link type to be used.
- Window - the window can be scheduled by hours in the day.

Setting the throttle instructs the application to keep the network speed constant. Although network traffic bursts may still occur, depending on the environment, the throttle tries to remain at the set speed. Throttle configuration settings are retained for each server even after replication has been disabled. When replication is enabled again, the previous throttle settings will be present. Once you have set up replication and/or a target site, you can configure your throttle settings. The throttle can be set and edited from various locations in the console as well as from the command line interface. To set the throttle, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Configure.

Set the throttle via Server Properties --> Performance tab --> click the Configure Throttle button.

Highlight the server or target site that you want to edit and click the Edit button. (Target sites are indicated by a T icon.)

Add a Target Site


Another way to set the throttle is by adding a target site. A target site is a group of sites that share the same throttle configuration. Target sites can contain existing target servers or can be empty.

Navigate to the Replication node in the console, right-click on Outgoing and select Target Site --> Add.

Enter a name for the target site. Select the target servers by checking the boxes next to their host names.
Optional: You can also add a target server for future use by clicking the Add button and entering the new target server name. Any throttle configuration existing on the new target server will be replaced with the Target Site throttle configuration settings.
Link Types: Select the link type for this target site. To add a custom link type, refer to Manage Link Types.
The default throttle is zero (disabled). You may change the default throttle to a percentage (1-100) of the link type. This setting takes effect immediately when the default throttle is in use. If the window throttle is in use, the new default setting takes effect the next time the throttle is triggered outside of the window.
The Throttle Window contains the throttle schedule for business hours and the backup window. You can select one of these built-in schedules or add a custom window via Throttle --> Manage Throttle Window.

Once a target site has been added, it displays, along with the individual servers, in the FalconStor Management Console under the Replication --> Outgoing node. You can right-click on the target site in the console to delete, edit or export it.

Manage Throttle windows


Throttle windows allow you to limit read activity to the primary disk during peak processing times to avoid significant performance impact. Two throttle windows have been pre-populated for you: Business Hours and Backup Window. You can edit the pre-defined times to fit your business needs. You can also add custom throttle windows as needed. Throttle configuration settings persist when replication is disabled and re-enabled on the same server-to-server relationship.

For example, if you have a production server disk with replication enabled that experiences heavy I/O between 9:00AM and 5:00PM, replication adds to the read/write load, since replication requires reading from the primary disk. Since this may impact disk performance when replication is accessing the disk, you can resolve this issue with a throttle window between 9:00AM and 5:00PM that throttles the replication speed down. With a lower replication speed, the need to access the disk for replication is lessened, resulting in a reduced read load on the disk.

To manage throttle windows, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Throttle Windows.

Edit a Throttle window

To edit throttle window times, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Throttle Windows. Then click the Edit button.
Add a throttle window

To add a new throttle window, click the Add button. The Add Throttle Window screen displays, allowing you to add a unique name, start time and end time.

Time is entered in 24-hour format. For example, 5:00 p.m. would be entered as 17:00. Make sure the times do not overlap with an existing window. For example, if one window has an end time of 12:00, the next window must start at 12:01.
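A simple way to check the no-overlap rule before saving custom windows is sketched below in Python. The window definitions are hypothetical examples, and the sketch is only an illustration of the rule described above, not a FalconStor utility.

    def to_minutes(hhmm):
        """Convert '17:00' (24-hour format) to minutes since midnight."""
        h, m = hhmm.split(":")
        return int(h) * 60 + int(m)

    def windows_overlap(windows):
        """Return True if any two throttle windows overlap.
        Each window is (name, start, end) with times in 24-hour 'HH:MM' format."""
        spans = sorted((to_minutes(s), to_minutes(e)) for _, s, e in windows)
        return any(prev_end >= next_start
                   for (_, prev_end), (next_start, _) in zip(spans, spans[1:]))

    # 09:00-17:00 and 17:01-23:59 do not overlap; 09:00-17:00 and 16:00-18:00 do.
    print(windows_overlap([("Business Hours", "09:00", "17:00"),
                           ("Backup Window", "17:01", "23:59")]))   # False
    print(windows_overlap([("Business Hours", "09:00", "17:00"),
                           ("Custom", "16:00", "18:00")]))          # True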
Delete a throttle window

You can also delete any custom (user-created) throttle window to cancel the schedule. Built-in throttle windows cannot be deleted. To delete a custom throttle window, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Throttle Windows. Then click the Delete button.
Throttle tab

Right-click on the target server or target site and click the Throttle tab for information on Link Type, default throttle, and any selected throttle windows.

Throttle and failover

Setting up throttle on a failover pair requires some additional considerations. Refer to the Throttle and Failover section for details.

Manage Link Types


To manage link types and speed, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Link Types. The Manage Link Types screen displays all link types, along with the description and speed.

Throttle speed displays the maximum speed, not necessarily the actual speed. For example, a throttle speed of 30 Mbps indicates a speed of 30 Mbps or less. The speed is determined by multiplying the throttle percentage by the link type speed. For example, a default throttle of 30% of a 100Mbps link type would be (30%) x (100Mbps) = 30Mbps. Actual speed may or may not be evenly distributed across all Target Sites and servers. Actual speed depends on many factors, such as disk performance, network traffic, functions enabled (encryption, compression, MicroScan), and other processes in progress (TimeMark, Mirror, etc.).
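The calculation above is simple enough to express directly; the Python sketch below reproduces the percentage-times-link-speed arithmetic and the zero-means-disabled convention described in this section. It is only a convenience illustration, not part of the product.

    def throttle_ceiling_mbps(link_speed_mbps, throttle_percent):
        """Maximum replication speed for a link type and throttle percentage.
        A throttle of 0 means the throttle is disabled (no replication cap)."""
        if throttle_percent == 0:
            return None                      # disabled: no ceiling imposed
        if not 1 <= throttle_percent <= 100:
            raise ValueError("valid throttle input is 0 or 1-100")
        return link_speed_mbps * throttle_percent / 100.0

    print(throttle_ceiling_mbps(100, 30))   # 30.0 Mbps ceiling on a 100 Mbps link
    print(throttle_ceiling_mbps(100, 0))    # None (throttle disabled)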

Add link types

If your link type is not listed in the pre-populated/built-in list, you can add a custom link type by navigating to the Replication node in the console, right-clicking on Outgoing and selecting Throttle --> Manage Link Types. Then click the Add button.

Then enter the Link Type, a brief description, and the speed in megabits per second (Mbps).

Edit link types
Custom link types can be modified by clicking the Edit button. However, built-in link types cannot be edited. To edit a custom link type, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Link Types. Then click the Edit button.

Delete link types
Link Types can be deleted as long as they are not currently in use by any target site or server. Custom link types can be deleted when no longer needed. Built-in link types cannot be deleted. To delete a custom link type, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Link Types. Then click the Delete button.

Set replication synchronization priority


To set the synchronization priority for pending replications, select Replication --> Priority.

This allows you to prioritize the order in which devices or groups begin replicating if they are scheduled to start at the same time. This option can be set for a single resource or a single group via the Replication submenu, or for multiple resources or groups from the context menu of the Replication Outgoing node.

Reverse a replication configuration


Reversal switches the roles of the replica disk and the primary disk. The replica disk becomes the new primary disk while the original primary disk becomes the new replica disk. The existing replication configuration is reset to the default. After the reversal, clients will be disconnected from the former primary disk.

To perform a role reversal:
1. Right-click on the appropriate primary resource or replica resource and select Replication --> Reversal.
Notes:

- The primary and replica must be synchronized in order to reverse a replica. If needed, you can manually start the replication from the Console and re-attempt the reversal after the replication is completed.
- If you are using continuous replication, you have to disable it before you can perform the reversal.
- If you are performing a role reversal on a group, we recommend that the group have 40 or fewer resources. If there are more than 40 resources in a group, we recommend that multiple groups be configured to accomplish this task.

2. Enter the New Target Server host name or IP address to be used by the new primary server to connect to the new target server for replication.

Reverse a replica when the primary is not available


Replication can be reversed from the replica server side even if the primary server is offline or is not accessible. When you reverse this type of replica, the replica disk will be promoted to become the primary disk and the replication configuration will be removed. Afterwards, when the original primary server becomes available, you must repair the replica in order to re-establish a replication configuration. The original replication policy will be used/maintained after the repair.
Notes:

- If a primary disk is in a group but the group does not have replication enabled, the primary resource must be removed from the group before the replica repair can be performed.
- If you have CDP enabled on the replica and you want to perform a rollback, you can roll back before or after reversing the replica.

Forceful role reversal


When the primary server is down and the replica server is up, or when the primary server is up but its data is corrupted and the replica is not synchronized, you can force a role reversal as long as there are no replication processes running.
Notes:

- The forceful role reversal operation can be performed even if the CDP journal has unflushed data.
- The forceful role reversal operation can be performed even if data is not synchronized between the primary and replica server.
- The snapshot policy, TimeMark/CDP, and throttle control policy settings are not swapped after the repair operation for replication role reversal.

To perform a forceful role reversal:
1. Suspend the replication schedule. If you are using Continuous Mode, disable it by right-clicking on the disk, selecting Replication --> Properties, and unchecking Continuous Mode in the Replication Transfer Mode and TimeMark tab under the Replication Setup Options.
2. Right-click on the primary or replica server and select Replication --> Forceful Reversal.
3. Type YES to confirm the operation and then click OK.
4. Once the forceful role reversal is done, repair the promoted replica to establish the new connection between the new primary and replica server.
The replication repair operation must be performed from the NEW primary server.
Note: If the SAN resource is assigned to a client in the original primary server, it must be unassigned in order to perform the repair on the new primary.

5. Confirm the IP address and click OK. The current primary disk remains the primary disk and begins replicating to the recovered server.
After the repair operation is complete, replication will synchronize again, either by schedule or by manual trigger. A full synchronization is performed if the replication was not synchronized prior to the forceful role reversal, and the replication policy from the original primary server will be used/updated on the new primary server. If you want to recreate your original replication configuration, you will need to perform another reversal so that your original primary becomes the primary disk again.

Repair a replica
When performing a repair, the following status conditions may display:

Repair status - after forceful role reversal


Valid - The server performing the repair has verified that the server is OK for repair. If there is a problem with the replica server, the respective errors will show after the repair is initiated.
Invalid - The server performing the repair has reported that the repair cannot be processed. Make sure all devices involved are online and have no missing segments.
TimeMark Rollback in Progress - The repair cannot be processed because one of the devices involved in the repair is currently performing a rollback.
Not Configured for Replication - The repair cannot be processed because there is a problem with a device which is a member of a group. Make sure there are no extra members or missing members of the group.
Relocate a replica
The Relocate feature allows replica storage to be moved from the original replica server to another server while preserving the replication relationship with the primary server. Relocating reassigns ownership to the new server and continues replication according to the set policy. Once the replica storage is relocated to the new server, the replication schedule can be immediately resumed without the need to rescan the disks.
Before you can relocate the replica, you must import the disk to the new CDP or NSS appliance. Refer to Import a disk if you need more information. Once the disk has been imported, open the source server, highlight the virtual resource that is being replicated, right-click and select Relocate.
Notes:

- You cannot relocate a replica that is part of a group.
- If you are using continuous replication, you must disable it before relocating a replica. Failure to do so will keep replication in delta mode, even after the next manual or scheduled replication occurs. You can re-enable continuous replication after relocating the replica.

Remove a replication configuration


Right-click on the primary disk and select Replication --> Disable. This allows you to remove the replication configuration on the primary and either delete or promote the replica disk on the target server at the same time. When promoting a replica, the replication status must be normal. If the replication is not in a normal status, you will be prompted to roll back the replica to the last TimeMark. When this occurs, the wizard will not continue with the promotion and you will have to check the Event Log to make sure the rollback completes successfully. Once you have confirmed that it has completed successfully, you need to re-select Replication --> Disable --> Promote to continue. If you choose to delete the replica, you will not be prompted to rollback.
Note: Once the rollback process is triggered, it cannot be cancelled. If the process is interrupted, the promote replica process must be restarted from the beginning.

Expand the size of the primary disk (updated February 2012)


Devices with replication configured can be expanded, with some limitations and restrictions. Expansion can only be initiated from the primary disk, and both disks will always have the same logical size. See the following examples of behaviors to expect when expanding different types of SAN Resources. (A thin disk will have the same behavior as a virtual disk):

Disk expansion behavior


Virtual replicated to Virtual - You can select any expansion size up to the amount of storage space available for both the primary and replica disks.
Virtual replicated to SED - You must expand the physical size of the replica disk from the storage prior to expanding the primary disk. You can only expand to the full physical size of the replica.
SED replicated to Virtual - You must first expand the physical size of the primary disk from the storage. You can only expand to the full physical size of the primary.
SED replicated to SED - You must first expand the physical size of both the primary and replica disks from the storage. You can only expand the primary disk to its full physical size when the physical disk size is equal to or smaller than the replica disk. If the replica disk is smaller than the primary disk, you can only expand the primary disk to the full physical disk size of the replica disk.

Refer to the Expand a Service-Enabled Device section for more information regarding SED expansion.
Note: Do not attempt to expand the primary disk during replication. Otherwise, the disk will expand but the replication will fail.

Replication with other CDP or NSS features


Replication and TimeMark
While enabling TimeMarks, you can set the Trigger Replication after TimeMark is taken option. This option is applicable if TimeMark and Replication are both enabled for that device/Group. If TimeMark is enabled for a Group, replication must also be enabled at the group level. When this option is set, replication synchronization triggers automatically for that device or group when the TimeMark is created. If SafeCache or CDP is enabled, replication synchronization is triggered when the cache marker is flushed. Since you cannot create TimeMarks on a replica device, if you enable this option for replica devices, it will only take effect after a role reversal.
Note: The timestamp of a TimeMark on a replica is the timestamp of the source.

Replication and Failover


If replication is in progress and a failover occurs at the same time, the replication will fail. After failover, replication will start at the next normally scheduled interval. This is also true in reverse, if replication is in progress and a recovery occurs at the same time.

Replication and Mirroring


When you promote the mirror of a replica resource, the replication configuration is maintained. Depending upon the replication schedule, when you promote the mirror of a replica resource, the mirrored copy may not be an identical image of the replication source. In addition, the mirrored copy may contain corrupt data or an incomplete image if the last replication was not successful or if replication is currently occurring. Therefore, it is best to make sure that the last replication was successful and that replication is not occurring when you promote the mirrored copy.

Replication and Thin Provisioning


A disk with Thin Provisioning enabled can be configured to replicate to a normal SAN resource or another disk with Thin Provisioning enabled. The normal SAN resource can replicate to a disk with Thin Provisioning as long as the size of the SAN resource is equal to or greater than 10GB (the minimum permissible size of the thin disk).


Near-line Mirroring
Near-line mirroring allows production data to be synchronously mirrored to a protected disk that resides on a second storage server. You can enable near-line mirroring for a single SAN resource or multiple resources. With near-line mirroring, the primary disk is the disk that is used to read/write data for a SAN Client and the mirrored copy is a copy of the primary. Each time data is written to the primary disk, the same data is simultaneously written to the mirror disk. TimeMark or CDP can be configured on the near-line server to create recovery points. The near-line mirror can also be replicated for disaster recovery protection. If the primary disk fails, you can initiate recovery from the near-line server and roll back to a valid point-in-time.

[Diagram: Application servers access the production IPStor server through a service-enabled disk; a synchronous mirror links the production server to the near-line IPStor server.]


Near-line mirroring requirements


The following are the requirements for setting up a near-line mirroring configuration:
- The primary server cannot be configured to replicate to the near-line server.
- At least one protocol (FC or iSCSI) must be enabled on the near-line server.
- If you are using the FC protocol for your near-line mirror, zone the appropriate initiators on your primary server with the targets on your near-line server. For recovery purposes, zone the appropriate initiators on your near-line server with the targets on your primary server.

Setup Near-line mirroring


You can enable near-line mirroring for a single SAN resource or multiple resources. To enable and set up near-line mirroring on one resource, follow the steps described below. To enable near-line mirroring for multiple resources, refer to Enable Near-line Mirroring on multiple resources.
1. Right-click on the resource and select Near-line Mirror --> Add. The Welcome screen displays.
2. If you are enabling one disk, specify if you want to enable near-line mirroring for the primary disk or just prepare the near-line disk.

When you create a near-line disk, the primary server performs a rescan to discover new devices. If you are configuring multiple near-line mirrors, the scans can become time consuming. Instead, you can select to prepare the near-line disk now and then manually rescan physical resources and discover new resources on the primary server. Afterwards, you will have to re-run the wizard and select the existing, prepared disk.
If you are enabling near-line mirroring for multiple disks, the above screen will not display.
3. Select the storage pool or physical device(s) for the near-line mirror's virtual header information.

4. Select the server that will contain the near-line mirror.


5. Add the primary server as a client of the near-line server. You will go through several screens to add the client:
- Confirm or specify the IP address the primary server will use to connect to the near-line server as a client. This IP address is used for iSCSI; it is not used for Fibre Channel.
- Determine if you want to enable persistent reservation for the client (primary server). This allows clustered clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.
- Select the client's protocol(s). If you select iSCSI, you must indicate if this is a mobile client.
- (FC protocol) Select or add WWPN initiators for the client.
- (FC protocol) Specify if you want to use Volume Set Addressing (VSA). VSA is used primarily for addressing virtual buses, targets, and LUNs. If your client requires VSA to access a broader range of LUNs, you must enable it for the client.
- (iSCSI protocol) Select the initiator that this client uses. If the initiator does not appear, you may need to rescan. You can also manually add it, if necessary.
- (iSCSI protocol) Add/select users who can authenticate for this client.
6. Confirm the IP address of the primary server.

Confirm or specify the IP address the near-line server will use to connect to the primary server when a TimeMark is created, if snapshot notification is used. If needed, you can specify a different IP address from what you used when you added the primary server as a client of the near-line server.


7. Determine if you want to monitor the mirroring process.

If you select to monitor the mirroring process, the I/O performance will be checked to decide if I/O to the mirror disk is lagging beyond an acceptable limit. If it is, mirroring will be suspended so it does not impact the primary storage.
Monitor mirroring process every n seconds - Specify how frequently the system should check the lag time (delay between I/O to the primary disk and the mirror). Checking more or less frequently will not impact system performance. On systems with very low I/O, a higher number may help get a more accurate representation.
Maximum lag time for mirror I/O - Specify an acceptable lag time (1 - 1000 milliseconds) between I/Os to the primary disk and the mirror.
Suspend mirroring - If the I/O to the mirror disk is lagging beyond the specified level of acceptance, mirroring will be suspended when the following conditions are met:
- When the failure threshold reaches n% - Specify what percentage of I/O must pass the lag time test. For example, you set the percentage to 10% and the maximum lag time to 15 milliseconds. During the test period, 100 I/Os occurred and 20 of them took longer than 15 milliseconds to update the mirror disk. With a 20% failure rate, mirroring would be suspended.


- When the outstanding I/Os reach n - Specify the minimum number of I/Os that can be outstanding. When the number of outstanding I/Os is above the specified number, mirroring is suspended.
Note: If a mirror becomes out of sync because of a disk failure or an I/O error (rather than having too much lag time), the mirror will not be suspended. Because the mirror is still active, re-synchronization will be attempted based on the global mirroring properties that are set for the server. Refer to Set global mirroring options for more information.

8. If mirroring is suspended, specify when re-synchronization should be attempted.

Re-synchronization can be started based on time (every n minutes/hours) and/or I/O activity (when I/O is less than n KB/MB). If you select both, the time will be applied first before the I/O activity level. If you do not select either, the mirror will stay suspended until you manually synchronize it. If you select one or both re-synchronization methods, you must also specify how often the system should retry the re-synchronization if it fails to complete. If you only select the second resync option, the default will be 10 minutes. When the system initiates re-synchronization, it does not check lag time and mirroring will not be suspended if there is too much lag time. If you manually resume mirroring, the system will monitor the process during synchronization and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.
Note: If CDP/NSS is restarted or the server experiences a failover while attempting to resynchronize, the mirror will remain suspended.


9. Select how you want to create this near-line mirror resource.

Custom lets you select which physical device(s) and which segments to use and lets you designate how much space to allocate from each.
Express lets you select which physical device(s) to use and automatically creates the near-line resource from the available hard disk segments.
Select existing lets you select an existing virtual device that is the same size as the primary or a previously prepared (but not yet created) near-line mirror resource. (The option to only prepare a near-line disk appeared on the first Near-line Mirror wizard dialog.)


10. Enter a name for the near-line resource.

Note: Do not change the name of the near-line resource if the server is a near-line mirror or configured with near-line mirroring.

11. (iSCSI protocol) Select the iSCSI targets to assign.


12. Confirm that all information is correct and then click Finish to create the near-line mirroring configuration.

To set the near-line mirror throughput speed/throttle for near-line mirror synchronization, refer to Set mirror throttle.


Enable Near-line Mirroring on multiple resources


You can enable near-line mirroring on multiple SAN resources.
1. Right-click on SAN Resources and select Near-line Mirror --> Add. The Enable Near-line Mirroring wizard launches.
2. Click Next at the Welcome screen. The list of available resources displays.
3. Select the resources to be Near-line Mirror resources or click the Select All button.
4. Select the storage pool or physical device(s) for the near-line mirror's virtual header information.
5. Select the server that will contain the near-line mirrors.
6. Continue to set up near-line mirroring as described in Setup Near-line mirroring.

What's next?
Near-line disks are prepared but not created - If you prepared one or more near-line disks and are ready to create near-line mirrors, you must manually rescan physical resources and discover new devices on the primary server. Afterwards, you must re-run the Near-line Mirror wizard for each primary disk and select the existing, prepared disk. This will create a near-line mirror without re-scanning the primary server.
Near-line mirror is created - After creating your near-line mirror, you should enable TimeMark or CDP on the near-line server. This way your data will have periodic snapshots and you will be able to roll back your data when needed. For disaster recovery purposes, you can also enable replication for a near-line disk to replicate the data to another location.


Check near-line mirroring status


You can see the current status and properties of your mirroring configuration by checking the General tab for a mirrored resource.

Current status and properties of mirroring configuration.


Near-line recovery
The following is required before recovering data:
- If you are using the FC protocol, zone the appropriate initiators on your near-line server with the targets on your primary server.
- You must unassign the primary disk from its client(s).
- If enabled, disable mirroring for the near-line disk.
- If enabled, suspend replication for the near-line disk.
- All SAN resources must be online and accessible.
- If the near-line mirror is part of a group, the near-line mirror must leave the group prior to recovery.
- TimeMark must be enabled on the near-line resource and the near-line replica, if one exists. At least one TimeMark must be available to roll back to during recovery.
- If you have been using CDP and want to roll back to a specific point-in-time, you may want to create a TimeView first and view it to make sure it contains the appropriate data that you want.

Note: If the near-line recovery fails due to a TimeMark rollback failure, device discovery failure, etc., you can retry the near-line recovery by selecting Near-line Mirror Resources --> Retry Recovery on the Near-line Disk.

Recover data from a near-line mirror


Recovery is done in the console from the near-line resource.
1. Right-click on the near-line resource and select Near-line Mirror Resource --> Start Recovery. You can also start recovery by selecting TimeMark --> Rollback.
2. Add the near-line server as a client of the primary server. You will go through several screens to add the client:
- Confirm or specify the IP address the near-line server will use to connect to the primary server as a client. This IP address is used for iSCSI; it is not used for Fibre Channel.
- Determine if you want to enable persistent reservation for the client (near-line server). This allows clustered clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.
- Select the client's protocol(s). If you select iSCSI, you must indicate if this is a mobile client.
- (FC protocol) Select or add WWPN initiators for the client.
- (FC protocol) Specify if you want to use Volume Set Addressing (VSA). VSA is used primarily for addressing virtual buses, targets, and LUNs. If your client requires VSA to access a broader range of LUNs, you must enable it for the client.
- If your storage devices use VSA, you must enable it.
- (iSCSI protocol) Select the initiator that this client uses. If the initiator does not appear, you may need to rescan. You can also manually add it, if necessary.
- (iSCSI protocol) Add/select users who can authenticate for this client.

3. Click OK to begin the recovery process.

4. Select the point-in-time to which you want to roll back.

Rollback restores your drive to a specific point in time, based on an existing TimeMark or your CDP journal. After rollback, your drive will look exactly like it did at that point in time.

You can select to roll back to any TimeMark. If this resource has CDP enabled and you want to select a specific point-in-time, type in the exact time. Once you click OK, the system will roll back the near-line mirror to the specified point-in-time and will then synchronize the data back to the primary server. When the process is completed, your screen will look similar to the following:


5. When the Mirror Synchronization Status shows the status as Synchronized, you can select Near-line Mirror Resource --> Resume Config to resume the configuration of the near-line mirror. This re-sets the original near-line configuration so that the primary server can begin mirroring to the near-line mirror. 6. Re-assign your primary disk to its client(s).

Recover data from a near-line replica


Another type of recovery is recovering from the TimeMark of the near-line replica disk. The following is required before recovering data from a near-line replica:
- All of the clients assigned to the primary disk must be removed.
- The near-line disk and the replica disk must be in sync as required for role reversal.
- If the near-line disk is already enabled with a mirror, the mirror must be removed first.

Recovery is performed via the console from the near-line resource as described below:
1. Right-click on the near-line resource and select Replication --> Recovery --> Prepare/Start.


2. Click OK to update the configuration for recovery.

3. Click OK to perform role reversal.


The Recovery from Near-line Replica TimeMark screen displays.

4. Select the TimeMark to roll back to in order to restore your drive to a specific point in time. Once you click OK, the system will roll back the near-line mirror to the specified point-in-time.
5. Perform replication synchronization from the REVERSED near-line replica disk to the near-line disk after successful rollback. This will synchronize the rollback data from the REVERSED replica to the near-line disk and the primary disk, since the near-line disk is the replica now and the primary disk is the mirror of the near-line disk. To do this:
- Right-click on the REVERSED near-line replica disk.
- Select Replication --> Synchronize.
6. Perform role reversal to switch the near-line disk back to being the replication primary disk and resume the near-line mirroring configuration. To do this:
- Right-click on the REVERSED near-line replica disk.
- Select Replication --> Recovery --> Resume Config.


The Resume Near-line Mirroring from Near-line Replica Recovery screen displays.

7. Click OK to switch the role of the Near-line disk and the Near-line replica and resume near-line mirroring. 8. Re-assign your primary disk to its client(s).

Recover from a near-line replica TimeMark using forceful role reversal


Recovery from a near-line replica TimeMark with forceful role reversal can be used when the near-line server is not available. However, this only works if both the near-line disk and the near-line replica are enabled with TimeMark. To prepare for recovery:
- Suspend replication on the near-line server.
- Unassign all of the clients from the primary disk on the primary server.
- Suspend the near-line mirror on the primary disk to prevent mirror synchronization of the near-line disk.
- Suspend CDP on the near-line disk and the near-line replica.

To recover using this method:
1. Perform forceful role reversal on the near-line replica.
- Right-click on the replica disk and select Replication --> Reversal. The procedure will fail because the server is not available. Click OK.
- Click the Cancel button at the login screen to exit the login dialog.


The Forceful Replication Role Reversal screen displays. Type Yes to confirm and click OK.

Click OK to switch the roles of the replica disk and primary server.

The replica is promoted.
2. Perform TimeMark rollback on the reversed Near-line Replica. Right-click on the reversed Near-line Replica and select TimeMark --> Rollback.


Select the TimeMark you are rolling back to and click OK.

3. Perform Repair Replica from the reversed Near-line Replica after the near-line server is online.
Note: You must set the near-line disk to Recovery Mode before repairing the replica.

4. Right-click on the reversed Near-line Replica and select Replication --> Repair

5. Perform synchronization on the reversed Near-line Replica. Right-click on the reversed Near-line Replica and select Replication --> Synchronize.
6. Once synchronization is finished, perform role reversal from the reversed Near-line Replica. Right-click on the reversed Near-line Replica and select Replication --> Reversal.


7. When the Mirror Synchronization Status shows the status as Synchronized, you can select Near-line Mirror Resource --> Resume Config to resume the configuration of the near-line mirror. This re-sets the original near-line configuration so that the primary server can begin mirroring to the near-line mirror.
8. Once the near-line mirror configuration has resumed, you can resume the Near-line Mirror, Replication, and CDP.
9. Re-assign your primary disk to its client(s).

Swap the primary disk with the near-line mirrored copy


Right-click on the primary SAN resource and select Near-line Mirror --> Swap to reverse the roles of the primary disk and the mirrored copy. You will need to do this if you are going to perform maintenance on the primary disk or if you need to remove the primary disk.
Note: When swapping the primary disk with the near-line mirrored copy, the mirror will swap back to the primary disk if the mirror is in sync and a set period of time has passed. This is done to reduce the amount of load on the disk from the Near-line server. The time to swap back is based on the global sync option in the console.

Manually synchronize a near-line mirror


The Synchronize option re-synchronizes a mirror and restarts the mirroring process once it is synchronized. This is useful if one of the mirrored disks has a minor failure, such as a power loss. 1. Fix the problem (turn the power back on, plug the drive in, etc.). 2. Right-click on the primary resource and select Near-line Mirror --> Synchronize. During the synchronization, the system will monitor the process and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.

Rebuild a near-line mirror


The Rebuild option rebuilds a mirror from beginning to end and starts the mirroring process once it is synchronized. After rebuilding the mirror, you would swap the mirror so that the primary server could service clients again. To do this, right-click on a primary resource and select Near-line Mirror --> Rebuild. You can see the current settings by checking the Mirror Synchronization Status field on the General tab of the resource.


Expand a near-line mirror


Use the Expand SAN Resource Wizard to expand the near-line mirror. Make sure the near-line server is up and running. If the near-line server is down, you will not be able to expand the primary disk or the near-line mirror disk. However, if the primary server is down, you can still expand the near-line mirror and the primary disk will be expanded in the next mirror expansion. You can expand the near-line mirror with or without the near-line replica server. If a near-line replica server exists, both the near-line mirror and the replica disk will be expanded at the same time.
To expand a virtualized disk:
1. Right-click on the primary disk or near-line mirror and select Expand. If you want to enlarge the primary disk, you will need to enlarge the mirrored copy to the same size. The Expand SAN Resource Wizard will automatically lead you through expanding the near-line mirror disk first. The Expand SAN Resource Wizard screen displays.


2. Select the physical storage.

3. Select an allocation method and specify the size to allocate.


The near-line mirror and the replica expand.

4. Click Finish to confirm the expansion of the near-line mirror and the replica. You are automatically routed back to the beginning of the Expand SAN Resource Wizard to expand the primary server.
Note: Thin provisioning is not supported with near-line mirroring.

Expand a service-enabled disk


To expand the service-enabled disk, the near-line mirror expand size must be greater than or equal to the primary disk expand size. You must expand the storage size on the physical disk first. Then go to the console and rescan the physical disk. Once you have performed a rescan of the physical disk, follow the same steps described above to expand the disk.


Suspend/resume near-line mirroring


When you manually suspend a mirror, the system will not attempt to re-synchronize, even if you have a re-synchronization policy. You will have to resume the mirror in order to synchronize. When you resume mirroring, the mirror is synchronized before mirroring is resumed. During the synchronization, the system will monitor the process and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit. To suspend/resume mirroring for a resource: 1. Right-click on a primary resource and select Near-line Mirror --> Suspend (or Resume). You can see the current settings by checking the Mirror Synchronization Status field on the General tab of the resource.

Change your mirroring configuration options


Set global mirroring options
You can set global mirroring options that affect system performance during all types of mirroring (near-line, synchronous, or asynchronous). While the default settings should be optimal for most configurations, you can adjust the settings for special situations. To set global mirroring properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Synchronize Out-of-Sync Mirrors - Determine how often the system should check and attempt to resynchronize active out-of-sync mirrors, how often it should retry synchronization if it fails to complete, and whether or not to include replica mirrors. These settings will only be used for active mirrors. If a mirror is suspended because the lag time exceeds the acceptable limit, that resynchronization policy will apply instead.

The mirrored devices must be the same size. If you want to enlarge the primary disk, you will need to enlarge the mirrored copy to the same size. When you use the Expand SAN Resource Wizard, it will automatically lead you through expanding the near-line mirror disk first.


Change properties for a specific primary resource

You can change the following near-line mirroring configuration for a primary resource: Policy for monitoring the mirroring process Conditions for re-synchronization Throughput control policies

To change the configuration:
1. Right-click on a primary resource and select Near-line Mirror --> Properties.
2. Make the appropriate changes and click OK.

Change properties for a specific near-line resource
For a near-line mirroring resource, you can only change the IP address that is used by the near-line server to connect to the primary server. To change the configuration:
1. Right-click on a near-line resource and select Near-line Mirror Resource --> Properties.
2. Make the appropriate change and click OK.

Remove a near-line mirror configuration


You can remove a near-line mirror configuration from the primary or near-line mirror resource(s). From the primary server, right-click on the primary resource and select Near-line Mirror --> Remove. From the near-line server, right-click on the near-line resource and select Near-line Mirror Resource --> Remove.


Recover from a near-line mirroring hardware failure


Replace a failed disk
If one of the mirrored disks has failed and needs to be replaced:
1. Right-click on the resource and select Near-line Mirror --> Remove to remove the mirroring configuration.
2. Physically replace the failed disk.
3. Re-run the Near-line Mirroring wizard to create a new mirroring configuration.

If both disks fail
If a disaster occurs at the site where the primary and near-line server are housed, it is possible to recover both disks if you had replication configured for the near-line disk to a remote location. In this case, after removing the mirroring configuration and physically replacing the failed disks, you can perform a role reversal to replicate all of the data back to the near-line disk. Afterwards, you can recover the data from the near-line mirror back to the primary disk.

Fix a minor disk failure
If one of the mirrored disks has a minor failure, such as a power loss:
1. Fix the problem (turn the power back on, plug the drive in, etc.).
2. Right-click on the primary resource and select Near-line Mirror --> Synchronize. This re-synchronizes the disks and restarts the mirroring.

If the near-line server is set up as a failover pair and is in a failed state
If you are performing a near-line recovery and the near-line server is set up as a failover pair, always add the first and second nodes of the failover set to the primary for recovery.
1. Select the proper initiators for recovery.
2. Assign both nodes back to the primary for recovery.
Note: There are cases where the server may not show up in the list since the machine may be down and the particular port is not logged into the switch. In this situation, you must know the complete WWPN of your recovery initiator(s). This is important in cases where you need to manually enter the WWPN into the recovery wizard to avoid any adverse effects during the recovery process.


Replace a disk that is part of an active near-line mirror (Updated Jan.2012)


If you need to replace a disk that is part of an active near-line mirror storage, take the primary disk offline first. Then follow the steps below. If the primary server is part of a High Availability (HA) set, take the disks offline for both servers before proceeding.
1. If you need to replace the primary disk, right-click on the primary resource and select Near-line Mirror --> Swap to reverse the roles of the disks.
2. Take the original primary disk (now the mirror disk) offline. If the primary server is part of a High Availability (HA) set, take the disks offline for both servers before proceeding.
3. Select Near-line Mirror --> Replace Primary Disk. Select Rescan Physical Resources from the console if the Replace Primary Disk option is not available.
4. Replace the disk.
5. Synchronize the mirror and swap the disks to reverse their roles.

Set Recovery Mode


The Set Recovery Mode option should only be used when recovering data from a near-line replica TimeMark using forceful role reversal.


ZeroImpact Backup
FalconStor's ZeroImpact Backup Enabler allows you to perform a local raw device tape backup/restore of your virtual drives. A raw device backup is a low-level backup or full copy request for block information at the volume level. Linux's dd command generates a low-level request. Examples of Linux applications that have been tested with the storage server to perform raw device backups include BakBone's NetVault version 7.42 and Symantec Veritas NetBackup version 6.0. Using the FalconStor ZeroImpact Backup Enabler with raw device backup software eliminates the need for the application server to play a role in backup and restore operations. Application servers on the SAN benefit from better performance and the elimination of overhead associated with backup/restore operations because the command and data paths are rendered exclusively local to the storage server. This results in the most optimal data transfer between the disks and the tape, and is the only way to achieve net transfer rates that are limited only by the disk's or tape's engine. The backup process automatically leverages the FalconStor snapshot engine to guarantee point-in-time consistency. To ensure full transactional integrity, this feature integrates with FalconStor Snapshot Agents and the Group Snapshot feature.

Configure ZeroImpact backup


You must have a Snapshot Resource for each virtual device you want to back up. If you do not have one, you will be prompted to create one. Refer to Create a Snapshot Resource (Updated April 2012) for more information. 1. Right-click on the SAN resource that you want to back up and select Backup --> Enable.
Note: There is a maximum of 255 virtual devices that can be enabled for ZeroImpact backup.


2. Enter a raw device name for the virtual device that you want to back up.

3. Configure the backup policy.

Use an existing TimeMark snapshot - (This option is only valid if you are using FalconStor's TimeMark option on this SAN resource.) If a TimeMark exists for this virtual device, that image will be used for the backup. It may or may not be a current image at the time backup is initiated. If a TimeMark does not exist, a snapshot will be taken.
Create a new snapshot - A new snapshot will be created for the backup, ensuring the backup will be made from the most current image.
4. Determine how long to maintain the backup session.

Each time a backup is requested by a third-party backup application, the storage server creates a backup session. Depending upon the snapshot criteria set on the previous window, a snapshot may be taken at the start of the backup session. (If the resource is part of a group, snapshots for all resources in the group will be taken at the same time.) Subsequently, each raw device is opened for backup and then closed. Afterwards, the backup application may verify the backup by comparing the data on tape with that of the snapshot image created for this session. Therefore, it is important to maintain the backup session until the verification is complete. The storage server cannot tell how long your backup application needs to rewind the tape and compare the data, so you must select an option on this screen indicating how long the storage server is to maintain the session. The session length only applies to backups (reading from a raw device), not restores (writing to a raw device). The actual session will end within 60 seconds of the session length specified.
Absolute session length - This option maintains the backup session for a set period of time from the start of the backup session. Use this option when you know approximately how long the backup operation will take. This option can also be used to limit the length of time that a backup can run. The backup operation will terminate when the Absolute Session Length timeout is reached (whether or not the backup has completed). An Event message is logged that the backup terminated when the Absolute Session Length timeout was reached.
Relative session length - This option maintains the backup session for a period of time after the backup completes (the last raw device is opened and closed). This is more flexible than the absolute session length since it may be difficult to estimate how long a backup will take for all devices. With relative time, you can estimate how long to wait after the last device is backed up. If there is a problem during the backup, and the backup cannot complete, the Inactivity timeout tells the storage server how long to wait before ending the backup session.

5. Confirm all information and click Finish to enable backup.



Back up a CDP/NSS logical resource using dd


Below are procedures for using Linux's dd command to perform a raw device backup. Refer to the documentation that came with your backup software if you are using a backup application to perform the backup.
1. Determine the raw device name of the virtual device that you want to back up. You can find this name from the FalconStor Management Console. It is displayed on the Backup tab when you highlight a specific SAN resource.
2. Execute the following command on the storage server:
dd if=/dev/isdev/kisdev# of=/dev/st0 bs=65536
where kisdev# refers to the raw device name of the logical resource.
st0 is the tape device. If you have multiple tape devices, substitute the correct number in place of the zero. You can verify that you have selected the right tape device by using the command: tar -xvf /dev/st0 (where 0 is a variable). bs=65536 sets the block size to 64K to achieve faster performance.

You can also back up a logical resource to another logical resource. Prior to doing so, all target logical resources must be detached from the client machine(s), and have backup enabled so that the raw device name for the logical resource can be used instead of specifying st0 for the tape device. When the backup is finished, you will only see one logical resource listed in the Console. This is caused by the fact that when you reserve a hard drive for use as a virtual device, the storage server writes partition information to the header and the Console uses this information to recognize the hard drive. Since a Linux dd will do an exact copy of the hard drive, this partition information will exist on the second hard drive, will be read by the Console, and only one drive will be shown. If you need to make a usable copy of a virtual drive, you should use FalconStor's Snapshot Copy option.
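For illustration only, a minimal sketch of such a disk-to-disk copy, assuming hypothetical raw device names kisdev3 for the source and kisdev7 for the detached, backup-enabled target (the actual names are shown on the Backup tab for each resource):
dd if=/dev/isdev/kisdev3 of=/dev/isdev/kisdev7 bs=65536
As noted above, the resulting copy carries the source's partition header, so use the Snapshot Copy option if you need an independently usable copy.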


Restore a volume backed up using ZeroImpact Backup Enabler


You will need to do the following in order to restore an entire volume that was backed up with the ZeroImpact Backup Enabler.
1. Unassign the volume you will be restoring from the SAN client to which it attaches. This ensures that the client cannot change data while the restore is taking place.
2. Before you start the restore, suspend replication and disable TimeMark. These can hamper the performance of the restore. Before you disable TimeMark, be sure to record the current policies. This can be done by right-clicking on the virtual drive and selecting TimeMark/CDP --> Properties.
3. Once the restore is complete, resume replication and re-enable TimeMark, if necessary.
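As a sketch only (this command is not taken from the guide and the raw device name is a placeholder), a raw restore with dd reverses the input and output devices used for the backup, writing from the tape back to the raw device:
dd if=/dev/st0 of=/dev/isdev/kisdev# bs=65536
Because this writes directly to the raw device, keep the volume unassigned from its client until the restore completes, as described in step 1 above.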


Multipathing
The Multipathing option may not be available in all IPStor, CDP, and NSS versions. Check with your vendor to determine the availability. This option allows the storage server to intelligently distribute I/O traffic across multiple Fibre Channel (FC) ports to maximize efficiency and enhance system performance. Because it uses parallel active storage paths between the storage server and storage arrays, CDP/NSS can transparently reroute the I/O traffic to an alternate storage path to ensure business continuity in the event of a storage path failure. Multipathing is possible due to the existence of multiple HBAs in the storage server and/or multiple storage controllers in the storage systems that can access the same physical LUN.

The multiple paths cause the same LUN to have multiple instances in the storage server.


Load distribution
Automatic load distribution allows for two or more storage paths to be simultaneously used for read/write operations, enhancing performance by automatically and equally dispersing data access across all of the available active paths.

Preferred paths
Some storage systems support the concept of preferred paths, which means the system determines the preferred paths and provides the means for the storage server to discover them.


Path management
From the FalconStor Management Console, you can specify a preferred path for each physical device. Right-click on the device and select Alias.

The Path Status can be Standby - Active (passive) or load-balancing (Active). Changes to the active path configuration become effective immediately, but are not saved permanently until you use the System Preferred Path --> Save option. Each path has either a good or bad state. In most cases when the deployment is an active/passive clustered pair of an NSS Gateway or NSS HC acting as a gateway, there are two load-balancing groups.
Single load-balancing group: Once the path is determined to be defective, it will be removed from the load-balanced group and will not be re-used after the path is restored unless there are no more good paths available or a manual rescan is performed. If either occurs, the path will be added back to the load-balanced group.
Two load-balancing groups: If there are two load-balanced groups (one is active and the other is passive) for the physical device, then when there are no more good paths left in the active load-balanced group, the device will fail over to the passive load-balancing group.


You can see multipathing information from the console by checking the Alias tab for a LUN (under Fibre Channel Devices).

For each device, you see the following:
Path Status: Current, Standby Active, Standby Passive, or Load-balancing
- Current: Displays if only one path is being used.
- Standby Active: Displays when a path is in the active group and is ready. A rescan from the console will make it load-balanced.
- Standby Passive: Displays for all passive paths.
- Load-balancing: Displays for all active paths across which the I/O is being balanced.
Standby Passive path(s) cannot be used until the LUN is trespassed. The load is then balanced across the standby passive paths and the earlier load-balanced paths now become standby passive.
Connectivity status: Indicates whether the device is connected or disconnected.

The SCSI Devices tab displays a table with sizing information. If you are using alias paths for your multi-path Fibre Channel devices, the device size displays as N/A (as shown in the table below):

Only the actual path size is calculated in order to provide an accurate calculation of actual size. The icon for a device using an alias path displays in black and white. For a multi-path device, all SCSI Devices can display under one specific Fibre Channel Adapter. This does not mean load balance is not active; the adapter number is just a place holder.

Command Line Interface


The Command Line Interface (CLI) is a simple interface that allows client machines to perform some of the more common functions currently performed by the FalconStor Management Console. Administrators can use the CLI to automate many tasks, as well as integrate CDP/NSS with their existing management tools. The CLI utility can be downloaded from the FalconStor website (on the customer support portal and TSFTP) under the SAN client category.

Install and configure the CLI


The CLI is installed as part of the CDP/NSS Client installation. Once installed, a path must be set up for Windows clients in order to be able to use the CLI. The path can be set up from the Windows Desktop by performing the following steps:
1. Right-click My Computer and select Properties --> Advanced system settings --> Environment Variables button.
2. Highlight the Path variable in the System Variables box, click the Edit button and add the following to the end of the existing path: ;c:\Program Files\FalconStor\IPStor\Client
3. Click OK to save and exit.
For Linux, Solaris, AIX, and HP-UX clients, the path is automatically set during the Client installation. In order to use the CLI, Linux, Solaris, AIX, and HP-UX client users must exit the current shell at least once after installing the client software so that the new environment takes effect.

Use the CLI


CLI command usage help can be obtained by typing: iscli [help] [<command>] [server parameters]. To run a CLI command, type: iscli <command> <parameters>
Note: You should not have a console connected to your storage server when you run CLI commands; you may see errors in the syslog if a console is connected.

Type iscli at a command line to display a list of the existing commands. For example: c:\iscli These commands must be combined with the appropriate long or short arguments (ex. Long: --server-name servername Short: -s servername). If you type the command name (for example, c:\iscli getvdevlist), a list of arguments will be displayed for that command. Refer to Command Line Interface (CLI) error codes in the Troubleshooting / FAQs section for a list of CLI error codes.
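As an illustrative sketch only (the server address, user name, and password below are placeholder values, not taken from this guide), a basic session logs into the server once, runs commands against it, and then logs out:
iscli login -s 10.1.1.10 -u root -p password
iscli getvdevlist -s 10.1.1.10
iscli logout -s 10.1.1.10
The login, getvdevlist, and logout commands are described in the command table later in this chapter; the -s, -u, and -p arguments are listed under Common arguments below.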


Common arguments
The following arguments are used throughout the CLI. For each, a long and short variation is included. You can use either one. The short arguments ARE case sensitive. For arguments that are specific to each command, refer to the section for that command.
Short argument, long argument, and value/description:

-s (--server-name) - Storage server name (hostname or IP address). In order to use the hostname, the server name has to be resolvable on the client side and server side.
-u (--server-username) - Storage server username
-p (--server-password) - Storage server user password
-S (--target-name) - Storage target server name (hostname or IP address)
-U (--target-username) - Storage target server username (for replication commands)
-P (--target-password) - Storage target server user password (for replication commands)
-c (--client-name) - Storage client name
-v (--vdevid) - Storage virtual device ID
-v (--source-vdevid) - Storage server source virtual device ID
-V (--target-vdevid) - FalconStor target virtual device ID
-a (--access-mode) - Client access mode to virtual device
-f (--force) - Force the deletion of the virtual device
-n (--vdevname) - Virtual device name
-X (--rpc-timeout) - Specify a number between 1 and 30000 seconds for the RPC timeout. The default is 30 seconds if not specified.

Note: You only need to use the --server-username (-u) and --server-password (-p) arguments when you log into a server. You do not need them for subsequent commands on the same server during your current session.
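For example, the following two invocations are equivalent (the server address and credentials are placeholder values, shown only to illustrate the long and short forms):
iscli getvdevlist --server-name 10.1.1.10 --server-username root --server-password password
iscli getvdevlist -s 10.1.1.10 -u root -p password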


Commands
Below is a list of commands you can use to perform CDP/NSS functions from the command line. You should be aware of the following as you enter commands:
- Type each command on a single line, separating arguments with a space.
- You can use either the short or long arguments (as described above).
- For details and a list of arguments for each command, type iscli and the command. For example: c:\iscli getvdevlist
- Variables are listed in <> after each argument.
- Arguments listed in brackets [ ] are optional. The order of the arguments is irrelevant.
- Arguments separated by | are choices. Only one can be selected.
- For a value entered as a literal, it is necessary to enclose the value in quotes (double or single) if it contains special characters such as *, <, >, ?, |, %, $, or space. Otherwise, the system will interpret the characters with a special meaning before it is passed to the command. Literals cannot contain leading or trailing spaces. Leading or trailing spaces enclosed in quotes will be removed before the command is processed.
- In order to use the hostname of the storage server instead of its IP address, the server name has to be resolvable on the client side and server side.

The following table provides a summary of the command line interface options along with a description.

Command Line Interface (CLI) description table


Login/Logout of the storage server
iscli login - This command allows you to log into the specified storage server with a given username and password.
iscli logout - This command allows you to log out of the specified storage server. If the server was not logged in or you have already logged out from the server when this command is issued, error 0x0902000f will be returned. After logging out from the server, the -u and -p arguments will not be optional for the server commands.

Client Properties
iscli setfcclientprop - This command allows you to set Fibre Channel client properties. <client-name> is required.
iscli getclientprop - This command allows you to get client properties.
iscli setiscsiclientprop - This command allows you to set iSCSI client properties. <user-list> is in the following format: user1,user2,user3



iSCSI Targets
iscli createiscsitarget - This command creates an iSCSI target. <client-name>, <ip-address>, and <access-mode> are required. A default iSCSI target name will be generated if <iscsi-target-name> is not specified.
iscli deleteiscsitarget - This command deletes an iSCSI target. <client-name> and <iscsi-target-name> are required.
iscli assigntoiscsitarget - This command assigns a virtual device or group to an iSCSI target. A virtual device or group (either ID or name) and iSCSI target are required. All virtual devices in the same group will be assigned to the specified iSCSI target if group is specified. If a virtual device ID is specified and it is in a group, an error will be returned.
iscli unassignfromiscsitarget - This command unassigns a virtual device or group from an iSCSI target. Virtual device and iSCSI target are required. The -f (--force) option is required when the iSCSI target is assigned to the client and the client is connected or when the virtual device is in a group. An error will be returned if the client is connected and the force option is not specified.
iscli getiscsitargetinfo - This command retrieves information for iSCSI targets. The iSCSI target ID or iSCSI target name can be specified to get the specific iSCSI target information. The default is to get the information for all iSCSI targets.
iscli setiscsitargetprop - This command sets the iSCSI target properties. Refer to Create iSCSI target above for details about the options.

Users and Passwords


iscli adduser - This command allows you to add a CDP/NSS user. You must log in to the server as "root" in order to perform this operation.
iscli setuserpassword - This command allows you to change a CDP/NSS user's password. You must log in to the server as "root" in order to perform this operation if the user is not an iSCSI user.

Mirroring
iscli createmirror - This command allows you to create a mirror for the specified virtual device. The virtual device can be a SAN or Replica resource.
iscli getmirrorstatus - This command shows the mirror status of a virtual device. The resource name, ID and synchronization status will be displayed if there is a mirror disk configured for the virtual device.
iscli syncmirror - This command synchronizes the mirrored disks.
iscli swapmirror - This command reverses the roles of the primary disk and the mirrored copy.

iscli promotemirror - This command allows you to promote a mirror disk to a regular virtual device. The mirror cannot be promoted if the synchronization is in progress or when it is out-of-sync and the force option is not specified.
iscli removemirror - This command allows you to remove a mirror for the specified virtual device.
iscli enablealternativereadmirror - This command enables virtual devices to read from an alternative mirror.
iscli disablealternativereadmirror - This command disables virtual devices so they no longer read from an alternative mirror.
iscli getalternativereadmirroroption - This command retrieves and displays information about all virtual devices with the alternative mirror option.
iscli migrate - This command allows you to copy a virtual device without a snapshot. The original virtual device becomes a new virtual device with a new virtual device ID. The original virtual device name and ID will be kept, but with segments allocated from different storage. If the virtual device does not have a mirror, it will create a mirror, sync the mirror, swap the mirror, then promote the mirror. If the virtual device already has a mirror, it will swap the mirror, sync the mirror, promote the mirror, then re-create the mirror for the original VID.
iscli getmirrorpolicy - The following is an example of the output of the command if the Mirror Health Monitoring Option is enabled:
    Mirror Health Monitoring Option Enabled=Yes
    Monitoring Interval=1 seconds
    Maximum Acceptable Lagging Time=15 milliseconds
    Threshold to Report Error=5 %
    Minimum outstanding IOs to Report Error=20
    Mirror Sync Control Policy:
    Sync Control Policy Enabled=Yes
    Sync Control Max Sync Time=4 Minute(s)
    Sync Control Max Resync Interval=1 Minute(s)
    Sync Control Max IOs for Resync=N/A
    Sync Control Max IO Size for Resync=20 MB
    Sync Control Max Resync Retry=0
iscli setmirrorpolicy - The Mirror policy is for resources enabled with the mirroring option. You can set the options to check the mirror health status, suspend, resume and re-synchronize the mirror when it is necessary.
iscli suspendmirror - This command allows you to suspend mirroring.
iscli resumemirror - This command allows you to resume mirroring.
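As an illustrative sketch (the server address and virtual device ID are placeholders; type iscli getmirrorstatus or iscli syncmirror without arguments to see the full argument list for each command), checking a mirror and re-synchronizing it from a script might look like:
iscli getmirrorstatus -s 10.1.1.10 -v 12
iscli syncmirror -s 10.1.1.10 -v 12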


Server Commands for Virtual Devices and Clients


iscli createvdev
This command allows you to create a SAN resource on the specified server. A SAN resource can be created in one of the following categories: virtual or service-enabled. The default category is virtual if no category is specified.

iscli getvdevlist
This command retrieves and displays information about all virtual devices, or a specific virtual device, from the specified server.

iscli getclientvdevlist
This command retrieves and displays information about all virtual devices assigned to the client from the specified server.

iscli renamevdev
This command allows you to rename a virtual device. Only SAN resources and SAN replicas can be renamed. Specify the ID and new name of the resource to be renamed.

iscli assignvdev
This command allows you to assign a virtual device or a group on a specified server to a SAN client. If this is an iSCSI client, you can use this command to assign an iSCSI target to a client, but not a device. Use the CLI command assigntoiscsitarget to assign a device.

iscli unassignvdev
This command allows you to unassign a virtual device or a group on the specified server from a SAN client. If the client is an iSCSI client, an iSCSI target should be specified; otherwise, a virtual device should be specified.

iscli expandvdev
This command allows you to expand the size of a virtual device on the specified server. SAN resources can be expanded, but not a replica disk by itself or a TimeView resource.

iscli deletevdev
This command allows you to delete a SAN resource or SAN TimeView resource on the specified server. If the resource is assigned to a SAN client, the assignment(s) will be removed first. If a Snapshot Resource is created for the virtual device, it will be removed.

iscli setassignedvdevprop
This command allows you to set properties for assigned virtual devices. Device properties can only be changed when the client is not connected.

iscli addclient
This command allows you to add a client to the specified server.

iscli deleteclient
This command allows you to delete a client. <client-name> is the client to be deleted.

iscli enableclientprotocol
This command allows you to add a protocol to a client.

iscli disableclientprotocol
This command allows you to remove a protocol from a client.

iscli getvidbyserialno
This command allows you to get the corresponding virtual device ID when you enter a serial number (a 12-character long alphanumeric string).
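As an illustration of how these commands fit together, the sketch below provisions a virtual SAN resource and assigns it to a client. All argument placeholders (<server>, <size-in-MB>, <resource-name>, <client-name>, <vdevid>) are assumptions; consult each command's usage for its exact options.

# Create a virtual SAN resource (the category defaults to virtual)
iscli createvdev <server> <size-in-MB> <resource-name>
# Register the client with the server, then assign the new resource to it
iscli addclient <server> <client-name>
iscli assignvdev <server> <vdevid> <client-name>
# Confirm what the client now sees
iscli getclientvdevlist <server> <client-name>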

iscli addthindiskstorage
This command allows you to add additional storage to a resource configured for Thin Provisioning without changing the maximum disk size seen by the client host. The resource can be SAN, or a replica.

iscli setthindiskproperties
This command allows you to set the thin disk properties.

iscli getthindiskproperties
This command allows you to get the thin disk properties.

iscli getvdevserial
This command retrieves the serial number of the specified devices from the server.

iscli replacefcclientwwpn
This command allows you to replace the Fibre Channel client World Wide Port Name (WWPN).

iscli rescanfcclient
This command allows you to notify the Fibre Channel client to rescan the devices.

Email Alerts
iscli enablecallhome
This command allows you to enable Email Alerts.

iscli disablecallhome
This command allows you to disable Email Alerts.

Failover
iscli getfailoverstatus

This command shows you the current status of your failover configuration. It also shows all Failover settings, including which IP addresses are being monitored for failover.

Replication
iscli createreplication
This command allows you to set up a replication configuration.

iscli startreplication
This command allows you to start replication on demand for a virtual device or a group. You can only specify one identifier: -v <vdevid>, -g <group-id>, or -G <group-name>.

iscli stopreplication
This command allows you to stop the replication that is in progress for a virtual device or a group. If a group is specified and the group is enabled with replication, replication for all resources in the group will be stopped. If replication is not enabled for the group, but some of the resources in the group are configured for replication, replication for those resources will be stopped.

iscli suspendreplication
This command allows you to suspend scheduled replications for a virtual device or a group that would be triggered by your replication policy. It will not stop a replication that is currently in progress.

iscli resumereplication
This command allows you to resume replication for a virtual device or a group that was suspended by the suspendreplication command. Replication will then be triggered by the replication policy once it is resumed.
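For example, once a replication configuration exists (createreplication), an on-demand cycle for a single virtual device could be driven as in the sketch below. The -v <vdevid> identifier is the one mentioned in the startreplication description; the <primary-server> placeholder and the use of -v with the suspend/resume commands are assumptions.

# Start replication immediately for one virtual device
iscli startreplication <primary-server> -v <vdevid>
# Temporarily suspend scheduled replications for that device (does not interrupt a replication already in progress)
iscli suspendreplication <primary-server> -v <vdevid>
# Re-enable policy-driven replication later
iscli resumereplication <primary-server> -v <vdevid>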

iscli promotereplica
This command allows you to promote a replica to a regular virtual device if the primary disk is available and the replica disk is in a valid state.

iscli removereplication
This command allows you to remove the replication configuration from the primary disk on the primary server and delete the replica disk on the target server. Either a primary server with a primary disk or a target server with a replica disk can be specified.

iscli getreplicationstatusinfo
This command shows the replication status. The target server name and the replica disk ID are required to get the replication status.

iscli setreplicationproperties
This command allows you to set the replication policy for a virtual device or group configured for replication.

iscli getreplicationproperties
This command allows you to get the replication properties for a virtual device or group configured for replication.

iscli relocate
This command relocates a replica after the replica disk has been physically moved to a different server.

iscli scanreplica
This command scans a replica server.

iscli getreplicationthrottles
This command allows you to view the throttle configuration information.

iscli setreplicationthrottles
This command allows you to configure the throttle level for target sites or windows. Can accept a file; the path of the file in the command must be the full path.

iscli getthrottlewindows
This command allows you to view the information of a particular Target Site.

iscli setthrottlewindows
This command allows you to change the window start/end time. Can accept a file; the path of the file in the command must be the full path.

iscli removethrottlewindows
This command removes a custom window. Can accept a file; the path of the file in the command must be the full path.

iscli addthrottlewindows
This command creates a custom throttle window with a specific time duration. Can accept a file; the path of the file in the command must be the full path.

iscli addlinktypes
This command allows you to create a custom Link Type.

iscli gettargetsitesinfo
This command allows you to view the information of a particular Target Site.

iscli addtargetservertotargetsite
This command allows you to add a target server to an existing Target Site. Can accept a file; the path of the file in the command must be the full path.

iscli deletereplicationtargetsite
This command deletes/removes a target site from the server.

iscli createreplicationtargetsite
This command creates a target site. You can create a target site with multiple target servers at once by listing their host names in the command or by using a file. The format of the file is one server per line; the path of the file in the command must be the full path.

iscli removetargetserverfromtargetsite
This command allows you to remove a target server from an existing Target Site. Can accept a file; the path of the file in the command must be the full path.

iscli removelinktypes
This command allows you to remove a custom Link Type.

iscli setlinktypes
This command allows you to configure an existing custom Link Type.

iscli getlinktypes
This command allows you to view the available Link Types on the server.

Server configuration
iscli getserverversion

This command allows you to view the storage version and build number.

Snapshot Copy
iscli snapcopy
This command allows you to issue a snapshot copy between two virtual devices of the same size.

iscli getsnapcopystatus
This command allows you to get the status of a snapshot copy.

Physical Device
iscli getpdevinfo
This command provides you with physical device information.

iscli getadapterinfo
This command allows you to get HBA information on a selected adapter.

iscli rescandevices
This command allows you to rescan the physical resource(s) on the specified server to get the proper physical resource configuration. The adapter number can be specified to rescan only the devices on that adapter; if an adapter is not specified, all adapters will be rescanned. In addition to the adapter number, you can also specify the SCSI ID range to be rescanned. If the range is not specified, all SCSI IDs of the specified adapter(s) will be rescanned. Furthermore, the LUN range can be specified to narrow down the rescanning range. The range is specified in this format: #-#, e.g. 1-10. If you want the system to rescan the devices sequentially, you can specify the -L (--sequential) option. The default is not to rescan sequentially.

iscli importdisk
This command allows you to import a foreign disk to the specified server. A foreign disk is a virtualized physical device containing CDP/NSS logical resources previously set up on a different storage server. If the previous server is no longer available, the disk can be set up on a new storage server and the resources on the disk can be imported to the new server to make them available to clients. Either the GUID or the SCSI address can be specified for the physical device to be imported. This information can be retrieved through the getpdevinfo command.

iscli preparedisk
This command allows you to prepare a physical device to be used by a CDP/NSS server or reserve a physical device for other usage. The <guid> is the unique identifier of the physical device. <ACSL> is the SCSI address of the physical device in this format: #:#:#:# (adapter:channel:scsi id:lun). You can specify either the <guid> or the <ACSL> for the disk to be prepared.

iscli renamephysicaldevice
This command allows you to rename a physical device. (When a device is renamed on a server in a failover pair, the device is also renamed on the partner server.)

iscli deletephysicaldevice
This command allows you to remove a physical device.

iscli restoresystempreferredpath
This command allows you to restore the system preferred path for a physical device.
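For example, after attaching new storage you might rescan, inspect, and then prepare or import a disk as sketched below. Only the -L/--sequential option is documented above; the other placeholders (<server>, <guid-or-ACSL>) are assumptions.

# Rescan all adapters, scanning devices sequentially
iscli rescandevices <server> --sequential
# List physical devices to obtain the GUID or SCSI address (adapter:channel:scsi id:lun) of the new disk
iscli getpdevinfo <server>
# Prepare the disk for use by this server, or import it if it is a foreign disk from another server
iscli preparedisk <server> <guid-or-ACSL>
iscli importdisk <server> <guid-or-ACSL>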

TimeMark/CDP
iscli enabletimemark

This command allows you to enable the TimeMark option for an individual resource or for a group. TimeMark can be enabled for a resource as long as it is not yet enabled.

iscli createtimemark
This command allows you to create a TimeMark for a virtual device or a group. A timestamp will be associated with each TimeMark. A notification will be sent to the SAN client to stop writing data to its virtual devices before the TimeMark is created. The new TimeMark is not immediately available after a successful createtimemark command. The TimeMark creation status can be retrieved with the gettimemarkstatus command. The TimeMark timestamp information can be retrieved with the gettimemark command.

iscli disabletimemark
This command allows you to disable the TimeMark option for a virtual device or a group.

iscli updatetimemarkinfo
This command is only available in version 5.1 or later and lets you add a comment or change the priority of an existing TimeMark. A TimeMark timestamp is required to update the TimeMark information.

iscli deletetimemark
This command allows you to delete a TimeMark for a virtual device or a group. <timemark-timestamp> is the TimeMark timestamp to be selected for deletion, in the following format: YYYYMMDDhhmmss.

iscli copytimemark
This command allows you to copy the specified TimeMark to an existing or newly created virtual device of the same size. The copying status can be retrieved with the gettimemarkstatus command.

iscli selecttimemark
This command allows you to select a TimeMark and create a raw device on the server to be accessed directly. Only one raw device can be created per TimeMark. The corresponding deselecttimemark command should be issued to release the raw device when it is no longer needed.

iscli deselecttimemark
This command allows you to release the raw device associated with the TimeMark previously selected via the selecttimemark command.

iscli rollbacktimemark
This command allows you to roll back a virtual device to a specific point in time. The rollback status can be retrieved with the gettimemarkstatus command.

iscli gettimemark
This command allows you to enumerate the TimeMarks and view the TimeMark information for a virtual device or for a group.

iscli settimemarkproperties
This command allows you to change the TimeMark properties, such as the automatic TimeMark creation schedule and the maximum number of TimeMarks allowed for a virtual device or a group.

iscli gettimemarkproperties
This command allows you to view the current TimeMark properties associated with a virtual device or a group. When the virtual device is in a group, the TimeMark properties can only be retrieved for the group.
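A typical point-in-time workflow built from these commands is sketched below. The <server> placeholder, the -v <vdevid> identifier (borrowed from the replication commands), and the way the timestamp is passed are assumptions; only the YYYYMMDDhhmmss timestamp format is taken from the descriptions above.

# Create a TimeMark, then poll until creation completes
iscli createtimemark <server> -v <vdevid>
iscli gettimemarkstatus <server> -v <vdevid>
# List existing TimeMarks and their timestamps
iscli gettimemark <server> -v <vdevid>
# Roll the virtual device back to a chosen point in time (timestamp format: YYYYMMDDhhmmss)
iscli rollbacktimemark <server> -v <vdevid> <timemark-timestamp>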

iscli gettimemarkstatus
This command allows you to retrieve the TimeMark creation state and TimeMark rollback or copying status.

iscli createtimeview
This command allows you to create a TimeView virtual device associated with a specified virtual device and TimeMark.

iscli remaptimeview
This command remaps a TimeView associated with a specified virtual device and TimeMark. The original TimeView is deleted and all changes to it are lost. A new TimeView is created with the new TimeMark using the same TimeView device ID. All of the connection assignments are retained.

iscli suspendcdpjournal
This option suspends CDP. After the CDP journal is suspended, data will not be written to it until it is resumed.

iscli resumecdpjournal
This option resumes CDP after it has been suspended.

iscli getcdpjournalstatus
This command gets the current size and status of your CDP journal, including all policies.

iscli removetimeviewdata
This command allows you to remove TimeView data resources individually or by source virtual device.

iscli getcdpjournalinfo
This command allows you to retrieve CDP journal information.

iscli createcdpjournaltag
This command lets you manually add a tag to the CDP journal. The -A (--cdp-journal-tag) tag can be up to 64 characters long and serves as a bookmark in the CDP journal. Instead of specifying the timestamp, the tag can be used when creating a TimeView.

iscli getcdpjournaltags
This command allows you to retrieve CDP journal tags.
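For example, CDP journal tags can act as readable bookmarks that are later used to create a TimeView, as sketched below. The -A/--cdp-journal-tag option is documented for createcdpjournaltag above; reusing the same option name with createtimeview, as well as the <server> and -v <vdevid> placeholders, is an assumption.

# Tag the journal at an application-consistent moment (for example, right after quiescing a database)
iscli createcdpjournaltag <server> -v <vdevid> --cdp-journal-tag "before-schema-upgrade"
# List the existing tags to confirm
iscli getcdpjournaltags <server> -v <vdevid>
# Create a TimeView from the tag instead of from a raw timestamp
iscli createtimeview <server> -v <vdevid> --cdp-journal-tag "before-schema-upgrade"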

Snapshot Resource
iscli createsnapshotresource
This command allows you to create a snapshot resource for a virtual device. A snapshot resource is required in order for a virtual device to be enabled with the TimeMark or Backup options. It is also required for replication, snapshot copy, and for joining a group.

iscli deletesnapshotresource
A snapshot resource is not needed if the virtual device is not enabled for the TimeMark or Backup options, is not configured for replication, and is not in a group; you can delete the snapshot resource to free up space when it is not needed. A snapshot resource cannot be deleted when the virtual device is in a Snapshot Group, or when the snapshot is online.

iscli expandsnapshotresource
This command allows you to expand the snapshot resource on demand. The maximum size specified in the snapshot policy only applies to automatic expansion; the size limit does not apply when the snapshot resource is expanded on demand.

iscli setsnapshotpolicy
This command allows you to modify the existing snapshot policy for the specified resource. The new policy will take effect with the next snapshot operation.

iscli getsnapshotpolicy
This command allows you to view the snapshot policy settings for the specified resource.

iscli enablereclamationpolicy
This command allows you to set the reclamation policy settings for the specified resource.

iscli disablereclamationpolicy
This command allows you to disable the reclamation policy settings for the specified resource.

iscli startreclamation
This command allows you to manually start the reclamation process for the specified resource.

iscli stopreclamation
This command allows you to manually stop the reclamation process for the specified resource.

iscli updatereclaimpolicy
This command allows you to update the reclamation policy settings for the specified resource.

iscli getreclamationstatus
This command allows you to retrieve and view the reclamation status for the specified resource.

iscli reinitializesnapshotresource
This command allows you to reinitialize the snapshot resource for the specified virtual device.

iscli getsnapshotresourcestatus
This command allows you to view snapshot resource status information. The output will be similar to the following:
Virtual Device Name=Sarah-00457
ID=457
Type=SAN
Snapshot Resource Size=58827 MB
Snapshot Resource Status=Accessible
Used Size=47.54 GB(82%)

iscli setreclamationpolicy
This command allows you to set the reclamation policy on a selected virtual device.

iscli setglobalreclamationpolicy
This command allows you to set the global reclamation policy.

iscli getsnapshotgroups
This command allows you to retrieve group information for all groups or a specific group on the specified server. The default output format is a list of groups and a list of group members in each group.

iscli createsnapshotgroup
This command allows you to create a group, where <group-name> is the name for the group. The maximum length for the group name is 64 characters. The following characters are invalid for the group name: <>"&$/\

iscli deletesnapshotgroup
This command allows you to delete a group. A group can only be deleted when there are no group members in it. If the group is configured for replication, both the primary group and the replica group have to be deleted. The force option is required if one of the following conditions applies: deleting the replica group on the target server when the primary server is not available, or deleting the primary group on the primary server when the target server is not available. An error will be returned if the force option is not specified for these conditions.

iscli joinsnapshotgroup
This command allows you to add a virtual device to the specified group. <vdevid> is the virtual device to join the group. Either <group-id> or <group-name> can be specified for the group.

iscli leavesnapshotgroup
This command allows you to remove a virtual device from a group. If the group is configured for replication, both the primary and target servers need to be available because the system will remove the primary disk from the group on the primary server and the replica disk from the group on the target server. You can use the force option to allow the primary disk to leave the group on the primary server without connecting to the target server, or to allow the replica disk to leave the group on the target server without connecting to the primary server. The force option should only be used when either the primary disk is no longer in the primary group or the replica disk is no longer in the replica group.

iscli enablereplication
This command allows you to enable replication for a group. Specify the <group-id> or <group-name> for the group that should have replication enabled. All of the resources in the group have to be configured with replication in order for the group to be enabled for replication. Use the -E (--enable-resource-option) option to allow the system to configure the non-eligible resources with replication first before enabling the group replication option. A target server must be specified. A group for the replica disks will be created on the target server. You can specify the <target-group-name> or use the default; the default is to use the same group name.

iscli disablereplication
This command allows you to disable replication for a group. All replica disks will leave the replica group and the replica group on the target server will be deleted. The replication configuration of all resources in the group will remain the same, but TimeMarks will no longer be taken for all resources together. All replication operations will be applied to the individual resources only.
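For example, grouping resources so that TimeMark and replication operations apply to them together might look like the sketch below. The <server> placeholder and the -v/-G identifiers (borrowed from the replication commands) are assumptions; the descriptions above only state that <vdevid> and <group-id> or <group-name> must be supplied.

# Create a group and add two virtual devices to it
iscli createsnapshotgroup <server> <group-name>
iscli joinsnapshotgroup <server> -v <vdevid1> -G <group-name>
iscli joinsnapshotgroup <server> -v <vdevid2> -G <group-name>
# Review group membership
iscli getsnapshotgroups <server>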

Cache resources
iscli createcacheresource
This command creates a cache resource for a virtual device or a group.

iscli getcacheresourcestatus
This command gets the status of a cache resource.

iscli setcacheresourceprop
This command sets the properties of a cache resource.

iscli getcacheresourceprop
This command displays the properties of a cache resource.

iscli suspendcacheresource
This command suspends a cache resource. After the cache resource is suspended, no new data will be written to it. The data on the cache resource will be flushed to the source resource.

iscli resumecacheresource
This command resumes a suspended cache resource.

iscli deletecacheresource
This command deletes a cache resource. The data on the cache resource has to be flushed before the cache resource can be deleted. The system will suspend the cache resource first if it is not already suspended.
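Because a cache resource must be flushed before it can be removed, teardown follows the order sketched below; the <server> and <vdevid> placeholders are assumptions.

# Suspend the cache resource; new writes stop and cached data is flushed to the source resource
iscli suspendcacheresource <server> <vdevid>
# Delete the cache resource once the flush has completed
iscli deletecacheresource <server> <vdevid>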

Report data
iscli getreportdata

This command allows you to get report data from the specified server and save the data to an output file in csv or text file format.

Event log
iscli geteventlog

This command allows you to retrieve the event log from the specified server. A date range can be specified to get the event log for a specific range. The default is to get all of the event log messages if a date range is not specified.

Backup
iscli enablebackup
This command allows you to enable the backup option for an individual resource or for a group. Backup can be enabled for a resource as long as it is not already enabled.

iscli disablebackup
This command allows you to disable backup for a virtual device or a group. Backup of a resource cannot be disabled if the resource is in a group enabled for backup. A group's backup can be disabled as long as there is no group activity using the snapshot resource. Individual resources in the group will remain backup-enabled after the group's backup is disabled.

iscli stopbackup
This command allows you to stop the backup activity for a virtual device or a group. If a group is specified and the group is enabled for backup, the backup activity for all resources in the group is stopped. If the backup option is not enabled for the group, but some of the resources in the group are enabled for backup, the backup activity for those resources is stopped.

iscli setbackupproperties
This command allows you to change the backup properties, such as the inactivity timeout, closing grace period, backup window, and backup life span, for a virtual device or a group. When the virtual device is in a group, the backup properties can only be set for the group. To remove the inactivity timeout or backup life span, specify 0 as the value.

iscli getbackupproperties
This command allows you to view the current backup properties associated with a virtual device or a group enabled for backup. When the virtual device is in a group, the backup properties can only be retrieved for the group.

Xray
iscli getxray

This command allows you to get X-ray information from the storage server for diagnostic purposes. Each X-ray contains technical information about your server, such as server messages and a snapshot of your server's current configuration and environment. You should not create an X-ray unless you are requested to do so by your Technical Support representative.


SNMP Integration
CDP/NSS provides SNMP support to integrate CDP/NSS management into an existing enterprise management solution such as HP OpenView, HP Network Node Manager (NNM), Microsoft System Center Operations Manager (SCOM), CA Unicenter, IBM Tivoli NetView, and BMC Patrol.
For Dell appliances, SNMP integration with Dell OpenManage is supported. Information can be obtained via your MIB browser (for example, by querying Dell's OID with OpenView) or via the Dell OpenManage software. For HP appliances, SNMP integration with HP Advanced Server Management (ASM) is supported. Information can be obtained via your MIB browser or from the HP Systems Insight Manager (SIM).
CDP/NSS uses the MIB (Management Information Base) to determine what data can be monitored. The MIB is a database of information that you can query from an SNMP agent. A MIB module contains the actual specifications and definitions for a MIB. A MIB file is simply a text file that contains one or more MIB modules.
There are three major areas of management:
Accounting management (including discovery) - Locates all storage servers and Windows clients. It shows how all the resources are aggregated, virtualized, and provisioned, including the number of adapters, physical devices, and virtual devices attached to a server. Most of the information comes from the storage server's configuration file (ipstor.conf).
Performance management (including statistics) - Shows information about your storage servers and clients, including the number of clients being serviced by a server, server memory used, CPU load, and the total MB transferred. Most of the information comes from the /proc/ipstor directory on the servers or the client monitor on the clients. For more information about each of the statistics, refer to the IPSTOR-MIB.txt file in the server's /usr/local/ipstor/etc/snmp/mibs directory.
Fault management - Allows a trap to be generated when certain conditions occur.


SNMPTraps
Simple Network Management Protocol (SNMP) is used to monitor systems for fault conditions, such as disk failures, threshold violations, etc. Essentially, SNMP agents expose management data on the managed systems as variables. The variables accessible via SNMP are organized in hierarchies. These hierarchies, and other metadata (such as the type and description of the variable), are described by Management Information Bases (MIBs). An SNMP-managed network consists of three key components:
- Managed devices
- Agent software, which runs on managed devices
- Network management system (NMS) software, which runs on the manager

An SNMP trap is an asynchronous event indicating that a significant event has occurred. There are statistic traps, disk-full traps, failover/recovery traps, and process-down traps. Statistics traps allow you to set a threshold for an Object Identifier (OID) so that a trap is sent when the threshold is met. In order to integrate with some third-party SNMP managers, you may need to load the MIB file. To load the MIB file, navigate to $ISHOME/etc/snmp/mibs/IPSTOR-MIB.TXT and copy the IPSTOR-MIB.TXT file to the machine running the SNMP manager.
An SNMP trap message is sent when triggered by an event. The message contains the OID, a time stamp, and specific information for each trap. Process-down traps allow you to monitor the status of the CDP/NSS modules (or processes) so that a trap is sent when a CDP/NSS component is down. The following table lists the name and description of the modules (or processes) that can be configured to be monitored:
CDP/NSS Event Log messages
CDP/NSS Event Log messages can be sent to your SNMP manager. By default, Event Log messages (informational, warnings, errors, and critical errors) will not be sent. From the FalconStor Management Console, you can determine which types of messages should be sent. To select the Trap Level:
1. Right-click on the server and select Properties --> SNMP Maintenance --> Trap Level.
2. After selecting a Trap Level, click Add to enter the name of the server receiving the traps (or its IP address if the name is not resolvable), and a Community name.
Five levels are available:
None (Default) - No messages will be sent.
Critical - Only critical errors that stop the system from operating properly will be sent.
Error - Errors (failures such as a resource not being available or an operation having failed) and critical errors will be sent.


Warning - Warnings (something occurred that may require maintenance or corrective action), errors, and critical errors will be sent.
Informational - Informational messages, errors, warnings, and critical error messages will be sent.
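As a quick check that the agent on a storage server is responding (independent of any particular SNMP manager), you can walk the FalconStor enterprise subtree with the standard net-snmp utilities. This is a sketch under a few assumptions: the net-snmp tools are installed on the manager host, the default read-only community (public) has not been changed, and 1.3.6.1.4.1.7368 is used as the enterprise OID (the trap OID 1.3.6.1.4.1.7368.0.9 referenced later in this chapter sits beneath it).

# Walk the IPStor enterprise subtree on a storage server (numeric OIDs)
snmpwalk -v 2c -c public <storage-server> .1.3.6.1.4.1.7368
# Load the IPSTOR MIB from a local directory so results are shown with symbolic names
snmpwalk -v 2c -c public -M +<dir-containing-IPSTOR-MIB.TXT> -m +IPSTOR-MIB <storage-server> .1.3.6.1.4.1.7368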

Implement SNMP support


The SNMP software is installed on the Server and Windows clients during the CDP/NSS installation.
Note: CDP/NSS installs an SNMP module that stops the native SNMP agent on the storage server. The CDP/NSS SNMP module is customized for use with your SNMP manager. If you do not want to use the CDP/NSS SNMP module, you can stop it by executing: ./ipstor stop snmpd. However, the next time the server is rebooted, it will start again. Contact technical support if you do not want it to restart on boot up.

To complete the implementation, you must install software on your SNMP manager machine and then configure the manager to support CDP/NSS. Since this process is different for each SNMP manager, please refer to the appropriate section below.


Microsoft System Center Operations Manager (SCOM)


Microsoft SCOM is a Microsoft management server with SNMP functionality. CDP/NSS supports SNMP trap integration with Microsoft SCOM 2007 R2. SNMP integration requires that you manually create a rule and discover the SNMP device from the Microsoft SCOM console. To do this:
1. From the Microsoft SCOM console, navigate to Authoring --> Management Pack Object --> Rules. Right-click and select Create a new rule. The Create Rule Wizard displays.
2. Select the type of rule to create: Alert Generating Rules --> Event based --> SNMP Trap (Alert), and click Next.
3. Enter the rule name and description, and select the rule target for the SNMP network device. The Select a Target Type screen displays, allowing you to select from the populated list or use the Look for field to filter down to a specific target or sort the targets by Management Pack.
4. In the Configure the trap OIDs to collect step, select the Use discovery community string option and enter the OID. For example: 1.3.6.1.4.1.7368.0.9
5. Configure Alerts by specifying the information that will be generated by the alert and click Create. Once the rule is created, you will be able to discover the SNMP network device.
6. Discover the SNMP network device. From the Administration node, navigate to Device Management --> Network Devices and select Discovery Wizard from the right-click menu.
7. Click Next at the Computer and Device Management Wizard screen. Then select Advanced discovery and select network device in the Computer & Device Types field.
8. Select the discovery method. Specify the IP address range (e.g. 172.11.22.333 to 172.11.22.333), type the community string (e.g. public), select the SNMP version (e.g. v2), and click Discover. After discovery you should see the network device. You can right-click on it and select Open --> Alert View to see trap information on the Alert properties screen.


HP Network Node Manager (NNM) i9


CDP/NSS provides SNMP trap integration and MIB upload for the HP management server, Network Node Manager i9 (NNMi9).
NNMi9 trap
The trap configuration can be set by logging into the NNMi9 console from the web and following the steps below:
1. From the HP Network Node Manager console, navigate to Workspaces --> Configuration, and select Incident Configuration.
2. Select the New icon under the SNMP Traps tab. Enter the Basics and then click Save and Close:
Name: IPSTOR-information
SNMP Object ID: .1.3.6.1.4.1.7368.0.9
Category: IPStor
Family: IPStor
Severity: Critical
Message Format: $oid
Navigate to Incident Browsing --> SNMP Traps to see the trap collection information.
Upload MIB
The MIB browser can be launched from the HP Network Node Manager console by selecting Tools --> MIB Browser.
1. Upload the MIB file from the HP Network Node Manager console by selecting Tools --> Upload Local MIB File. The Upload Local MIB File window launches.
2. Browse to select the MIB file from the CDP/NSS storage server and click Upload MIB. The Upload MIB File Data Results screen displays an upload summary.


HP OpenView Network Node Manager 7.5


Install
The software installation media includes software that must be installed on your HP OpenView Network Node Manager (NNM) machine. This software adds several CDP/NSS menu options into your NNM and adds a CDP/NSS MIB tree so that you can set traps. 1. Launch the software installation package. 2. Select Install Products --> Install SNMP for HP OpenView. If not automatically launched, navigate to the \SNMP\OpenView directory and run setup.exe to launch the SNMP install program. 3. Start the NNM when the installation is finished. Under the Tools menu you will see a new CDP/NSS menu option.

Configure
You need to define which hosts will receive traps from your storage server(s) and determine which CDP/NSS components to monitor. To do this: 1. In the NNM, highlight a storage server and select Tools --> SNMP MIB Browser. 2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer --> trapReg and highlight trapSinkSettingTable. The default read-only community is public. The default read-write community is falcon. Set the Community name to "falcon" so that you will be allowed to change the configuration. Click the Start Query button to query the configuration. From the MIB values field, select a host to receive traps. You can set up to five hosts to receive traps. If the value is 0, the host is invalid or not set. In the SNMP set value field, enter the IP address or machine name of the host that will receive traps. Click the Set button to save the configuration in snmpd.conf. 3. In the SNMP MIB Browser, select private --> enterprises --> ipstor --> ipstorServer --> alarmTable. Click the Start Query button to query the alarms. In the MIB values field, select which CDP/NSS components to monitor. You will be notified any time the component goes down. A description of each is listed in the SNMPTraps section. In the SNMP set value field, enter enable or 1 to enable. Click the Set button to enable the trap you selected.

View statistics in NNM


In addition to monitoring CDP/NSS components and receiving alerts, you can view CDP/NSS statistics in NNM. There are two ways to do this:
CDP/NSS menu
1. Highlight a storage server or Client and select Tools --> IPStor.
2. Select the appropriate menu option.
These reports are provided by CDP/NSS as a convenient way to view statistical information without having to go through the MIB browser. You can add your own reports to the menu by selecting Options --> MIB Application Builder: SNMP. Refer to OpenView's documentation for details on using the MIB Application Builder.
MIB browser
1. Highlight a storage server or Client and select Tools --> SNMP MIB Browser.
2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer. If this is a Client, select ipstorClient instead. From here you can view information about this storage server. If you run a query at the ipstorServer level, you will get a superset of all of the information from all of the sub-categories. For more specific information, expand the sub-categories. For more information about each of the statistics, you can click the Describe button or refer to the IPSTOR-MIB.txt file that is in your \\OpenView\snmp_mibs directory.


CA Unicenter TNG 2.2


Install
The software installation media includes software that must be installed on your CA Unicenter TNG 2.2 machine. This software creates a CDP/NSS SNMP class in Unicenter and adds a CDP/NSS MIB tree so that you can set traps.
1. Launch the software installation media.
2. Select Install Products --> Install SNMP for CA Unicenter. If not automatically launched, navigate to the \SNMP\Unicenter directory and run setup.exe to launch the SNMP install program.

Configure
You need to define which hosts will receive traps from your storage server(s) and determine which CDP/NSS components to monitor. To do this:
1. Run Unicenter's Auto Discovery. If you have a repository with existing machines and then install the storage server software, Unicenter will not automatically re-classify the machine and mark it as a storage server.
2. If you need to re-classify a machine, open the Unicenter TNG map, highlight the machine, select Reclassify Object, select Host --> IPStor SNMP, and then change the Alarmset Name to IPStorAlarm. If you want to re-align the objects on the map after re-classification, select Modes --> Design --> Folder --> Arrange Objects and then the appropriate network setting.
3. Restart the Unicenter TNG map.
4. To define hosts, right-click on the storage server and select Object View.
5. In Object View, select Configure Toolbar, set the Get Community and Set Community to falcon, and set the Model to ipstor.mib. The default community name (password) is falcon. If it was changed in the snmpd.conf file (on the storage server), enter the appropriate community name here.
6. Expand Vendor Information and highlight trapSinkSettingEntry.
7. To define a host to receive traps, highlight the trHost field of an undefined host, right-click and select Attribute Set. You can set up to five hosts to receive traps.


8. In the New Value field, enter the IP address or machine name of the host that will receive traps (such as your Unicenter TNG server). Your screen will now show that machine.
9. Highlight alarmEntry.
10. Highlight the alarmStatus field for a component, right-click and select Attribute Set.
11. Set the value to enable for on or disable for off.

View traps
1. From your Start --> Programs menu, select Unicenter TNG --> Enterprise Management --> Enterprise Managers. 2. Double-click on the Unicenter machine. 3. Double-click on Event. 4. Double-click on Console Logs.

View statistics in TNG


You can view statistics about CDP/NSS directly from the ObjectView screen. To do this, highlight a category in the tree and the CDP/NSS information will be displayed in the right pane.

Launch the FalconStor Management Console


If the FalconStor Management Console is installed on your Unicenter TNG machine, you can launch it directly from the Unicenter map by right-clicking on a storage server and selecting Launch FalconStor Management Console.


IBM Tivoli NetView 6.0.1


Install
The software installation media includes software that must be installed on your Tivoli NetView machine. This software adds several CDP/NSS menu options into NetView and adds a CDP/NSS MIB tree so that you can set traps.
1. Launch the software installation media.
2. Select Install Products --> Install SNMP for IBM Tivoli. If not automatically launched, navigate to the \SNMP\Tivoli directory and run setup.exe to launch the SNMP install program.
3. Start NetView when the installation is finished. You will see a new CDP/NSS menu option on NetView's main menu.

Configure
You need to define which hosts will receive traps from your storage server(s). To do this: 1. In NetView, highlight a storage server on the map and click the Browse MIBs button. 2. In the tree, expand enterprises --> ipstor --> ipstorServer --> trapReg --> trapSinkSettingTable --> trHost. The default read-only community is public. The default read-write community is falcon. 3. Set the Community Name so that you will be allowed to change the configuration. 4. Click the Get Values button. 5. Select a host to receive traps. You can set up to five hosts to receive traps. If the value is 0, the host is invalid or not set. 6. In the New Value field, enter the IP address or machine name of the Tivoli host that will receive traps. 7. Click the Set button to save the configuration in snmpd.conf.


View statistics in Tivoli


In addition to monitoring CDP/NSS components and receiving alerts, you can view CDP/NSS statistics in NetView. There are two ways to do this:
CDP/NSS menu
1. Highlight a storage server or Client and select IPStor from the menu.
2. Select the appropriate menu option.
For a server, you can view: memory used, CPU load, SCSI commands, MB read/written, and read/write errors. For a client, you can view: SCSI commands and an error report.
These reports are provided by CDP/NSS as a convenient way to view statistical information without going through the MIB browser. You can add your own reports to the menu by using NetView's MIB builder. Refer to NetView's documentation for details on using the MIB builder.
MIB browser
1. Highlight a storage server or Client and click Tools --> MIB --> Browser.
2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer. If this is a Client, select ipstorClient instead.
3. Select a category.
4. Click the Get Values button. The information is displayed in the bottom section of the dialog.


BMC Patrol 3.4.0


Install
The software installation media includes software that must be installed on your BMC Patrol machine. This software adds several CDP/NSS icon options into Patrol and adds several CDP/NSS MIB items so that you can retrieve information and set traps. 1. Launch the software installation media. 2. Select Install Products --> Install SNMP for BMC Patrol. If not automatically launched, navigate to the \SNMP\Patrol directory and run setup.exe to launch the SNMP install program. 3. Start Patrol when the installation is finished. 4. Click Hosts --> Add on the Patrol main menu and enter the Host Name (IP preferred), Username (Patrol administrator name of the storage server), Password (Patrol administrator password of the storage server), and Verify Password fields to add the storage server. 5. Click Hosts --> Add on the Patrol main menu and input the Host Name, Username (administrator name of the Patrol machine), Password (administrator password of the Patrol machine), and Verify Password fields to add the Patrol Console machine. 6. Click File --> Load KM on the Patrol main menu and load the IPSTOR_MODULE.kml module. 7. Click File --> Commit KM --> To All Connected Hosts on the Patrol main menu to send changed knowledge (IPSTOR_MODULE.kml) to all connected agents, including the storage server and Patrol Console machine. 8. Expand the storage server tree. You will see three new CDP/NSS sub-trees with several icons on the Patrol console.

Configure
You need to define which hosts will receive traps from your storage server(s) and determine which CDP/NSS components to monitor. To do this: 1. In the Patrol Console, on the Desktop tab, right-click the ServerInfo item in the IPS_Server subtree of one storage server and select KM Commands --> trapReg --> trapSinkSettingEntry. The default read-only community is public. The default read-write community is falcon.

2. Select a host to receive traps. You can set up to five hosts to receive traps. If the value is '0', the host is invalid or not set. 3. In the Host fields, enter the IP address or machine name of the host that will receive traps. 4. In the Community fields, enter the community. 5. Click the Set button to save the configuration in snmpd.conf. 6. In the Patrol Console, on the Desktop tab, right-click the ServerInfo item in the IPS_Server subtree of one storage server and select KM Commands --> alarmTable --> alarmEntry. Set the status value to enable(1) for on or disable(0) for off.

View traps
1. In the Patrol Console, on the Desktop tab, right-click the IPS_Trap_Receiver --> SNMPTrap_Receiver of the Patrol Console machine and select KM Commands -> Start Trap Receiver to let the Patrol Console machine start receiving traps. 2. After turning the trap receiver on, you can double-click the SNMP_Traps icon in the SNMPTrap_Receiver subtree of the Patrol Console machine to get the results of the traps that have been received.

View statistics in Patrol


In addition to monitoring CDP/NSS components and receiving alerts, you can view storage server statistics in Patrol. There are two ways to do this:
IPStor icon
1. Highlight a storage server and fully expand the IPS_ProcessMonitor subtree and the IPS_Server subtree for that storage server.
2. Select the appropriate icon option. For a server, you can view:
Process status (Authentication Process, Communication Process, Logger Process, Self Monitor Process, SNMPD Process, etc.). To monitor more processes, switch to the KM tab on the Patrol Console, right-click a process under Knowledge Module --> Application Classes --> IPS_ProcessMonitor --> Global --> Parameters, and click the Properties item on the menu. Check the Active option to have the specified process monitored. Afterwards, switch back to the Desktop tab; the specified process is now visible in the IPS_ProcessMonitor subtree.
Server status (ipsLaAvailLoad, ipsMemAvailSwap and ipsMemAvaiReal).
These reports are provided by CDP/NSS as a convenient way to view statistical information without having to go through the MIB browser.

MIB browser

1. Highlight a storage server, right-click ServerInfo in the IPS_Server subtree, and select KM commands. The KM commands menu contains several CDP/NSS integrated MIB items.
2. Click one of the MIB items to retrieve the related information about the storage server.

Advanced SNMP topics


The following topics apply to all SNMP managers.

The snmpd.conf file


The snmpd.conf file is located in the /usr/local/ipstor/etc/snmp directory of the storage server and contains SNMP configuration information, including the CDP/NSS community name and the network over which you are permitted to use SNMP (the default is the network where your storage server is located). If your SNMP manager resides on a different network, you will have to modify the snmpd.conf file before you can implement SNMP support through your SNMP manager. In addition, you can modify this file if you want to limit SNMP communication to a specific subnet or change the community name. The default read-write community is falcon. This is the only community you should change.

Use an SNMP configuration for multiple storage servers


To re-use your SNMP configuration for multiple storage servers, go to /usr/local/ipstor/etc/snmp and copy the following files to the same directory on each storage server:
snmpd.conf - contains trapSinkSettings
IPStorSNMP.conf - contains trapSettings

Note: In order for the configuration to take effect, you must restart the SNMPD module on each storage server to which you copied these files.
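For example, a minimal sketch of pushing one server's SNMP configuration to a second server and restarting the module might look like the following. The host names are placeholders, and the 'start snmpd' invocation and the location of the ipstor script are assumptions (only './ipstor stop snmpd' is shown earlier in this chapter); verify the exact commands on your system.

# Copy the SNMP configuration files from the current server to another storage server
scp /usr/local/ipstor/etc/snmp/snmpd.conf /usr/local/ipstor/etc/snmp/IPStorSNMP.conf root@<other-server>:/usr/local/ipstor/etc/snmp/
# Restart the SNMPD module on the other server so the copied settings take effect
ssh root@<other-server> 'cd /usr/local/ipstor/bin && ./ipstor stop snmpd && ./ipstor start snmpd'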


IPSTOR-MIB tree
Once you have loaded the IPSTOR-MIB file, MIB Browser parses it into a tree hierarchy structure. The table below describes many of the tables and fields. Refer to the IPSTOR-MIB.txt file that is in your \\OpenView\snmp_mibs directory for a complete list.
Table / Field descriptions

Server Information
serverName
The hostname on which the storage server is running.

loginMachineName
Identifies which storage server you are logged into.

serverVersion
The storage server version and build number.

osVersion
The operating system version of the host on which the storage server is running.

kernelVersion
The kernel version of the host on which the storage server is running.

processorTable
A table containing the information of all processors in the host on which the storage server is running. processorInfo: The specification of a processor type and power.

memory
The amount of memory of the host on which the storage server is running.

swap
The swap space of the host on which the storage server is running.

netInterfaceTable
A table containing the information of all network interfaces in the host on which the storage server is running. netInterfaceInfo: The specification containing the MAC, IP address, and MTU of a network interface.

failoverInformationTable
A table containing the failover information currently configured on the storage server. foName: The property of a failover configuration. foValue: The setting value of a failover configuration. foConfType: The Configuration Type of a failover configuration. foPartner: The Failover Partner of a failover configuration. foPrimaryIPRsource: The Primary Server IP Resource of a failover configuration. foSecondaryIPResource: The Secondary Server IP Resource of a failover configuration. foCheckInterval: The Self Check Interval of a failover configuration. foHearbeatInterval: The Heartbeat Interval of a failover configuration. foRecoverySetting: The Recovery Setting of a failover configuration. foState: The Failover State of a failover configuration. foPrimaryCrossLinkIP: The Primary Server CrossLink IP of a failover configuration. foSecondaryCrossLinkIP: The Secondary Server CrossLink IP of a failover configuration.



failoverInformationTable (continued)
foSuspended: The Suspended status of a failover configuration. foPowerControl: The Power Control of a failover configuration. fofcWWPN: The Fibre Channel WWPN of a failover configuration.

serverOption
nasOption: Indicates whether the NAS option is enabled or disabled on the storage server. fibreChannelOption: Indicates whether the Fibre Channel option is enabled or disabled on the storage server. replicationOption: Indicates whether the Replication option is enabled or disabled on the storage server. syncMirroringOption: Indicates whether the synchronized Mirroring option is enabled or disabled on the storage server. timemarkOption: Indicates whether the TimeMark option is enabled or disabled on the storage server. zeroimpactOption: Indicates whether the Zero Impact Backup option is enabled or disabled on the storage server.

MTCPVersion
The MTCP version that the storage server uses.

performanceTable
A table containing the performance information of the host on which the storage server is running. performanceMirrorSyncTh: The Mirror Synchronization Throttle of the performance table. performanceSyncMirrorInterval: The Synchronize out-of-sync mirrors Interval of the performance table. performanceSyncMirrorRetry: The Synchronize out-of-sync mirrors retry times of the performance table. performanceSyncMirrorUpnum: The Synchronize out-of-sync mirrors up numbers at each interval of the performance table. performanceInitialMirrorSync: The option of starting an initial synchronization when a mirror is added, from the performance table. performanceIncludeReplicaMirror: The option of including the replica mirror in the automatic synchronization process, from the performance table. performanceReplicationMicroScan: Indicates whether the Replication MicroScan option is enabled or disabled on the storage server.

serverRole
The storage server role.

smioption
The storage server SMI-S option.

ServerIPaliasTable
A table containing the IP alias information of the host on which the storage server is running. ServerIPAliasIP: The storage server IP alias.

PhysicalResources

numOfAdapters
The number of physical adapters configured on the storage server.

numOfDevices
The number of physical devices configured on the storage server.



scsiAdapterTable
A table containing the information of all the installed SCSI adapters of the storage server. adapterNumber: The SCSI adapter number. adapterInfo: The model name of the SCSI adapter.

scsiDeviceTable
A table containing all the SCSI devices of the storage server. deviceNo: The sequential digit number used as an index key of the device table. deviceType: Represents the access type of the device attached to the storage server. vendorID: The product vendor ID. produtcID: The product model name. firmwareRev: The firmware version of the device. adapterNo: The configured SCSI adapter number. channelNo: The configured SCSI channel number. scsiID: The configured SCSI ID. lun: The configured SCSI LUN number. totalSectors: The number of sectors or blocks of the device. sectorSize: The size in bytes of each sector or block. totalSize: The size of the device represented in megabytes. configStatus: Represents the attaching status of the device. totalSizeQuantity: The quantity size of the device. totalSizeUnit: The size unit of the device. 0 = KB. 1 = MB. 2 = GB. 3 = TB. totalSectors64: The number of sectors or blocks of the device. totalSize64: The size of the device represented in megabytes.

StoragePoolTable
A table containing the Storage Pool information of the storage server. PoolName: The name of the Storage Pool. PoolID: The Pool ID of the Storage Pool. PoolType: The Pool Type of the Storage Pool. DeviceCount: The device count in the Storage Pool. PoolCount: The Storage Pool count. PoolTotalSize: The total size of the Storage Pool. PoolUsedSize: The amount of Storage Pool space used. PoolAvailableSize: The available size of the Storage Pool. PoolTotalSizeQuantity: The total size quantity of the Storage Pool. PoolTotalSizeUnit: The total size unit of the Storage Pool. 0 = KB. 1 = MB. 2 = GB. 3 = TB. PoolUsedSizeQuantity: The used size quantity of the Storage Pool. PoolUsedSizeUnit: The used size unit of the Storage Pool. 0 = KB. 1 = MB. 2 = GB. 3 = TB. PoolAvailableSizeQuantity: The available size quantity of the Storage Pool. PoolAvailableSizeUnit: The available size unit of the Storage Pool. 0 = KB. 1 = MB. 2 = GB. 3 = TB. PoolTatalSize64: The total size of the Storage Pool. PoolUsedSize64: The amount of Storage Pool space used. PoolAvailableSize64: The available size of the Storage Pool.


LogicalResources


numOfLogicalResources
The number of logical resources, including the SAN, NAS, and Replica devices, available on the storage server.

SnapshotReservedArea
numOfSnapshotReserved: The number of shareable snapshot reserved areas. snapshotReservedTable: Table containing the snapshot reserved areas information. ssrName: The name of the snapshot reserved area. ssrDeviceName: The physical device name of the snapshot reserved area. ssrSCSIAddress: The SCSI address of the physical device on which the snapshot reserved area was created. ssrFirstSector: The first sector of the snapshot reserved area. ssrLastSector: The last sector of the snapshot reserved area. ssrTotalSectors: The number of sectors of the snapshot reserved area. ssrSize: The resource size of the snapshot reserved area, represented in megabytes. ssrSizeQuantity: The resource size quantity of the snapshot reserved area. ssrSizeUnit: The resource size unit of the snapshot reserved area. 0 = KB. 1 = MB. 2 = GB. 3 = TB. ssrFirstSector64: The first sector of the snapshot reserved area. ssrLastSector64: The last sector of the snapshot reserved area. ssrTotalSector64: The number of sectors of the snapshot reserved area. ssrSize64: The resource size of the snapshot reserved area, represented in megabytes.


Logical Resources --> SANResources


numOfSANResources SANResourceTable The amount of SAN resources are available by the storage server. A table containing the SAN resources information. sanResourceID : The SAN resource ID assigned by the storage server. sanResourceName : The SAN resource name created by the user. srAllocationType : Represents the resource type when user allocating the SAN device. srTotalSectors : The amount of sectors allocated by the SAN resource. srTotalSize : The amount of device size which is representing with megabyte unit of the SAN resource. srConfigStatus: Represents the attaching status of the SAN resource. srMirrorSyncStatus: Represents the mirror synchronization status of the SAN resource. srReplicaDevice : Represents the target replica server and device as the format <hostname of target>:<virtual device id>, if the replication option is enabled of the SAN resource srReplicatingSchedule: Represents the current status of the replicating schedule(On-schedule, Suspended, or N/A) set for the SAN resource. srSnapshotCopyStatus : The snapshot copy status of the SAN resource. srPhysicalAllocLayoutTable : Table containing the physical layout information for the SAN resources. srpaSanResourceName : The SAN resource name created by the user. srpaSanResourceID : The SAN resource ID assigned by the storage server. srpaName : The physical device name. srpaType: Represents the type(Primary, or Mirror) of the physical layout. srpaAdapterNo : The SCSI adapter number of the physical device. srpaChannelNo : The SCSI channel number of the physical device. srpaScsiID : The SCSI ID of the physical device. srpaLun : The SCSI LUN number of the physical device. srpaFirstSector : The first sector of the physical device which is allocated by the SAN resource. srpaLastSector : The last sector of the physical device which is allocated by the SAN resource. srpaSize : The amount of the allocated size which is representing with megabyte unit within a physical device. srpaSizeQuantity : The amount of the allocated size quantity within a physical device. srpaSizeUnit : The amount of the allocated size unit within a physical device. The size unit of the device. 0 = KB. 1 = MB. 2 = GB. 3 = TB. srpaFirstSector64 : The first sector of the physical device which is allocated by the SAN resource. srpaLastSector64 : The last sector of the physical device which is allocated by the SAN resource.

srpaSize64 : The allocated size within the physical device, in megabytes (64-bit).
srClientInfoTable : Table containing the SAN client information.
srClientNo : The SAN client ID assigned by the storage server.
srcName : The SAN client name assigned by the storage server.
srcSANResourceID : The SAN resource ID assigned by the storage server.
srcSANResourceName : The SAN resource name created by the user.
srcAdapterNo : The adapter number of the SAN client.
srcChannelNo : The channel number of the SAN client.
srcScsiID : The SCSI ID of the SAN client.
srcLun : The SCSI LUN number of the SAN client.
srcAccess : The SAN resource access mode assigned to the SAN client.
srcConnAccess : Identifies the connection and access status of the SAN client with a resource.
srFCClientInfoTable : Table containing the Fibre Channel client information.
srFCClientNo : The Fibre Channel client ID assigned by the storage server.
srFCName : The Fibre Channel client name assigned by the storage server.
srFCSANResourceID : The SAN resource ID assigned by the storage server.
srFCSANResourceName : The SAN resource name created by the user.
srFCInitatorWWPN : The world wide port name (WWPN) of the Fibre Channel client's initiator HBA.
srFCTargetWWPN : The world wide port name (WWPN) of the Fibre Channel client's target HBA.
srFCLun : The SCSI LUN number of the Fibre Channel client.
srFCAccess : The SAN resource access mode assigned to the Fibre Channel client.
srFCConnAccess : Identifies the connection and access status of the Fibre Channel client with a resource.
srSnapShotTable : Table containing the snapshot resources created for the SAN resource.
srSnapShotResourceID : The SAN resource ID assigned by the storage server.
srSnapShotResourceName : The SAN resource name created by the user.
srSnapShotOption : Indicates whether the snapshot option is enabled or disabled for the SAN resource.
srSnapShotSize : The size allocated when the SAN resource was first created.
srSnapShotThreshold : The threshold setting, as a percentage (%), for the SAN resource.
srSnapShotReachTh : The policy for expanding the resource automatically or manually when the threshold is reached.
srSnapShotIncSize : The incremental size added each time the resource runs out of space. This is meaningful only when the resource expands automatically.
srSnapShotMaxSize : The maximum resource size, in megabytes, that can be allocated.

srSnapShotUsedSize64 : The resource size that has been used, in kilobytes (64-bit).
srSnapShotFreeSize64 : The free resource size, in megabytes, before the threshold is reached (64-bit).
srSnapShotReclaimPolicy : Indicates whether the snapshot Reclaim option is enabled or disabled for the SAN resource.
srSnapShotReclaimTime : The initial time when the snapshot Reclaim option was enabled for the SAN resource.
srSnapShotReclaimInterval : The schedule interval at which snapshot Reclaim starts for the SAN resource.
srSnpaShotReclaimWaterMark : The threshold for the minimum amount of space that can be reclaimed per TimeMark of the SAN resource.
srSnapShotReclaimMaxTime : The maximum time for the reclaim process of the SAN resource.
srSnapShotShrinkPolicy : Indicates whether the snapshot Shrink option is enabled or disabled for the SAN resource.
srSnapShotShrinkThresHold : The minimum disk space at which the snapshot resource of the SAN resource is shrunk.
srSnapShotShrinkMinSize : The minimum size to which the snapshot resource can shrink.
srSnapShotShrinkMinSizeQuantity : The minimum size quantity to which the snapshot resource can shrink.
srSnapShotShrinkMinSizeUnit : The minimum size unit to which the snapshot resource can shrink. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srSnapShotShrinkMinSize64 : The minimum size to which the snapshot resource can shrink (64-bit).
srSnapShotResourceStatus : The snapshot resource status of the SAN resource.
srTimeMarkTable : Table containing the TimeMark resources created for the SAN resource.
srTimeMarkResourceID : The SAN resource ID assigned by the storage server.
srTimeMarkResourceName : The SAN resource name created by the user.
srTimeMarkOption : Indicates whether the TimeMark option is enabled or disabled for the SAN resource.
srTimeMarkCounts : The maximum number of TimeMarks that can be created for the SAN resource.
srTimeMarkSchedule : The time interval at which a new TimeMark is created.
srTimeMarkLastTimeStamp : The timestamp of the most recently created TimeMark.
srTimeMarkSnapshotImage : The time of day at which a snapshot image is created automatically.
srTimeMarkSnapshotNotificationOption : The option that triggers the snapshot notification schedule.
srTimeMarkReplicationOption : The replication option applied after the TimeMark is taken.

srBackupTable : Table containing the backup resources created for the SAN resource.
srBackupResourceID : The SAN resource ID assigned by the storage server.
srBackupResourceName : The SAN resource name created by the user.
srBackupOption : Indicates whether the backup option is enabled or disabled for the SAN resource.
srBackupWindow : The time of day at which a backup session can be opened.
srBackupSessionLen : The time interval allowed for each backup session.
srBackupRelativeTime : The time interval to wait before closing a backup session that is inactive.
srBackupWaitTime : The time interval, in minutes, to wait before closing the backup session after completion.
srBackupSelectCriteria : The snapshot image selection criteria for the backup session, which can be new or latest. New means a new snapshot image is always created for backup; latest means the most recently created snapshot image is used for backup.
srBackupRawDeviceName : The SAN backup resource raw device name created by the user.
srReplicationTable : Table containing the replication resources created for the SAN resource.
srReplicationResourceID : The SAN resource ID assigned by the storage server.
srReplicationResourceName : The SAN resource name created by the user.
srReplicationOption : Indicates whether the replication option is enabled or disabled for the SAN resource.
srReplicaServer : The target replica server name.
srReplicaDeviceID : The target replica device ID.
srReplicaSchedule : The current status of the replication schedule (On-schedule, Suspended, or N/A) set for the SAN resource.
srReplicaWatermark : The watermark that triggers a new replication automatically.
srReplicaWatermarkRetry : The retry interval, in minutes, if the replication failed.
srReplicaTime : The time of day at which a new replication is created each day.
srReplicaInterval : The time interval at which a new replication is created.
srReplicationContinuousMode : Indicates whether the Continuous Mode of replication is enabled or disabled.
srReplicationCreatePrimaryTimeMark : Allows you to create the primary TimeMark when a replica TimeMark is created.
srReplicaSyncTimeMark : Allows you to synchronize the replica TimeMark when a primary TimeMark is created.
srReplicationProtocol : The protocol that replication uses.
srReplicationCompression : Indicates whether the Compression option is enabled or disabled for replication.

srReplicationEncryption : Indicates whether the Encryption option is enabled or disabled for replication.
srReplicationMicroScan : Indicates whether the MicroScan option is enabled or disabled for replication.
srReplicationSyncPriority : The priority setting used when replication synchronizes the SAN resource.
srReplicationStatus : The replication status of the SAN resource.
srReplicationMode : The replication mode of the SAN resource.
srReplicationContinuousResourceID : The continuous replication resource ID of the SAN resource.
srReplicationContinuousResourceUsage : The continuous replication resource usage of the SAN resource.
srReplicationDeltaData : The accumulated delta data of replication for the SAN resource.
srReplicationUseExistTM : When Continuous Mode is disabled, the option to use an existing TimeMark for the replication.
srReplicationPreserveTm : When Continuous Mode is disabled, the option to preserve the TimeMark of the replication.
srReplicaLastSuccessfulSyncTime : The last successful synchronization time of the replication.
srReplicaAverageThroughput : The average throughput (MB/s) of the replication.
srReplicaAverageThroughputQuantity : The average throughput quantity of the replication.
srReplicaAverageThroughputUnit : The average throughput unit of the replication. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srCacheTable : Table containing the cache resource created for the SAN device.
srCacheResourceID : The SAN resource ID assigned by the storage server.
srCacheResourceName : The SAN resource name created by the user.
srCacheOption : Indicates whether the cache option is enabled or disabled for the SAN resource.
srCacheSuspend : Indicates whether the cache resource is currently suspended.
srCacheTotalSize : The size allocated when the cache resource was created.
srCacheFreeSize : The free resource size, in megabytes, before the maximum resource size is reached.
srCacheUsage : The percentage of the resource size that is used.
srCacheThresHold : The amount of data that must be in the cache before cache flushing begins.
srCacheFlushTime : The number of milliseconds before the cache begins to flush when below the data threshold level.
srCacheFlushCommand : The number of outstanding commands sent at one time during the flush process.
srCacheSkipWriteCommand : This option allows the system to skip multiple pending write commands targeted for the same block.

srCacheFlushSpeed : The flush speed used during the flush process.
srCacheTotalSizeQuantity : The allocated size quantity when the cache resource was created.
srCacheTotalSizeUnit : The allocated size unit when the cache resource was created. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srCacheFreeSizeQuantity : The free resource size quantity before the maximum resource size is reached.
srCacheFreeSizeUnit : The free resource size unit before the maximum resource size is reached. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srCacheOwnResourceID : The cache resource ID assigned by the storage server.
srCacheTotalSize64 : The size allocated when the cache resource was created (64-bit).
srCacheFreeSize64 : The free resource size, in megabytes, before the maximum resource size is reached (64-bit).
srCacheStatus : The current SafeCache device status of the SAN resource.
srWriteCacheproperty : Indicates whether write cache is enabled or disabled for the SAN resource.
srMirrorTable : Table containing the mirror property created for the SAN device.
srMirrorResourceID : The SAN resource ID that has the mirror property enabled.
srMirrorType : The mirror type when a SAN resource has the mirror property enabled.
srMirrorSyncPriority : The mirror synchronization priority when a SAN resource has the mirror property enabled.
srMirrorSuspended : Whether the mirror is suspended.
srMirrorThrottle : The mirror throttle value for the SAN resource.
srMirrorHealthMonitoringOption : Indicates whether the mirror health monitoring option is enabled or disabled.
srMirrorHealthCheckInterval : The interval at which mirror health status is checked and reported.
srMirrorMaxLagTime : The maximum acceptable lag time for mirror I/O.
srMirrorSuspendThPercent : Suspends mirroring when failures reach this percentage of the failure conditions.
srMirrorSuspendThIOnum : Suspends mirroring when the number of outstanding I/Os is greater than or equal to this threshold.
srMirrorRetryPolicy : Indicates whether the mirror synchronization retry policy is enabled.
srMirrorRetryInterval : The interval at which mirror synchronization is retried.
srMirrorRetryActivity : Retries mirror synchronization when I/O activity is at or below the threshold.
srMirrorRetryTimes : The maximum number of mirror synchronization retries.

srMirrorSychronizationStatus : The mirror synchronization status of the SAN resource.
srMirrorAlterReadMirror : The alternative read mirror option of the SAN resource.
srMirrorAverageThroughput : The average throughput (MB/s) of the mirror synchronization operation.
srMirrorAverageThroughputQuantity : The average throughput quantity of the mirror synchronization operation.
srMirrorAverageThroughtputUnit : The average throughput unit of the mirror synchronization operation. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srThinProvisionTable : Table containing the thin provisioning information of the SAN device.
srThinProvisionOption : Indicates whether the Thin Provisioning option is enabled or disabled for the resource.
srThinProvisionCurrAllocSize : The current allocated size of the thin-provisioned resource on the storage server.
srThinProvisionUsageSize : The current usage size of the thin-provisioned resource.
srThinProvisionUsagePercentage : The current usage percentage of the thin-provisioned resource.
srThinProvisionCurrAllocSizeQuantity : The current allocated size quantity of the thin-provisioned resource on the storage server.
srThinProvisionCurrAllocSizeUnit : The current allocated size unit of the thin-provisioned resource on the storage server. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srThinProvisionUsageSizeQuantity : The current usage size quantity of the thin-provisioned resource.
srThinProvisionUsageSizeUnit : The current usage size unit of the thin-provisioned resource. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srThinProvisionCurrAllocSize64 : The current allocated size of the thin-provisioned resource on the storage server (64-bit).
srThinProvisionUsageSize64 : The current usage size of the thin-provisioned resource (64-bit).
srCDPJournalTable : Table containing the CDP Journal resources created for the SAN resource.
srCDPJournalResourceID : The CDP Journal ID assigned by the storage server.
srCDPJournalSANResourceID : The CDP Journal SAN resource ID assigned by the storage server.
srCDPJournalOption : Indicates whether the CDP Journal option is enabled or disabled for the SAN resource.
srCDPJournalTotalSize : The CDP Journal total size of the SAN resource.
srCDPJournalStatus : The current CDP Journal status of the SAN resource.
srCDPJournalPerformanceLevel : The performance level setting for the CDP Journal of the SAN resource.
srCDPJournalTotalSizeQuantity : The CDP Journal total size quantity of the SAN resource.

srCDPJournalTotalSizeUnit : The CDP Journal total size unit of the SAN resource. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srCDPJournalTotalSize64 : The CDP Journal total size of the SAN resource (64-bit).
srCDPJournalAvalibleTimerange : The CDP Journal available time range of the SAN resource.
srCDPJournalUsageSize : The CDP Journal usage size (MB) of the SAN resource.
srCDPJournalUsagePercentage : The CDP Journal usage percentage of the SAN resource.
srCDPJournalUsageQuantity : The CDP Journal usage size quantity of the SAN resource.
srCDPJournalUsageUnit : The CDP Journal usage size unit of the SAN resource. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srCDPJournalUsageSize64 : The CDP Journal usage size of the SAN resource (64-bit).
srNearLineMirrorTable : Table containing the near-line mirror property of the SAN device.
srNearLineMirrorRemoteServerName : The remote server name of the near-line mirror resource set on the storage server.
srNearLineMirrorRemoteServerAlias : The remote server alias of the near-line mirror resource set on the storage server.
srNearLineMirrorRemoteID : The remote resource ID of the near-line mirror resource set on the storage server.
srNearLineMirrorRemoteGUID : The remote resource GUID of the near-line mirror resource set on the storage server.
srNearLineMirrorRemoteSN : The remote resource serial number of the near-line mirror resource set on the storage server.
srTotalSizeQuantity : The device size quantity of the SAN resource.
srTotalSizeUnit : The device size unit of the SAN resource.
srTotalSectors64 : The number of sectors allocated to the SAN resource (64-bit).
srTotalSize64 : The device size of the SAN resource, in megabytes (64-bit).
srISCSIClientInfoTable : Table containing the iSCSI client information.
srISCSIClientNO : The iSCSI client ID assigned by the storage server.
srISCSIName : The iSCSI client name assigned by the storage server.
srISCSISANResourceID : The SAN resource ID assigned by the storage server.
srISCSISANResourceName : The SAN resource name created by the user.
srISCSIAccessType : The resource access type of the iSCSI client.
srISCSIConnectAccess : Identifies the connection and access status of the iSCSI client with a resource.

srPhysicalTotalAllocLayoutTable : Table containing the total physical layout information for the SAN resources.
srpaAllocSANResourceName : The SAN resource name created by the user.
srpaAllocName : The physical device name.
srpaAllocType : The type (Primary or Mirror) of the physical layout.
srpaAllocAdapterNo : The SCSI adapter number of the physical device.
srpaAllocChannelNo : The SCSI channel number of the physical device.
srpaAllocScsiID : The SCSI ID of the physical device.
srpaAllocLun : The SCSI LUN number of the physical device.
srpaAllocFirstSector : The first sector of the physical device allocated to the SAN resource.
srpaAllocLastSector : The last sector of the physical device allocated to the SAN resource.
srpaAllocFirstSector64 : The first sector of the physical device allocated to the SAN resource (64-bit).
srpaAllocLastSector64 : The last sector of the physical device allocated to the SAN resource (64-bit).
srpaAllocSize : The allocated size within the physical device, in megabytes.
srpaAllocSizeQuantity : The allocated size quantity within the physical device.
srpaAllocSizeUnit : The allocated size unit within the physical device. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srpaAllocSize64 : The allocated size within the physical device, in megabytes (64-bit).
srHotZonePrefetchInfoTable : Table containing the HotZone prefetch information.
srHotZonePrefetchSANResourceID : The SAN resource ID assigned by the storage server.
srHotZonePrefetchMaximumChains : The maximum number of sequential read chains to detect.
srHotZonePrefetchMaximumReadAhead : The maximum size to read ahead, in KB.
srHotZonePrefetchReadAhead : The size of the read command issued when reading ahead, in KB.
srHotZonePrefetchChainTimeout : The time before the chain is removed and the read-ahead buffers are freed.
srHotZoneReadCacheInfoTable : Table containing the HotZone read cache information.
srHotZoneCacheResourceID : The resource ID assigned by the storage server.
srHotZoneCacheSANResourceID : The SAN resource ID assigned by the storage server.

srHotZoneCacheTotalSize : The size of the HotZone read cache resource, in megabytes.
srHotZoneCacheStatus : The current status of the HotZone read cache of the SAN resource.
srHotZoneCacheSuspended : The suspended status of the current HotZone read cache of the SAN resource.
srHotZoneCacheAccesType : The zone's access type policy of the SAN resource.
srHotZoneCacheAccessIntensity : The access intensity used to determine how the zone is accessed.
srHotZoneCacheMinimumStayTime : The minimum time a zone stays in the HotZone before it is swapped out.
srHotZoneCacheEachZoneSize : The size setting of each zone.
srHotZoneCacheTotalZones : The total number of zones allocated for the SAN resource.
srHotZoneCacheUsedZones : The number of zones currently used by the SAN resource.
srHotZoneCacheHitRatio : The hit ratio of the current HotZone read cache of the SAN resource.
srHotZoneCacheTotalSizeQuantity : The size quantity of the HotZone read cache resource.
srHotZoneCacheTotalSizeUnit : The size unit of the HotZone read cache resource. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
srHotZoneCacheTotalSize64 : The size of the HotZone read cache resource, in megabytes (64-bit).
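The SANResourceTable columns above can be retrieved the same way. For example, to read the number of SAN resources and the per-resource replication status (again, the module name, community string, and host name are assumptions; the .0 instance suffix is the standard form for a scalar object):

# Scalar object, then one column of the SAN resource table
snmpget  -v2c -c public -m +FALCONSTOR-MIB cdpserver1 numOfSANResources.0
snmpwalk -v2c -c public -m +FALCONSTOR-MIB cdpserver1 srReplicationStatus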

Logical Resources --> replicaResources
numOfReplica : The number of replica resources created by the storage server.

ReplicaResourceTable : A table containing the replica resources.
rrVirtualID : The resource ID assigned by the storage server.
rrVirtualName : The resource name created by the user.
rrAllocationType : The resource type chosen when the user allocated the resource.
rrSectors : The number of sectors allocated to the resource.
rrTotalSize : The size of the resource, in megabytes.
rrConfigurationStatus : The attachment status of the resource.
rrGUID : The GUID string of the replica resource.
rrPrimaryVirtualID : The source replication server and device, in the format <hostname of source>:<virtual device id>, if the replication option is enabled for the resource.
rrReplicationStatus : The current status (Replication failed, New, Idle, or Merging) of the replication schedule.
rrLastStartTime : The latest timestamp of the replication.
rrMirrorSyncStatus : The mirror synchronization status of the resource.
rrWriteCache : Indicates whether the write cache option is enabled or disabled for the resource.
rrThinProvisionOption : Indicates whether the Thin Provisioning option is enabled or disabled for the resource.
rrThinProvisionCurrAllocSize : The current allocated size of the resource with Thin Provisioning enabled.
rrThinProvisionUsageSize : The current usage size of the resource with Thin Provisioning enabled.
rrThinProvisionUsagePercentage : The current usage percentage of the resource with Thin Provisioning enabled.
rrTotalSizeQuantity : The device size quantity of the resource.
rrTotalSizeUnit : The device size unit of the resource. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
rrThinProvisionCurrAllocSizeQuantity : The current allocated size quantity of the resource with Thin Provisioning enabled.
rrThinProvisionCurrAllocSizeUnit : The current allocated size unit of the resource with Thin Provisioning enabled. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
rrThinProvisionUsageSizeQuantity : The current usage size quantity of the resource with Thin Provisioning enabled.
rrThinProvisionUsageSizeUnit : The current usage size unit of the resource with Thin Provisioning enabled. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
rrSectors64 : The number of sectors allocated to the resource (64-bit).
rrTotalSize64 : The size of the resource, in megabytes (64-bit).

rrThinProvisionCurrAllocSize64 : The current allocated size of the resource with Thin Provisioning enabled (64-bit).
rrThinProvisionUsageSize64 : The current usage size of the resource with Thin Provisioning enabled (64-bit).
rrLastSuccessSyncTime : The last successful synchronization timestamp of the replication.
rrAverageThroughput : The average throughput (MB/s) of the replication.
rrAverageThroughputQuantity : The average throughput quantity of the replication.
rrAverageThroughputUnit : The average throughput unit of the replication. 0 = KB, 1 = MB, 2 = GB, 3 = TB.

ReplicaPhyAllocLayoutTable : A table containing the physical layout information for the replica resources.
rrpaVirtualID : The replica resource ID assigned by the storage server.
rrpaVirtualName : The replica resource name created by the user.
rrpaName : The physical device name.
rrpaType : The type (Primary or Mirror) of the physical layout.
rrpaSCSIAddress : The SCSI address of the replica resource, in <Adapter:Channel:SCSI:LUN> format.
rrpaFirstSector : The first sector of the physical device allocated to the replica resource.
rrpaLastSector : The last sector of the physical device allocated to the replica resource.
rrpaSize : The allocated size within the physical device, in megabytes.
rrpaSizeQuantity : The allocated size quantity within the physical device.
rrpaSizeUnit : The allocated size unit within the physical device. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
rrpaFirstSector64 : The first sector of the physical device allocated to the replica resource (64-bit).
rrpaLastSector64 : The last sector of the physical device allocated to the replica resource (64-bit).
rrpaSize64 : The allocated size within the physical device, in megabytes (64-bit).

Logical Resources --> Snapshot Group Resources

numOfGroup : The number of snapshot groups created by the storage server.

snapshotgroupInfoTable
snapshotgroupName : The user-created snapshot group resource name.
snapshotgroupType : The property of the snapshot group, which can be one of the following types: timemark, backup, replication, timemark + backup, timemark + replication, backup + replication, or timemark + backup + replication.
snapshotgroupTimeMarkInfoTable : Table containing the TimeMark properties of snapshot groups.
snapshotgroupTimeMarkGroupID : The snapshot group resource ID assigned by the storage server.
snapshotgroupTimeMarkOption : Indicates whether the TimeMark option is enabled or disabled for the snapshot group resource.
snapshotgroupTimeMarkCounts : The maximum number of TimeMarks that can be created for the snapshot group resource.
snapshotgroupTimeMarkSchedule : The time interval at which a new TimeMark is created.
snapshotgroupTimeMarkSnapshotImage : The time of day at which a snapshot image is created automatically.
snapshotgroupTimeMarkSnapshotNotificationOption : The option that triggers the snapshot notification schedule.
snapshotgroupTimeMarkReplicationOption : The replication option applied after the TimeMark is taken.
snapshotgroupBackupInfoTable : Table containing the backup properties of snapshot groups.
snapshotgroupBackupGroupID : The snapshot group resource ID assigned by the storage server.
snapshotgroupBackupOption : Indicates whether the backup option is enabled or disabled for the snapshot group resource.
snapshotgroupBackupWindow : The time of day at which a backup session can be opened.
snapshotgroupBackupSessionLen : The time interval allowed for each backup session.
snapshotgroupBackupRelativeTime : The time interval to wait before closing a backup session that is inactive.
snapshotgroupBackupWaitTime : The time interval, in minutes, to wait before closing the backup session after completion.
snapshotgroupBackupSelectCriteria : The snapshot image selection criteria for the backup session, which can be new or latest. New means a new snapshot image is always created for backup; latest means the most recently created snapshot image is used for backup.
snapshotgroupReplicationInfoTable : Table containing the replication properties of snapshot groups.
snapshotgroupReplicationGroupID : The snapshot group resource ID assigned by the storage server.
snapshotgroupReplicationOption : Indicates whether the replication option is enabled or disabled for the snapshot group resource.

snapshotgroupReplicaServer : The target replica server name.
snapshotgroupReplicaGroupID : The target replica group ID.
snapshotgroupReplicaWatermark : The watermark that triggers a new replication automatically.
snapshotgroupReplicaTime : The time of day at which a new replication is created each day.
snapshotgroupReplicaInterval : The time interval at which a new replication is created.
snapshotgroupReplicawatermarkRetry : The retry interval, in minutes, if the replication failed.
snapshotgroupReplicaContinuousMode : Indicates whether the Continuous Mode of replication is enabled or disabled.
snapshotgroupReplicaCreatePrimaryTimeMark : Allows you to create the primary TimeMark when a replica TimeMark is created.
snapshotgroupReplicaSyncTimeMark : Allows you to synchronize the replica TimeMark when a primary TimeMark is created.
snapshotgroupReplicaProtocol : The protocol that replication uses.
snapshotgroupReplicaCompression : Indicates whether the Compression option is enabled or disabled for replication.
snapshotgroupReplicaEncryption : Indicates whether the Encryption option is enabled or disabled for replication.
snapshotgroupReplicaMicroScan : Indicates whether the MicroScan option is enabled or disabled for replication.
snapshotgroupReplicaSyncPriority : The priority setting used when replication synchronizes the SAN resource.
snapshotgroupReplicaMode : The replication mode of the SAN resource.
snapshotgroupReplicaUseExistTM : When Continuous Mode is disabled, the option to use an existing TimeMark for the replication.
snapshotgroupReplicaPreserveTM : When Continuous Mode is disabled, the option to preserve the TimeMark of the replication.

snapshotgroupCDPInfoTable : A table containing the CDP properties of snapshot groups.
snapshotgroupCDPInfoGroupID : The snapshot group resource ID assigned by the storage server.
snapshotgroupCDPInfoOption : Indicates whether the snapshot group CDP Journal option is enabled or disabled on the storage server.
snapshotgroupCDPInfoTotalSize : The total size of the snapshot group CDP Journal on the storage server.
snapshotgroupCDPInfoStatus : The status of the snapshot group CDP Journal on the storage server.
snapshotgroupCDPInfoPerformanceLevel : The performance level setting of the snapshot group CDP Journal on the storage server.
snapshotgroupCDPInfoAvailableTimerange : The available time range of the snapshot group CDP Journal on the storage server.
snapshotgroupCDPInfoUsageSize : The usage size (MB) of the snapshot group CDP Journal on the storage server.
snapshotgroupCDPInfoUsagePercent : The usage percentage of the snapshot group CDP Journal on the storage server.
snapshotgroupCDPInfoTotalSizeQuantity : The total size quantity of the snapshot group CDP Journal on the storage server.
snapshotgroupCDPInfoTotalSizeUnit : The total size unit of the snapshot group CDP Journal on the storage server. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
snapshotgroupCDPInfoTotalSize64 : The total size (64-bit) of the snapshot group CDP Journal on the storage server.
snapshotgroupCDPInfoUsageSizeQuantity : The usage size quantity of the snapshot group CDP Journal on the storage server.
snapshotgroupCDPInfoUsageSizeUnit : The usage size unit of the snapshot group CDP Journal on the storage server. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
snapshotgroupCDPInfoUsageSize64 : The usage size (64-bit) of the snapshot group CDP Journal on the storage server.

snapshotgroupSafeCacheInfoTable : A table containing the SafeCache properties of snapshot groups.
snapshotgroupSafeCacheInfoGroupID : The snapshot group resource ID assigned by the storage server.
snapshotgroupSafeCacheInfoOption : Indicates whether the snapshot group SafeCache option is enabled or disabled on the storage server.
snapshotgroupSafeCacheInfoSuspend : Indicates whether the group SafeCache resource is currently suspended.
snapshotgroupSafeCacheInfoTotalSize : The size allocated when the cache resource was created.
snapshotgroupSafeCacheInfoFreeSize : The free resource size, in megabytes, before the maximum resource size is reached.
snapshotgroupSafeCacheInfoUsage : The percentage of the resource size that is used.
snapshotgroupSafeCacheInfoThreshold : The amount of data that must be in the cache before cache flushing begins.
snapshotgroupSafeCacheInfoFlushTime : The number of milliseconds before the cache begins to flush when below the data threshold level.
snapshotgroupSafeCacheInfoSkeipWriteCommands : This option allows the system to skip multiple pending write commands targeted for the same block.
snapshotgroupSafeCacheInfoFlushSpeed : The flush speed used during the flush process.
snapshotgroupSafeCacheInfoTotalSizeQuantity : The allocated size quantity when the cache resource was created.
snapshotgroupSafeCacheInfoTotalSizeUnit : The allocated size unit when the cache resource was created. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
snapshotgroupSafeCacheInfoFreeSizeQuantity : The free resource size quantity before the maximum resource size is reached.
snapshotgroupSafeCacheInfoFreeSizeUnit : The free resource size unit before the maximum resource size is reached. 0 = KB, 1 = MB, 2 = GB, 3 = TB.
snapshotgroupSafeCacheInfoResourceID : The cache resource ID assigned by the storage server.
snapshotgroupSafeCacheInfoTotalSize64 : The size allocated when the cache resource was created (64-bit).
snapshotgroupSafeCacheInfoFreeSize64 : The free resource size, in megabytes, before the maximum resource size is reached (64-bit).
snapshotgroupSafeCacheInfoStatus : The status of the snapshot group SafeCache on the storage server.

snapshotgroupMembers : The snapshot group member count on the storage server.
snapshotgroupAssignClients : The snapshot group assigned client count on the storage server.
snapshotgroupCacheOption : Indicates whether the snapshot group cache option is enabled or disabled on the storage server.
snapshotgroupReplicationOption : Indicates whether the snapshot group replication option is enabled or disabled on the storage server.
snapshotgroupTimeMarkOption : Indicates whether the snapshot group TimeMark option is enabled or disabled on the storage server.
snapshotgroupCDPOption : Indicates whether the snapshot group CDP option is enabled or disabled on the storage server.
snapshotgroupBackupOption : Indicates whether the snapshot group backup option is enabled or disabled on the storage server.
snapshotgroupSnapShotOption : Indicates whether the snapshot group snapshot notification option is enabled or disabled on the storage server.

snapshotgroupMemberTable
snapshotgroupMemberTableGroupID : The snapshot group resource ID assigned by the storage server.
snapshotgroupMemberTableName : The virtual resource name created by the user.
Email Alerts
Email Alerts is a unique FalconStor customer support utility that proactively identifies and diagnoses potential system or component failures and automatically notifies system administrators via email. With Email Alerts, the performance and behavior of servers can be monitored so that system administrators are able to take corrective measures within the shortest amount of time, ensuring optimum service uptime and IT efficiency. Using pre-configured scripts (called triggers), Email Alerts monitors a set of predefined, critical system components (SCSI drive errors, offline devices, etc.). With its open architecture, administrators can easily register new elements to be monitored by these scripts. When an error is triggered, Email Alerts uses the built-in CDP/NSS X-ray feature to capture the appropriate information. This includes the CDP/NSS event log, as well as a snapshot of the CDP/NSS appliance's current configuration and environment. The technical information needed to diagnose the reported problem is then sent to a system administrator.

Configure Email Alerts (Updated January 2012)


Email Alerts can be configured to meet your business needs. You can specify who should be notified about which events. Triggers can be defined to combine any of the scripts listed below. For example, a trigger can be used to monitor a particular thin disk or all thin disks.
To configure Email Alerts:
1. In the Console, right-click on your storage server and select Options --> Enable Email Alerts.


2. Enter general information for your Email Alerts configuration.

SMTP Server - Specify the mail server that Email Alerts should use to send notification emails.
SMTP Port - Specify the mail server port that Email Alerts should use.
SMTP Username/Password - Specify the user account that Email Alerts will use to log into the mail server.
User Account - Specify the email account that will be used in the From field of emails sent by Email Alerts.
Target Email - Specify the email address of the account that will receive emails from Email Alerts. This will be used in the To field of emails sent by Email Alerts.
CC Email - Specify any other email accounts that should receive emails from Email Alerts.
Subject - Specify the text that should appear on the subject line. The general subject defined during setup is followed by the trigger-specific subject. If the trigger does not have a subject, the trigger name and parameters are appended to the general email subject. For the syslogchk.pl trigger, the first alert category is appended to the general email subject. If the email is sent based on event severity, the event ID is appended to the general email subject.
Interval - Specify the time period between each activation of Email Alerts.
The Test button allows you to test the configuration by sending a test email.


3. Enter the contact information that should appear in each Email Alerts email.

4. Set the triggers that will cause Email Alerts to send an email.


Triggers are the scripts/programs that perform various types of error checking when Email Alerts activates. By default, FalconStor includes scripts/programs that check for low system memory, changes to the CDP/NSS XML configuration file, and relevant new entries in the system log.
Note: If the system log is rotated before the Email Alerts checking interval and the rotated log contains matching entries but the new log does not, no email will be sent. This is because only the current log is checked, not the previous one.

The following are some of the default scripts that are provided. An example of customized trigger parameters appears after this list.
activity.pl (Activity check) - This script checks to see if an fsstats activity statistics file exists. If it does, an email alert is sent with the activity file attached.
cdpuncommiteddatachk.pl -t 90 - This script checks for uncommitted data on CDP and generates an email alert message if the percentage of uncommitted data is more than that specified. By default, the trigger is activated when the percentage of uncommitted data is 90%.
chkcore.sh 10 (Core file check) - This script checks to see if a new core file has been created by the operating system in the bin directory of CDP/NSS. If a core file is found, Email Alerts compresses it, deletes the original, and sends an email report but does not send the compressed core file (which can still be large). If there are more than 10 (variable) compressed core files under the $ISHOME/bin directory, it keeps the latest 10 compressed core files and deletes the oldest ones.
defaultipchk.sh eth0 10.1.1.1 (NIC IP address check) - This script checks that the IP address for the specified NIC matches what is specified here. If it does not, Email Alerts sends an email report. You can add multiple defaultipchk.sh triggers for different NICs (for example, eth1 could be used in another trigger). Be sure to specify the correct IP address for each NIC.
diskusagechk.sh / 95 (Disk usage check) - This script checks the disk space usage at the root of the file system. If the percentage is over the specified percentage (default is 95), Email Alerts sends an email report. You can add multiple diskusagechk.sh triggers for different mount points (for example, /home could be used in another trigger).
fccchk.pl (QLogic HBA check) - This script checks each QLogic adapter initiator port and sends an email alert if there is a status change from Online to Not Online. The script also checks QLogic link status and sends an email alert if the status of FC Link Down changes from OK to Not OK.
fmchk.pl and smchk.pl - These scripts (for checking if the fm and ipstorsm modules are responding) are disabled.
ipstorstatus.sh (IPStor status check) - This script checks if any module of CDP/NSS has stopped. If so, Email Alerts sends an email report.


kfsnmem.sh 10 (CDP/NSS memory management check) - This script checks to see if the maximum number of memory pages has been set. If not, Email Alerts sends an email report. If it is set, the script checks the available memory pages. If the percentage is lower than the specified percentage (default is 10), Email Alerts sends an email report.
memchk.sh (Memory check) - This script takes a percentage as the parameter and checks whether the available system memory is below this percentage. If so, Email Alerts sends an email report.
netconfchk.pl (Inactive network interfaces/invalid broadcasts check) - This script uses the ifconfig command to check the network configuration once a day (by default) and sends an email alert if there are any network devices set to '_tmp' or any broadcast addresses that do not match the IP and netmask rules.
neterrorchk.pl (Network configuration check) - This script uses the ifconfig command to check the network configuration and sends an email alert if there are any network errors, overruns, dropped events, or network collisions.
powercontrolchk.pl - This script checks the system configuration file and reports absent power control in a failover setup once a day, by default.
processchk.pl (System process check) - This script checks system processes (via the ps command) and sends an email alert if there are processes using more than 1 GB of non-swapped physical memory. This script also sends an email alert if there are processes using more than 90% of CPU.
promisecheck.pl (Promise storage check) - This script checks events reported by Promise storage hardware every 10 minutes (by default) and sends an email alert if there is an event with a category other than Info. This trigger needs to be enabled on-site and requires the IP address and user/password account needed to access the storage via ssh. The ssh service must be enabled and started on the Promise storage.
repmemchk.sh (Memory check) - This script checks memory usage by continuous replication resources. If data in the CDR resource is using more than 1 GB of kernel memory, it triggers an email alert.
reportheartbeat.pl (Heartbeat check) - This script checks to see if the server is active. If it is, Email Alerts sends an email every 24 hours, by default, to report that the server is alive. You can change the default interval with the parameter -interval <value in minutes>.
reposit_check.pl - This script checks the configuration repository's current configuration. If it is not updated, an email alert is generated. This trigger applies only to a failover pair; it does not generate an email alert for a CDP/NSS server that has a quorum repository but is not in failover mode.
serverstatus.sh (Server status check) - This script checks the server module status. If any module has stopped, an email alert is sent.


snapshotreschk.pl (Snapshot resource area usage check) - This script checks the snapshot resource area usage. If the usage reaches the actual percentage threshold minus the margin value (default 10%), an email alert is sent to warn users to take remedial action before the actual threshold is reached.
swapcheck.pl 80 (Memory swap usage check) - This script checks available swap memory. If the percentage is below the specified value (default 80), an email alert is sent with the total swap space and the swap usage.
syslogchk.pl (System log check) - This script looks at the last 20 MB of messages in the system log. If any message matches what was defined on the System Log Check dialog and does not match what was defined on the System Log Ignore dialog, an email alert is sent with an attachment that includes all files in $ISHOME/etc and $ISHOME/log. If you want to limit the number of email alerts for the same system log event or category of events, set the -memorize parameter to the number of minutes to remember each event. If the same event was detected in the previous Email Alerts interval, no email alert is sent for that event. If an event is detected several times during the current interval, the first occurrence is reported in the email that is sent for that interval and the number of repetitions is indicated at the end of the email body with the last occurrence of the message. The default value is the same as the Email Alerts interval that was set on the first dialog (or the General tab if Email Alerts is already configured). Some of the common events checked by syslogchk are as follows:
Fail over to the partner
Take over the partner
Replication failure
Mirrored primary device failure
Mirror device failure
Mirror swap
SCSI Error Stack
Abandoned commands
FC pending commands
Busy FC Storage logout
iSCSI client reset because of commands stuck in IO Core
Kernel error
Kernel memory swap
thindevchk.pl -t 200 -s 200 -n 48 - This script monitors total free storage, storage pool free space, free space for thin device expansion, and the number of segments of a thin device. The trigger parameters are:
-t threshold of percentage of global free space: if the (global free storage space/global total storage space) is less than the given percentage, send an alert.


-i threshold of percentage of free space of each storage pool: if the (free storage space/total storage space) of any storage pool is less than the given percentage, send an alert.
-s threshold of free space for expansion of thin-provisioning devices: if the available GB of storage to expand each thin-provisioning device is less than the given value, send an alert. If the thin device VID is provided with -v, only that device is checked.
-v vid: the VID of a thin-provisioning device that needs to be checked for free storage for expansion.
-n threshold of number of segments of a thin-provisioning disk: if the number of segments on the primary disk or mirror disk of a thin-provisioning device exceeds the given threshold, send an alert.
-interval: enter this parameter followed by the number of minutes to trigger this script every n minutes. This parameter applies to all triggers. This interval overrides the global setting.
tmkusagechk - This script monitors TimeMark memory usage. It checks the values of 'Low Total Memory' and 'Total Memory reserved by IOCore'. When TimeMark memory usage goes over the lower of these two values, by the percentage defined in the trigger, an email alert is generated.
xfilechk.sh - This script checks for and reports changes to executable files on the server. If an executable file is added, removed, renamed, or modified, it sends an email alert. It does not monitor non-executable files.
zombiechk.pl (Defunct process check) - This script checks system processes once a day (by default) and sends an email alert if there are 10 (default) or more defunct processes.
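As an illustration only (these values are not the shipped defaults), a customized Trigger tab might contain entries such as:

diskusagechk.sh /home 90
defaultipchk.sh eth1 10.1.1.2
thindevchk.pl -t 150 -s 100 -n 48 -interval 60

The first entry monitors the /home mount point at a 90% threshold, the second verifies the address assigned to eth1, and the third checks thin-provisioned storage every 60 minutes.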

5. Select the components that will be included in the X-ray.


Note: Because of its size (minimum of 2 MB), the X-ray file is not sent by default with each notification email. It will, however, be available, should the system administrator require it.

The following options are available to customize your X-ray. Regardless of which option you choose, the bash_history file is created containing a history of the commands typed. This is useful in obtaining the history of commands typed before an issue occurred.

System Information - When this option is selected, the X-ray creates a file called info which contains information about the entire system, including: host name, disk usage, operating system version, mounted file systems, kernel version, CPU, running processes, IOCore information, uptime, and memory. In addition, if an IPMI device is present in the server, the X-ray info file also includes the following files for IPMI:
ipmisel - IPMI system event log
ipmisensor - IPMI sensor information
ipmifru - IPMI built-in FRU information
IPStor Configuration - This information is retrieved from the /usr/local/ipstor/etc/<hostname> directory. All configuration information (ipstor.conf, ipstor.dat, IPStorSNMP.conf, etc.), except for shared secret information, is collected.
SCSI Devices - SCSI device information included in the info file.
IPStor Virtual Device - Virtual device information included in the info file.
Fibre Channel - Fibre Channel information.
Log File - The Linux system message file, called messages, is located in the /var/log directory. All storage server messages, including status and error messages, are stored in this file.
Loaded Kernel - Loaded kernel module information is included in the info file.
Network Configuration - Network configuration information is included in the info file.


Kernel Symbols - This information is collected in the event it is needed for debugging purposes.
Core File - The /usr/local/ipstor path is searched for any core files that might have been generated, to further help in debugging reported problems.
Scan Physical Devices - Physical devices are scanned and information about them is included. You can select Scan Existing Devices or Discover New Devices.

6. Indicate the terms that should be tracked in the system log by Email Alerts.

The system log records important events or errors that occur in the system, including those generated by CDP/NSS. This dialog allows you to rule out entries in the system log that have nothing to do with CDP/NSS, and to list the types of log entries generated by CDP/NSS that Email Alerts needs to examine. Entries that do not match the entries entered here are ignored, regardless of whether or not they are relevant to CDP/NSS. The trigger for monitoring the system log is syslogchk.pl. To inform the trigger of which specific log entries need to be captured, you can specify the general types of entries that need to be inspected by Email Alerts. On the next dialog, you can enter terms to ignore, thereby eliminating entries that match these general types, yet can still be disregarded. The resulting subset contains all entries for which Email Alerts needs to send out email reports. Each line is a regular expression. The regular expression rules follow the pattern for AWK (a standard Unix utility).
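Because each line is an AWK-style regular expression, a candidate pattern can be tried against the current system log from a shell before it is added to the System Log Check dialog. The patterns below are purely illustrative examples, not FalconStor defaults:

# Print system log lines that would match the candidate patterns
awk '/scsi.*(error|abort)/' /var/log/messages
awk '/kernel:.*I\/O error/' /var/log/messages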
Note: By default, the system log file is included in the X-ray file which is not sent with each notification email.


7. Indicate which categories of internal messages should not be included.

By default, all categories are disabled except the syslog.ignore.customized. If a category is checked, it will ignore any messages related to that category. Select the Customized System Log Ignore tab to add customized ignore entries. You can enter terms to ignore, thereby eliminating entries that will cause Email Alerts to send out email reports. Each line is a regular expression. The regular expression rules follow the pattern for AWK (a standard Unix utility).


8. Select the severity level of server events for which you want to receive an email alert.

By default, the alert severity level is set to None. You can select one of the following severity levels:
Critical - checks only the critical severity level.
Error - checks the error severity level and any level higher than error.
Warning - checks the warning severity level and any level higher than warning.
Informational - checks all severity levels.
9. Confirm all information and then click OK to enable Email Alerts.


Modify Email Alerts properties


Once Email Alerts is enabled, you can modify the information by right-clicking on your storage server and selecting Email Alerts.

Click on the appropriate tab to update the desired information.
The General tab displays server and message configuration and allows you to send a test email.
The Signature tab allows you to edit the contact information that appears in each Email Alerts email.
The Trigger tab allows you to set triggers that will cause Email Alerts to send an email, as well as set up an alternate email.
The Attachment tab allows you to select the information (if any) to send with the email alert. You can send log files or X-ray files.
The System Log Check tab allows you to add, edit, or delete syntax from the log entries that need to be captured. You can also specify the general types of entries that need to be inspected by Email Alerts.
The System Log Ignore tab allows you to select system log entries to ignore, thereby eliminating entries that would cause Email Alerts to send out email reports.


Email format
The email body contains the messages returned by the triggers. The alert text starts with the category, followed by the actual message coming from the system log. The first 30 lines are displayed. If the email body is more than 16 KB, it is compressed and sent as an attachment to the email. The signature defined during Email Alerts setup appears at the end of the email body.

Limiting repetitive emails


To limit repetitive emails, you have the option to limit the number of email alerts for the same event ID. By using the -memorize parameter for the syslogchk.pl trigger, you can have the Email Alerts module memorize the IDs and timestamps of events for which an alert is sent. In this case, an event detected with the same event ID as an event in the previous interval will not trigger an email alert for that same event. However, if an event is detected several times during the current checking interval, all those events are reported in the email that is sent for that interval. The -memorize parameter for the syslogchk.pl trigger allows you to set the trigger memorization logic and the number of hours to remember each event. The default value is 24 hours, which results in sending alerts for the same event once a day.

Script/program trigger information


Email Alerts uses script/program triggers to perform various types of error checking. By default, FalconStor includes several scripts/programs that check for low system memory, changes to the CDP/NSS XML configuration file, and relevant new entries in the system log.

Custom email destination

You can specify an email address to override the default Target Email or a text subject to override the default Subject. To do this:
1. Right-click on your storage server and select Email Alerts --> Trigger tab.

CDP/NSS Administration Guide

470

Email Alerts

2. For an existing trigger, highlight the trigger and click Edit.

The alternate email address, along with the Subject, is saved to the $ISHOME/etc/callhome/trigger.conf file when you have finished editing.
Note: If you specify an email address, it overrides the return code. Therefore, no attachment will be sent, regardless of the return code.

New script/program

The trigger can be a shell script or a program (Java, C, etc.). If you create a new script/program, you must add it to the $ISHOME/etc/callhome/trigger.conf file so that Email Alerts knows of its existence.

Return codes

Return codes determine what happens as a result of the script's/program's execution. The following return codes are valid:
0: No action is required and no email is sent.
1: Email Alerts sends email without any attachments.
2: Email Alerts attaches all files in $ISHOME/etc and $ISHOME/log to the email.
3: Email Alerts sends the X-ray file as an attachment (which includes all files in $ISHOME/etc and $ISHOME/log). Because of its size (minimum of 2 MB), it is recommended that you do not attach the X-ray file with each notification email.
The $ISHOME/etc directory contains a CDP/NSS configuration file (containing virtual device, physical device, HBA, database agent, etc. information). The $ISHOME/log directory contains Email Alerts logs (containing events and output of triggers).

Output from trigger

In order for a trigger to send useful information in the email body, it must redirect its output to the environment variable $IPSTORCLHMLOG.

Sample script

The following is the content of the storage server status check trigger, ipstorstatus.sh:

#!/bin/sh
RET=0
if [ -f /etc/.is.sh ]
then
    . /etc/.is.sh
else
    echo Installation is not complete. Environment profile is missing in /etc.
    echo
    exit 0 # don't want to report error here so have to exit with error code 0
fi
$ISHOME/bin/ipstor status | grep STOPPED >> $IPSTORCLHMLOG
if [ $? -eq 0 ] ; then
    RET=1
fi
exit $RET

If any CDP/NSS module has stopped, this sample trigger exits with return code 1, causing Email Alerts to send a notification email; a trigger that exits with return code 2 would additionally attach all files under $ISHOME/etc and $ISHOME/log.
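As an illustration of how a custom trigger might be written, the following sketch checks free space under $ISHOME/log and asks Email Alerts to attach the support files when space runs low. The script name (checkfreespace.sh) and the threshold are assumptions made for the example; only the $IPSTORCLHMLOG variable and the return-code meanings come from the text above. Remember that any new script must also be registered in $ISHOME/etc/callhome/trigger.conf, as noted earlier.

#!/bin/sh
# checkfreespace.sh - hypothetical Email Alerts trigger sketch
# Return codes follow the table above: 0 = no email, 1 = email only,
# 2 = email with $ISHOME/etc and $ISHOME/log attached.
RET=0
if [ -f /etc/.is.sh ]
then
    . /etc/.is.sh        # load the IPStor environment profile
else
    exit 0               # environment missing; report nothing
fi

# Free space (in 1 KB blocks) on the file system holding $ISHOME/log
FREE=`df -P "$ISHOME/log" | awk 'NR==2 {print $4}'`
THRESHOLD=524288         # 512 MB, an arbitrary example threshold

if [ "$FREE" -lt "$THRESHOLD" ]
then
    # Text written to $IPSTORCLHMLOG appears in the alert email body
    echo "Only ${FREE} KB free under $ISHOME/log" >> $IPSTORCLHMLOG
    RET=2
fi
exit $RET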

BootIP
FalconStor's boot over IP service for Windows and Linux-based storage servers allows you to maximize business continuity and return on investment. BootIP enables IT managers to provision disk storage and its related services to achieve maximum return on investment (ROI). BootIP leverages the proven SAN management infrastructure and storage services available in FalconStor's network storage infrastructure to ensure business continuity, high availability, and effective disaster recovery planning.

Set up BootIP
Setting up BootIP involves several steps, which are outlined below:
1. Prepare a sample computer with the operating system and all the applications installed.
2. Install CDP or NSS on a server computer.
3. Install the Microsoft iSCSI initiator boot version and DiskSafe on the sample computer. The Microsoft iSCSI Software Initiator enables connection of a Windows host to an external iSCSI storage array using Ethernet NICs. The boot version supports configurations that boot Windows Server 2003/Vista/2008 hosts. When installing the Microsoft iSCSI Software Initiator, check the item Configure iSCSI Network Boot Support and select the network interface driver for the NIC that will be used to boot via iSCSI.
4. Install the FalconStor Management Console.

You can also create a boot image for client computers that do not have disks. To do this, you need to prepare a computer to be used for your boot image.
1. Make sure everything is installed on the computer, including the operating system and the applications that the client computers will use.
2. Once you have prepared the computer, use DiskSafe to back up the computer to create a boot image for diskless client computers.
3. After preparing the boot image, create TimeMarks from the boot image, then mount the TimeMarks as individual TimeViews and assign them to the diskless computers.
4. Configure the diskless computers to boot up from the network.

Using DiskSafe, you can clone a boot image from the sample computer and put the image on an IPStor-managed virtual disk. You can then set up BootIP from the server and use the boot image to boot the diskless client computers.

Prerequisites
A valid Operating System (OS) image must be prepared for iSCSI remote boot. The conditions of a valid OS image for an iSCSI boot client are listed below:
The OS must be one of the following:
- Windows 2003 with the Microsoft iSCSI initiator boot version installed.
- Windows Vista SP1 with the Microsoft iSCSI initiator enabled manually.
- Windows 2008 with the Microsoft iSCSI initiator enabled.
The network adapter used by remote boot must be certified by Microsoft for the iSCSI Boot Component Test.
In the Local Area Connection Properties of this network adapter, Internet Protocol (TCP/IP) must be checked.
In Windows 2003, make sure the iSCSI Boot (BootIP) sequence is correct using the command: c:\iscsibcg /verify /fix
Make sure the network interface card is the first boot device in the client machine's BIOS.

In addition to a valid OS image and client BIOS configuration, the following properties should be set for the mirrored iSCSI disk before remote boot:
- Assign LUN 0 to the iSCSI disk used for remote boot.
- The iSCSI disk must be assigned to the first iSCSI target with the smallest target ID.
- If the iSCSI disk contains the Windows 2008 or Windows Vista OS, the iSCSI disk's signature, which is changed by DiskSafe during backup, must be changed back to the original signature to match the local disk backed up by DiskSafe. You can use the following IPStor iscli command to change the disk signature:
# iscli setvdevsignature -s 127.0.0.1 -v VID -F

Note: The VID should be the virtual device ID of the iSCSI disk.

Create a boot image for a diskless client computer


To create a boot image that can be used to boot up a single diskless client computer, follow the steps below:
1. Prepare the storage and user access for the storage server from the FalconStor Management Console. For details, see Initialize the configuration of the storage server.
2. Enable BootIP via the FalconStor Management Console. For details, see Enable the BootIP from the FalconStor Management Console.
3. Create a boot image by using DiskSafe to clone a virtual disk and set up the BootIP properties from the FalconStor Management Console. For details, see Use DiskSafe to clone a boot image, Set BootIP properties, and Set the Recovery Password.
4. Shut down the sample computer and remove the system disk.
5. Boot up the iSCSI disk remotely on the original client computer. For details, see Remote boot the diskless computer.
6. Use the System Preparation Tool to configure the automatic deployment of the Windows OS on your remote boot client computer. For details, refer to Use the Sysprep tool.
7. Create a TimeMark of the boot image. For details, see Create a TimeMark.
8. Create a TimeView from the TimeMark. For details, see Create a TimeView.
9. Assign the TimeView to this SAN client. For details, see Assign a TimeView to a diskless client computer.
10. Set up the BootIP properties from the FalconStor Management Console. For details, see Set BootIP properties.
11. Boot up the diskless client computer remotely.

Initialize the configuration of the storage Server


Initializing the configuration of the storage server involves several steps, including:
- Entering the license keycodes
- Preparing the storage and adding your virtual device to the storage pool
- Creating an IPStor user account
- Selecting users who will have access to the storage pool you have created

Enable the BootIP from the FalconStor Management Console


You will need to enable the BootIP function before you can use it. You must also set the BootIP properties of the SAN clients. To do this:
1. Log into the storage server from the FalconStor Management Console.
2. Right-click on the [HostName] and select Options --> Enable BootIP.
3. If you have external DHCP, DHCP will not be enabled on the storage server. Therefore, keep the Enable DHCP option unchecked.
4. Click OK to start the BootIP daemon.

Use DiskSafe to clone a boot image


You can use DiskSafe to clone a boot image to be used at a later date. To do this:
1. While running DiskSafe on the sample computer, right-click on Disks and select Protect.
2. Click Next to launch the Protect Disk Wizard.
3. Choose the system disk and click Next.
4. Click New Disk.
5. Click Add Server.
6. Enter the storage server name (or IP), user name, and password; check the iSCSI protocols. Then click OK.
7. Click OK to allocate the disk.
8. Click Next to continue through the remaining wizard settings.
9. After synchronization has finished, right-click on the disk you protected and select Advanced --> Take Snapshot.
When the disk is protected by DiskSafe, an IPStor-managed virtual disk containing the boot image is generated and assigned to the sample computer from the FalconStor Management Console.

Set BootIP properties


To set the BootIP properties, follow the instructions below:
1. From the FalconStor Management Console, navigate to SAN Clients.
2. Right-click on the client host name and select Boot properties. The Boot Properties dialog box appears.
3. Select the Boot type as BootIP. The options become available.
4. Uncheck Boot from the local disk.
5. Optional: Select the Boot from Local Disk check box if you want the computer to boot up locally by default.
6. Type the MAC address of the remote boot client and click OK.

Set the Recovery Password


Once you have finished setting DiskSafe protection, you can set one of two authentication modes for remote boot:
- Unauthenticated mode
- CHAP mode

Set the Recovery password from the iSCSI user management


To set the Recovery password from the iSCSI user management, follow the instructions below:
1. Right-click on the [Server Host Name] and select iSCSI Users. An iSCSI user management window displays.
2. Select the appropriate iSCSI user.
3. Click Reset CHAP secret, type the secret, confirm it, and click OK.

Set the authentication and Recovery password from iSCSI client properties
You can also set the authentication and Recovery password from the iSCSI client properties. To do this:
1. Navigate to [Client Host Name] and expand it.
2. Right-click on iSCSI and select Properties. An iSCSI Client Properties window displays.
3. Select User Access to set authentication.
4. Optional: Select Allow unauthenticated access. The user does not need to authenticate for remote boot.

5. Optional: Select users who can authenticate for the client. You will be prompted to enter the user name, CHAP secret and confirm CHAP secret. You will also be prompted to type the Recovery password for remote boot.
6. Click OK.

Note: Mutual CHAP secret is not currently supported for iSCSI authentication.

Remote boot the diskless computer


For Windows 2003
To enable your client computer to boot remotely, you need to configure the BIOS of the computer and set the network interface card (NIC) as the first boot device. For details about configuring the BIOS, refer to the user documentation of your main board.
1. After shutting down the sample computer, remove the system disk.
2. Boot up the diskless sample computer.
3. The client will boot from the network and get its IP address from the DHCP server.
4. Press F8 to enter the boot menu.
5. If you did not press F8, the default auto-selection should be Remote Boot (gPXE); press Enter.
6. The client then starts booting remotely.

For Windows Vista/2008


If the iSCSI disk contains the Windows 2008 or Windows Vista OS, the iSCSI disk's signature, which is changed by DiskSafe during backup, must be changed back to the original signature so that it is the same as the local disk backed up by DiskSafe. You can use the following IPStor iscli command to change the disk signature:
# iscli setvdevsignature -s 127.0.0.1 -v VID -F

VID is the virtual device ID of the mirror disk or TimeView device. You can confirm the VID on the General tab of the SAN Resource mirror disk, or of the TimeView you assigned for remote boot, in the FalconStor Management Console.

Use the Sysprep tool


Sysprep is a Microsoft tool that allows you to automate a successful Windows operating system deployment on multiple computers. Once you have performed the initial setup steps on a single machine, you can run Sysprep to prepare the sample computer for cloning. The Factory mode of Sysprep is a method of pre-configuring installation options to reduce the number of images to maintain. You can use the Factory mode to install additional drivers and applications at the stage after the reboot that follows Sysprep. Normally, running Sysprep as the last step in the pre-installation process prepares the computer for delivery. When rebooted, the computer displays Windows Welcome or MiniSetup. By running Sysprep with the factory option, the computer reboots in a network-enabled state without starting Windows Welcome or MiniSetup. In this state, Factory.exe processes its answer file, Winbom.ini, and performs the following actions:
1. Copies drivers from a network source to the computer.
2. Starts Plug and Play enumeration.
3. Stages, installs, and uninstalls applications on the computer from source files located on either the computer or a network source.
4. Adds customer data.

For Windows 2003:


To prepare a reference computer for Sysprep deployment in Windows 2003, follow these steps:
1. On a reference computer, install the operating system and any programs that you want installed on your destination computers.
2. Click Start, click Run, type cmd, and then click OK.
3. At the command prompt, change to the root folder of drive C, and then type md Sysprep to create the Sysprep folder.
4. Open the Deploy.cab file and copy the Sysprep.exe file and the Setupcl.exe file to the Sysprep folder. If you are using the Sysprep.inf file, copy this file to the Sysprep folder as well. In order for the Sysprep tool to function correctly, the Sysprep.exe file, the Setupcl.exe file, and the Sysprep.inf file must all be in the same folder. For remote boot, add LegacyNic=1 to the Sysprep.inf file under the [Unattended] section (see the sample Sysprep.inf fragment after this procedure).
5. To run the Sysprep tool, type the following command at the command prompt:

Cmd: Sysprep /optional parameter


Note: For a list of parameters, see the "Sysprep parameters" section. http://technet.microsoft.com/en-us/library/cc758953.aspx

If you run the Sysprep.exe file from the %systemdrive%\Sysprep folder, the Sysprep.exe file removes the folder and its contents.
6. In the System Preparation Tool, choose the shutdown mode Shutdown and click Reseal to prepare the computer. The computer should shut down by itself.
7. Optional: You can use Snapshot Copy or TimeView to assign the image to the other clients and remote boot to initialize the other Windows 2003 systems.
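The fragment below illustrates what a minimal Sysprep.inf might look like with the LegacyNic=1 entry mentioned in step 4 of the procedure above. Apart from the [Unattended] LegacyNic line taken from this guide, the section and key names are standard Sysprep.inf conventions and the values are placeholders you would replace with your own; treat this as a sketch, not a complete answer file.

; Minimal illustrative Sysprep.inf fragment (values are placeholders)

[Unattended]
; LegacyNic=1 is required for remote boot, per step 4 above
LegacyNic=1

[GuiUnattended]
; Example time zone index (035 = Eastern Time)
TimeZone=035

[UserData]
; An asterisk lets Setup generate a computer name automatically
ComputerName=*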

Use the Setup Manager tool to create the Sysprep.inf answer file
Once you have automated the deployment of Windows 2003, you can use the Sysprep.inf file to customize the initial Windows settings, such as the user name, organization, host name, product key, networking components, workgroup, and time zone. To install the Setup Manager tool and to create an answer file, follow these steps:
1. Navigate to the Deploy.cab file you opened earlier and double-click on it to open it.
2. On the Edit menu, click Select All.
3. On the Edit menu, click Copy to Folder.
4. Click Make New Folder and enter a name for the Setup Manager folder. For example, type setup manager, and then press Enter.
5. Click Copy.
6. Open the new folder that you created, and double-click the Setupmgr.exe file. The Windows Setup Manager Wizard launches.
7. Follow the instructions in the wizard to create a new answer file.
8. Select the Sysprep setup to generate the Sysprep.inf file.
9. Select Yes, fully automate the installation. Later, you will be prompted to enter the license keycode.
10. Select whether to automatically generate a computer name or specify a computer name.
11. Save the Sysprep.inf file to C:\Sysprep\.
12. Click Finish to exit the Setup Manager wizard.

For Windows Vista/2008


Use the Windows System Image Manager to create the Sysprep.xml answer file

In order to begin creating a Sysprep.xml file, you will need to load a Windows Image File (WIM) and install the Automated Installation Kit (AIK) for Windows Vista SP1 and Windows Server 2008: http://www.microsoft.com/downloads/details.aspx?FamilyID=94bb6e34-d890-493281a5-5b50c657de08&DisplayLang=en
Prepare a reference computer for Sysprep deployment

1. On a reference computer, install the operating system and any programs that you want installed on your destination computers.
2. Use DiskSafe to clone the system disk to the storage server.
3. Boot the mirror disk remotely (setting the related BootIP configuration).
4. Open the Windows System Image Manager (Start --> All Programs --> Microsoft Windows AIK --> Windows System Image Manager).
5. Copy Install.wim from the product installation package (source) to your disk.
6. Create a catalog in the WAIK.
7. On the File menu, click Select Windows Image.
8. Navigate to the location where you saved Install.wim, and then click Open. You are prompted to select an image.
9. Select the appropriate version of Windows Vista/2008, and then click OK.
10. On the File menu, click New Answer File.
11. If a message displays that a catalog does not exist, click OK to create one.
12. From the Windows image, choose the proper component.
13. From the answer file, you can set the following options:
- Auto-generate a computer name
- Add or edit Organization and Owner information
- Set the language and locale
- Set the initial tasks screen not to show at logon
- Set Server Manager not to show at logon
- Set the Administrator password
- Create a second administrative account and set the password
- Run a post-image configuration script under the administrator account at logon
- Set automatic updates to not configured (to be configured post-image)
- Configure the network location
- Configure screen color/resolution settings
- Set the time zone

1. Press Control + S and choose C:\windows\system32\sysprep\ as the save location and sysprep.xml as the file name.
2. Click Save to continue.
3. Navigate to C:\Windows\System32\Sysprep and enter one of the following:
sysprep /generalize /oobe /shutdown /unattend:sysprep.xml
or
sysprep /generalize /audit /shutdown /unattend:sysprep.xml

Note: /generalize must be run. After reboot, a new SID is created and the clock for Windows activation resets.

To apply the settings in auditSystem and auditUser, boot to Audit mode by using the sysprep /audit command. The machine will shut down, and you can then use Snapshot Copy from the FalconStor Management Console to clone the mirror disk and remote boot to initialize the other Windows Vista/2008 systems.

Create a TimeMark
Once your boot image has been created and is on the storage server, it can be used as a base image for your diskless client computers. You will need to create a separate boot image for each computer that you want to boot up remotely. In order to create a separate boot image for a computer, you need to create a TimeMark of the base image first, then create a TimeView from the TimeMark. The TimeView can be assigned to an individual client computer for remote boot. To create a TimeMark of the base boot image:
1. Launch the FalconStor Management Console if you have not done so yet.
2. Select your virtual disk under SAN Resources.
3. Right-click on the disk and select TimeMark --> Enable. A message box appears, prompting you to create the SnapShot Resource for your virtual disk.
4. Click OK and follow the instructions of the Create SnapShot Resource Wizard to create the SnapShot Resource.
5. Click Finish when you are done with the creation process. The Enable TimeMark Wizard appears.
6. Click Next and specify the schedule information if you want to create TimeMarks regularly. You can skip the next two steps if you have specified the schedule information, as TimeMarks will be created automatically based on your schedule.
7. Click Finish when you are done. The wizard closes and you are returned to the main window of the FalconStor Management Console.

8. From the FalconStor Management Console, right-click your virtual disk and select TimeMark --> Create. The Create TimeMark dialog box appears.
9. Type a comment for the TimeMark and click OK. The TimeMark is created.

Create a TimeView
After creating a TimeMark of your base boot image, you can create a TimeView from the TimeMark, and then assign the TimeView to a diskless computer for remote boot. To create a TimeView from a TimeMark:
1. Start the FalconStor Management Console if it is not running yet.
2. Right-click your virtual disk and select TimeMark --> TimeView. The Create TimeView dialog box appears.
3. Select the TimeMark from which you want to create a TimeView and click OK.
4. Type a name for the TimeView in the TimeView Name box and click OK. The TimeView is created.
Note: Only one TimeView can be created per TimeMark. If you want to create multiple TimeViews for multiple diskless computers, you will need to create multiple TimeMarks from the base boot image first.

Assign a TimeView to a diskless client computer


After creating a TimeView from your base boot image, you can assign it to a specific diskless client computer so that the computer can be booted up remotely from the TimeView. To assign a TimeView to a client computer for remote boot, you must perform the following tasks in the FalconStor Management Console:
1. Add a SAN Client.
2. Assign the TimeView to the SAN Client.
3. Associate the SAN Client with a diskless computer and configure it for remote boot.

Add a SAN Client


1. Start the FalconStor Management Console if you have not done so yet.
2. Right-click SAN Clients and select Add. The Add Client Wizard appears.
3. Click Next and enter a name in the Client Name box.
4. Select SAN/IP as the protocol for the client and then click Next.
5. Review the settings and click Finish. The SAN Client is added.

Assign a TimeView to the SAN Client


1. Start the FalconStor Management Console if you have not done so yet.
2. Right-click the TimeView and select Assign. The Assign a SAN Resource Wizard appears.
3. Click Next and assign LUN 0 to the target.
4. Click Next and review the settings, then click Finish.
Note: Only LUN 0 is supported for iSCSI remote boot.

The BootIP boots the image that is assigned to the smallest target ID with LUN 0.

Recover Data via Remote boot


DiskSafe is used to protect the client's system and data disks/partitions. In the event of a system failure, the client can boot up from the iSCSI disk or selected TimeView, including the OS image, and restore the system or disk data to the local disk or a new disk using DiskSafe. A valid operating system image is prepared for DiskSafe to clone to the iSCSI disk for remote boot. To recover data using DiskSafe when the client boots up from an iSCSI disk, refer to Remote boot the diskless computer on page 478.
1. After remotely booting, hot plug in the local disk (original disk) to restore.
2. Rescan the disks from Disk Management.
3. Open the DiskSafe console and remove the existing system disk protection in DiskSafe.
4. Create a new DiskSafe protection for the recovery disk by right-clicking on the disk and selecting Protect.
5. Select the remote boot disk (disk 0) to be the Primary disk, then click Next. For Windows Vista/2008 only: before recovering the system to the local disk, you must flip the disk signature first for local boot. To do so, type the IPStor command # iscli setvdevsignature -s 127.0.0.1 -v VID -F, where VID is the virtual device ID of the remote boot disk.
6. Check Allow mirror disks with existing partitions to restore to the original disk, and then click Yes.
7. Select the original primary disk from the eligible mirror disks list and click Next. The system will warn you that the mirror disk is a local disk.
8. Click Yes.
9. Finish the Protect Disk wizard; DiskSafe starts to synchronize the current data to the local disk.
10. Once synchronization has finished and the restore process succeeds, you can shut down the server normally.
11. Disable BootIP from the iSCSI client or set Boot from local disk.
12. Boot the client locally with the disk you restored.
13. Once the system successfully boots up, open the DiskSafe Management Console and remove the protection that you just created for recovery.
14. Re-protect the disk.
Note: After the remote boot, verify the status of services and applications to make sure everything is up and ready after start up.

Make sure your boot up disk is from the FALCON IPSTOR DISK SCSI Disk Device. To do this, navigate to Disk Management and right-click on the first disk (Disk 0). It should show FALCON IPSTOR DISK SCSI Disk Device.

Remotely boot the Linux Operating System


Remotely install CentOS to an iSCSI disk
Remote boot the iSCSI disk and install CentOS 5.x on it. Before you begin, make sure you have a CentOS 5.x installation package and have prepared a diskless client computer with a PXE-boot-capable NIC adapter.

Remote boot from the FalconStor Management Console


From the FalconStor Management Console:
1. Right-click on SAN Clients and add a customized client name. The Add Client Wizard displays.
2. Select the iSCSI protocol and click Next.
3. Click Add to add an iSCSI initiator name and click OK.
4. Check the iSCSI initiator name you created and click Next.
5. Set the authentication for the client to Allow unauthenticated access and click Next.
6. Keep the client IP address empty and finish the Add Client Wizard.
7. Create a new (empty) SAN resource with a size of 6 to 10 GB (depending upon the size of the Linux system).
8. Assign the new SAN resource to the client machine.
9. From the FalconStor Management Console, navigate to SAN Clients, right-click on the client host name you added, and select Boot properties. The Boot Properties dialog box appears.
10. Select the Boot type as BootIP. The options become available.
11. Keep Boot from the local disk unchecked.
12. Type the MAC address of the diskless client and click OK.

Remote boot from the Client


1. For the diskless client, set the boot sequence in the BIOS to boot from PXE first and then from the DVD-ROM.
2. Boot up the client machine remotely and launch the CentOS 5.x installation package at the same time. After the remote boot, the installation package starts loading.
3. Select Advanced storage configuration when prompted to select the drive(s) to use for installation.
4. Select Add iSCSI target and click Add drive. The Enable network interface wizard appears.
5. Keep the default setting and click OK. The Configure iSCSI Parameters wizard appears.
6. Enter the Target IP Address (your storage server IP) and click Add target.
7. Click Yes to initialize the iSCSI disk and erase all data. An sda disk (FALCON IPSTOR DISK) displays in the drive list.
8. If you would like to review and modify the partitioning layout, check that option and click Next.
9. Finish the installation setup wizard and install the OS.
10. Once the installation finishes, click Reboot and remote boot again to boot up CentOS from the iSCSI disk.
Note: Per Microsoft (http://technet.microsoft.com/zh-tw/library/ee619722%28WS.10%29.aspx), PXE boot from an iSCSI disk on client versions of Windows, such as Windows Vista or Windows 7, is not supported.

BootIP and DiskSafe


If you plan to perform BootIP before using DiskSafe to protect a system running in a Windows 2008 R2 environment, refer to Microsoft knowledge base article KB 976042 (http://support.microsoft.com/kb/976042) and unbind the WFP Lightweight Filter for the NIC before protecting your system disk.

Remote boot and DiskSafe


To perform a remote boot from DiskSafe version 3.7 snapshot images, there is no need to perform a flip disk signature operation. You can simply mount the snapshot as a TimeView, assign it to the corresponding SAN Client, and perform a remote boot.

Troubleshooting / FAQs
This section contains error codes and helps you through some frequently asked questions and issues that may be encountered when setting up and running the CDP/NSS storage network. Click on one of the following topics for more information:
Troubleshooting topics
Logical resources
Network connectivity
NIC Port Bonding
Virtual devices
Storage server X-ray
Storage Server
iSCSI Downstream Configuration
Failover
Replication
TimeMark
Snapshot
SafeCache
SNMP
BootIP
Event log
Fibre Channel target mode and storage
SCSI adapters and devices
Service-Enabled Devices
Multipathing method: MPIO vs. MC/S
Cross-mirror failover on a virtual appliance
Windows client debug information

Frequently Asked Questions (FAQ)


The following tables contain some general and specific questions and answers that may arise while managing your CDP or NSS servers.
Question: Why did my storage server not automatically start after rebooting? What should I do?
Answer: If your CDP or NSS server detects a configuration change during startup, autostart will not occur without user interaction. Typing YES allows the server to continue to start. Typing NO prevents the server from starting. Typing nothing (no user input) results in the server aborting the auto-start process. If the server does not automatically start after a reboot, you can manually start it from the command line using the ipstor start all command.

Question: Why are my snapshot resources marked off-line?
Answer: Snapshot resources will be marked off-line if the physical resource they were created from is disconnected from a single server in a failover set prior to failing over to the secondary server.

Question: Why does it take so long (several minutes) for my Solaris SAN client to load?
Answer: When the Client starts, it reads all of the LUN entries in the /kernel/drv/sd.conf file. It can take several minutes for the client to load if there are a lot of entries. It is recommended that the /kernel/drv/sd.conf file only contain entries for LUNs that are physically present so that time is not spent scanning LUNs that may not be present.

Question: I used the rpm -e command to uninstall the storage server. How do I now remove the IPStor directory and its subdirectories?
Answer: Execute the following command from the /usr/local directory: rm -rf IPStor

Question: How can I make sure information is updated correctly if I change storage server IP addresses using a third-party utility, like yast?
Answer: You can change storage server IP addresses through the FalconStor Management Console using System Maintenance --> Network Configuration.

Question: I changed the hostname of the storage server. Why are all block devices now marked offline and appear as foreign devices?
Answer: You cannot change the hostname of the storage server if you are using block devices.

Question: My IPStor directory and its subdirectories are still visible after using rpm -e to uninstall the storage server. How do I remove them?
Answer: Execute the following command from the /usr/local directory: rm -rf IPStor

Question: I changed a storage server IP address using yast. Why was the information not updated correctly?
Answer: Changing a storage server IP address using a third-party utility is not supported. You will need to change storage server IP addresses via the console using System Maintenance --> Network Configuration.

Question: Why am I having trouble completing offline license activation?
Answer: If you are unable to complete offline activation successfully, try the following solutions:
1. In order to prevent the possibility of unsuccessful email delivery to the FalconStor activation server, disable Delivery Status Notification (DSN) before you send the activation request email to Activate.Keycode@falconstor.com.
2. If you do not receive a reply to your offline activation email from the FalconStor activation server within one hour after sending it, check your email encoding and change it to UNICODE (UTF-8) if set otherwise, then send the email again.

NIC Port Bonding


Question
What if I need to change an IP address for NIC port bonding?

Answer
During the bonding process, you will have the option to enter/select a new IP address. Right-click on the server and select System Maintenance --> Bond NIC Port.

Event log
Question
Why is the event log displaying event messages as numbers rather than text?

Answer
You may be low on space. Check to make sure that there is at least 5 MB of free space on the file system on which the console is installed. If not, free up some space.

SNMP
Question: The trap, ucdShutdown, appears as a raw message at the management console. Is this a problem?
Answer: When stopping the SNMP daemon, the daemon itself will issue a trap, ucdShutdown. You can ignore the extra trap.

Question: How do I load the MIB file?
Answer: Navigate to $ISHOME/etc/snmp/mibs/IPSTOR-MIB.TXT and copy the IPSTOR-MIB.TXT file to the machine running the SNMP manager.

Virtual devices
Question
Why wont my virtual device expand?

Answer
You may have exceeded your quota. If you have a set quota and have allocated a disk greater than your quota, then when you enable any feature that uses auto-expansion (such as a Snapshot Resource or CDP), those specific resources will not expand because the quota has been exceeded.

FalconStor Management Console


Question: Why am I getting an error while attempting to install the FalconStor Management Console?
Answer: If you experience an error installing the FalconStor Management Console, select the Install Windows Console link again and select Save Target or Save link in the browser. Then right-click the installation package name and select Properties. In the Program Compatibility tab, check Run this program as administrator.

Question: Now that I have installed the console, why will it not launch?
Answer: The console might not launch under the following conditions:
- System display settings are configured for 16 colors.
- The install path contains characters such as !, %, {, }.
- A "Font specified in font.properties not found" message is displayed. This indicates that the jdk font.properties file is not properly set for the Linux operating system. To fix this, change the font.properties file to get the correct symbol font name: replace all lines containing --symbol-medium-r-normal--*-%d-*-*-p-*adobe-fontspecific with --standard symbols lmedium-r-normal--*-%d-*-*-p-*-urw-fontspecific.
- The console must be run from a directory with write access. Otherwise, the host name information and message log file retrieved from the storage server cannot be saved to the local directory. As a result, the console will display event messages as numbers and console options will not be able to be saved.

Multipathing method: MPIO vs. MC/S


Question
When should I use Microsoft Multipath I/O (MPIO) vs. Multiple Connections per Session (MC/S) for multipathing?

Answer
While MPIO is usually the preferred method for multipathing, there are a number of things to consider when deciding whether to use MCS or Microsoft MPIO for multipathing.
- If your configuration uses a hardware iSCSI HBA, then Microsoft MPIO should be used.
- If your target does not support MCS, then Microsoft MPIO should be used. (Most iSCSI target arrays support Microsoft MPIO.)
- If you need to specify different load balance policies for different LUNs, then Microsoft MPIO should be used.
Reasons for using MCS include the following:
- If your target does support MCS and you are using the Microsoft software initiator driver, then MCS is the best option. There may be some exceptions where you desire a consistent management interface among multipathing solutions and already have other Microsoft MPIO solutions installed, which may make Microsoft MPIO an alternate choice in this configuration.
- If you are using Windows XP or Windows Vista, MCS is the only option since Microsoft MPIO is only available with Windows Server SKUs.

What are the advantages and disadvantages of using each method?

The advantage of using Microsoft MPIO is that MPIO is a tried and true method of multipathing that supports software and hardware iSCSI initiators (HBAs). MPIO also allows you to mix protocols (iSCSI/FC). In addition, each LUN can have its own load balance policy. The disadvantage is that an extra multipathing technology layer is required. The advantages of using MCS are that MCS is part of the iSCSI specification and no extra vendor multipathing software is required. The disadvantages of using MCS are that this method is not currently supported by iSCSI initiator HBAs or for MS software initiator boot. Another shortfall is that the load balance policy is set on a per-session basis; thus all LUNs in an iSCSI session share the same load balance policy.

What is the default MPIO timeout and how do I change it?

The default MPIO timeout is 20 seconds. This is usually enough time, but there are certain situations where you may want to increase the timeout value. For example, when configuring multipathing with MPIO in a Windows 2008 environment, you may need additional time to enable Windows 2008 to survive a failover taking more than 20 seconds. To increase the timeout value, you will need to modify PDORemovePeriod, the setting that controls the amount of time (in seconds) that the multipath pseudo-LUN will continue to remain in system memory, even after losing all paths to the device. To increase the timeout, follow the steps below:

// Increase the disk timeout from the default 60 seconds to 5 minutes
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue: 300

// Increase the iSCSI timeout from the default 60 seconds to 5 minutes
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-xxxxxxxxxxxxxxxx}\xxxx\Parameters\MaxRequestHoldTime: 300

// Due to the increased disk timeout, enable NOPOut to detect connection failures early
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-xxxxxxxxxxxxxxxx}\xxxx\Parameters\EnableNOPOut: 1

BootIP
Question: Why does Windows keep logging off during remote boot?
Answer: This happens when you remote boot the mirror disk and keep the local disk inside. Try to re-protect (or re-sync) the local disk.

Question: How do I confirm if the system has booted remotely?
Answer: Go to Disk Management and right-click on Disk 0. It should show that the disk is a FalconStor IPStor disk, not the local disk.

Question: Can I change the IP address after remote boot?
Answer: No, you cannot change the IP address because iSCSI needs the original IP address for communication.

Question: Why do I sometimes see a blue screen after a remote boot (error code 0x0000007B)?
Answer: Check that the boot sequence is correct by typing the following command on the sample computer before remotely booting: #iscsibcg /verify /fix

Question: Why do the following messages display on the screen during a PXE boot and not allow a boot to the iSCSI disk?
Registered as BIOS driver 0x80
Booting from BIOS drive
Boot failed
Unregistering BIOS drive 0x80
No more network devices
Answer: These messages show that the system cannot boot the disk successfully. Check to make sure you are using the boot disk, make sure the mirror disk has been synced completely, and make sure you have protected the correct system disk or system partition.

Question: Is iSCSI boot supported in a UEFI environment?
Answer: No, this version does not support iSCSI boot in a Unified Extensible Firmware Interface (UEFI) BIOS environment.

SCSI adapters and devices


Since CDP and NSS rely on SCSI devices for storage, it is often helpful to be able to discover the state of the SCSI adapters and devices locally attached to the storage server. Verification requires that the administrator be logged into the storage server. Refer to Log into the CDP/NSS appliance.
Question: How do I verify the healthy state of the SCSI adapters now that the storage server is up?
Answer: If you do not see the appropriate driver for your SCSI adapter, it may not have been loaded properly or it may have been unloaded. Once it is determined that the SCSI adapter and driver are properly installed, the next step is to check whether the individual SCSI devices are accessible on the SCSI bus. To check which devices are recognized by the storage server, execute the following command on a CDP/NSS server: cat /proc/scsi/scsi. This command displays the SCSI devices attached to the storage server. For example, you will see something similar to the following:
[0:0:0:0] disk 3ware Logical Disk 0 1.2 /dev/sda
[0:0:1:0] disk 3ware Logical Disk 1 1.2 /dev/sdb
[2:0:1:0] disk IBM-PSG ST318203FC !# B324
[2:0:2:0] disk IBM-PSG ST318203FC !# B324
[2:0:3:0] disk IBM-PSG ST318304FC !# B335

Question: How do I replace a physical disk?
Answer: If the operating system cannot see a device, it may not have been installed properly or it may have been replaced while the storage server was running. If the server was not rebooted, Linux will not recognize the drive because it does not have plug-and-play capabilities. Remove the SCSI device from the Linux OS by executing:
echo "scsi remove-single-device x x x x" > /proc/scsi/scsi
(where x x x x stands for the A C S L numbers: Adapter, Channel, SCSI, and LUN number). Then execute the following to re-add the device so that Linux can recognize the drive:
echo "scsi add-single-device x x x x" > /proc/scsi/scsi

Question: How do I ensure that the SCSI drivers are loaded on a Linux SAN Client?
Answer: To ensure that the SCSI drivers are loaded on a Linux machine, type the following command for Turbo Linux: modprobe <SCSI card name> (for example: modprobe aic7xxx). For Caldera Open Linux, type: insmod scsi_mod

Question
What if I have LUNs greater than zero?

Answer
By default, Linux will not automatically discover devices with LUNs greater than zero. You must either manually add these devices or edit your modprobe.conf file to automatically scan them. To do this:
1. Type the following command to edit the modprobe.conf file: vi /etc/modprobe.conf
2. If necessary, add the following line to modprobe.conf: options scsi_mod max_luns=x, where x is the LUN number that you want the server to scan up to.
3. After exiting from vi, make a new image file: mkinitrd newimage.img X, where X is the kernel version (such as 2.4.21-IPStor) and newimage can be any name.
4. In /boot/grub/grub.conf, make a new entry to point to the new .img file you created in the step above and make this your default.
5. Save and close the file.
6. Reboot the machine so that the scan will take place.
7. Verify that all LUNs have been scanned by typing: cat /proc/scsi/scsi

Failover
Question
How can I verify the health status of a server in a failover configuration?

Answer
You can verify the health status of a server by connecting to the server via SSH using the heartbeat address, and running the sms command.

Fibre Channel target mode and storage


Question
What is VSA?

Answer
Some storage devices (such as the EMC Symmetrix storage controller and older HP storage) use VSA (Volume Set Addressing) mode. This addressing method is used primarily for addressing virtual buses, targets, and LUNs. If your client requires VSA to access a broader range of LUNs, you must enable it for the client. This can be done via the Fibre Channel Client Properties screen by selecting the Options tab. Incorrect use of VSA can lead to problems seeing the LUNs (disks) at the HBA level. If the HBA cannot see the disks, the storage server is not able to access and manage them. This is true both ways: (1) the storage requires VSA, but it is not enabled, and (2) the storage does not use VSA, but it is enabled. For upstream, you can set VSA for the client at the time it is created, or you can modify the setting afterwards by right-clicking on the client.

What is Persistent binding?

Persistent binding is automatically configured for all QLogic HBAs connected to storage device targets upon the discovery of the device (via a Console physical device rescan with the Discover New Devices option enabled). However, persistent binding will not be SET until the HBA is reloaded. You can reload HBAs using the IPStor start hba or IPStor restart all commands. The console will display the Persistent Binding Tab for QLogic Fibre Channel HBAs even if the HBAs were not loaded using those commands. In addition, you will not be able to enable Fibre Channel target mode on those HBAs. To resolve this, load the driver using the IPStor start hba or IPStor restart all commands.

How can I determine the WWPN of my Client?

There are a couple of methods to determine the WWPN of your clients:
1. Most Fibre Channel switches allow administration of the switch through an Ethernet port. These administration applications have utilities to reveal or allow you to change the configuration of each port on the switch, zoning configurations, the WWPNs of connected Fibre Channel cards, and the current status of each connection. You can use this utility to view the WWPN of each client connected to the switch.
2. When starting up your client, there is usually a point at which you can access the BIOS of your Fibre Channel card. The WWPN can be found there.
3. The first time a new client connects to the storage server, the following message appears on the server screen: FSQLtgt: New Client WWPN Found: 21 00 00 e0 8b 43 23 52

Question
Is ALUA supported?

Answer
Yes, Asymmetric Logical Unit Access (ALUA) is fully supported for both targets and initiators. Upstream: ALUA support is included for QLogic Fibre Channel and iSCSI targets with implicit mode only. Downstream: ALUA support is included for QLogic Fibre Channel and iSCSI initiators with explicit or implicit modes.

Power control option


Question
What causes a failure to communicate with a power control device?

Answer
Failure to communicate with your power control device may be caused by one of the following reasons:
- Authentication error (password and/or username is incorrect)
- Network connectivity issue
- Server power cable is unplugged
- Wrong information used for the power control device, such as an incorrect IP address

Replication
Question
Is replication supported between version 7.00 and earlier versions of CDP/ NSS?

Answer
The following replication matrix will help you determine which versions are supported for replication.

iSCSI Downstream Configuration


Question: Does the CDP/NSS software iSCSI initiator have a target limitation? Will I have the same limitation when using a hardware iSCSI initiator?
Answer: The CDP/NSS software iSCSI initiator has a limitation of 32 targets. When using a hardware iSCSI initiator, you will not have this limitation.

Question: Why is there a 32-target limitation when using the software iSCSI initiator?
Answer: The reason for this limitation is that when the software iSCSI initiator logs into a target, it creates a new SCSI host per iSCSI target to which it is connected.

Question: How can I get more information on properly configuring my CDP/NSS appliance to use dedicated iSCSI downstream storage using an iSCSI initiator (software HBA)?
Answer: Refer to Configuring iSCSI software initiator for details regarding the requirements and procedures needed to properly configure a CDP/NSS device to use dedicated iSCSI downstream storage using an iSCSI initiator (software HBA).

Question: How can I get more information on properly configuring my CDP/NSS appliance to use dedicated iSCSI downstream storage using a hardware HBA?
Answer: Refer to Configuring iSCSI hardware HBA for details regarding the requirements and procedures needed to properly configure a CDP/NSS device to use dedicated iSCSI downstream storage with a hardware iSCSI HBA.

Question: Which HBAs can I use on my NSS or CDP appliance?
Answer: Only QLogic iSCSI HBAs are currently supported on a CDP or NSS appliance.

Question: What utility can I use to configure an iSCSI HBA on my NSS or CDP appliance and where can I get it?
Answer: The QLogic "iscli" (SANSurfer CLI) utility is provided on the appliance to configure the iSCSI HBAs. The QLogic SANSurfer CLI is located at /opt/QLogic_Corporation/SANsurferiCLI/. To configure the HBA, run iscli from that path as shown below:
[root@demonstration ~]# /opt/QLogic_Corporation/SANsurferiCLI/iscli

Question: Does the hardware initiator require any special configuration for multipath support?
Answer: The hardware initiator does not require any special configuration for multipath support. The only configuration required is to connect multiple HBA ports to a downstream iSCSI target. The driver used for the QLogic iSCSI HBA is specially handled by CDP/NSS for multipath.

Protecting data in a Windows environment


Question
How do I protect my data in a windows environment?

Answer
FalconStor DiskSafe for Windows protects Windows application servers, desktops, and laptops (referred to as hosts) by copying the local disks or partitions to a mirror: another local disk or a remote virtual disk managed by a storage server application such as CDP. Refer to the DiskSafe User Guide for further information.

Protecting data in a Linux environment


Question
How do I protect my data in a Linux environment?

Answer
FalconStor DiskSafe for Linux is a disk mirroring backup and recovery solution designed to protect data from disaster or accidental loss on the Linux platform. Local disks and remote virtual disks managed by a storage server application such as CDP can be used for protection. However, features such as snapshots are available only when the mirror disk is a virtual CDPVA disk. Linux LVM logical volume protection is also supported by DiskSafe. Refer to the DiskSafe User Guide for further information.

Protecting data in an AIX environment (updated May 2012)


Question
How do I protect my data in an AIX environment?

Answer
FalconStor provides AIX scripts to simplify and automate the protection and recovery process of logical volumes on AIX platforms. Once you have prepared the AIX host machine, you can:
- Install the AIX FalconStor Disk ODM Fileset
- Install the AIX SAN Client and Filesystem Agent
- Use LVM for protection and recovery

Protecting data in an HP-UX environment (updated May 2012)


Question
How do I protect my servers/data in an HP-UX environment?

Answer
Protecting your servers in an HP-UX environment requires that you establish a mirror relationship between the HP-UX (LVM and VxVM) Volume Group's logical volumes and the mirror LUNs from the CDP/NSS appliance. To protect your data:
- Install the HP-UX file system Snapshot Agent.
- Confirm that the package installation was successful by listing the system-installed packages: swlist | grep VxFSagent
- Authenticate the client to the storage server by running ipstorclient monitor.
- Use LVM for protection and recovery.

Logical resources
The following table describes the icons that are used to show the status of logical resources:
Warning icon: indicates a warning, such as:
- Virtual device offline (or has incomplete segments)
- Mirror is out of sync
- Mirror is suspended
- TimeMark rollback failed
- Replication failed
- One or more supporting resources is not accessible (SafeCache, CDP, Snapshot resource, HotZone, etc.)

Alert icon: indicates an alert, such as:
- Replica in disaster recovery state (after forcing a replication reversal)
- Cross-mirror needs to be repaired on the virtual appliance
- Primary replica is no longer valid as a replica
- Invalid replica

If you see one of these icons, check through the tabs to determine the problem.

Network connectivity
Storage servers, clients, and consoles are all attached to one another through an Ethernet network. In order for all of the components to work properly together, their network connectivity should be configured properly. To test connectivity between machines (servers, clients, and consoles), there are several things that can be done. This example shows a user testing connectivity from a client or console to a server named knox. To test connectivity from one machine to the storage server, you can execute the ping utility from a command line prompt. For example, if your storage server is named knox, execute: ping knox

If the storage server is running and attached to the network, you should receive a response like this:

Pinging knox [10.1.1.99] with 32 bytes of data:
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Ping statistics for 10.1.1.99:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

If the server is not available, you may get a response like this:

Pinging knox [10.1.1.99] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Ping statistics for 10.1.1.99:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

This means that either the machine is not running or it is not properly attached to the network.

If you get a response like this:
Unknown host knox.
your machine cannot find the storage server by name. There could be two reasons for this. First, the storage server may not be running or connected to the network, and therefore has not registered itself with the name service on your network. Second, the storage server may be running but not known by name, possibly because the name service, such as DNS, is not running, or your machine is not referring to the proper name service. Refer to your network's reference material on how to configure your network's name service.

If your storage server is available, you can execute the following command on the server to verify that the CDP/NSS ports are both up:
netstat -a | more
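To narrow the netstat output down to just the CDP/NSS ports discussed here, a filtered form such as the following can be used (a convenience sketch, not a required step):

# show only lines mentioning the CDP/NSS ports 11576 and 11577
netstat -an | grep -E '11576|11577'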
Both ports 11576 and 11577 should be listed. In addition, port 11576 should be listening.

Linux SAN Client

You may see the following message when executing ./IPStorclient start or ./IPStorclient restart if the Linux Client cannot locate the storage server on the network:
Creating IPStor Client Device [FAILED] Failed to connect to Storage Server 0, -1

To resolve, restart the services on both the storage server and the Linux Client.

Jumbo frames support


To determine whether a machine supports jumbo frames, use the ping utility from a command line prompt to ping with a large packet size. If your storage server is named knox, execute one of the following commands:
On Linux systems: ping -s 8000 knox
On Windows 2000 systems: ping -l 8000 knox
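Note that ping may allow the large packet to be fragmented, which can mask a jumbo-frame problem. If you want to force an unfragmented test, variants such as the following can be used; these are standard ping options rather than anything specific to CDP/NSS, so treat them as a suggested sketch:

# Linux: prohibit fragmentation while sending an 8000-byte payload
ping -M do -s 8000 knox
# Windows: set the Don't Fragment flag on the same size payload
ping -f -l 8000 knox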

Diagnosing client connectivity issues


Problems connecting clients to their SAN resources may occur due to several causes, including network configuration and storage server configuration.
- Check the General Info tab for the Client in the Console to see if the Client has been authenticated. In order for a Client to be able to access storage, you must establish a trusted relationship between the Client and Server and you must assign storage resources to the Client.
- If you make any Client configuration changes in the Console, you must restart the Client in order for the changes to take effect.
- Clients may not achieve the maximum throughput when writing over gigabit. If you are noticing slower than expected speeds when writing over gigabit, you can do the following:
Turn on TCP window scaling on the storage server:
/proc/sys/net/ipv4/tcp_window_scaling

1 is on; 0 is off.
On Windows, go to Run and type regedit. Add the following:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"Tcp1323Opts"=dword:00000001
"GlobalMaxTcpWindowSize"=dword:01d00000 "TcpWindowSize"=dword:01d00000

To see if the storage server client has connectivity to the storage server over the Ethernet network, refer to Network connectivity.

Windows Client
Issue Cause/Suggested Action

The SAN Client hangs when the storage containing its virtual disk goes offline.

To prevent the CDP/NSS Client from hanging when there is a storage problem on the storage server, change the default I/O error response sense key from medium error to unit not ready by running the following command:
echo "IPStor set-parameter default-ioerror-sense-key 2 4 0" > /proc/IPStor/IPStor

Windows client debug information


You can configure the amount of detail about the storage server Client's activity and performance that will be written to the Windows Event Viewer. In addition, you can enable a system tracer. When enabled, the trace information will be logged to a file called FSNTrace.log located in the \FalconStor\IPStor\Logs directory.
1. To filter the events and/or configure the tracer, select Tools --> Options.

2. To filter the events being written to the Event Viewer, select one of the levels in the Log Level field. Note that regardless of which level you choose, there are several events that will always be written to the Event Viewer (driver not loaded, service failed to start, service started, service stopped). Five levels are available for use:
   Off -- No activity will be recorded.
   Errors only -- Only errors will be recorded.
   Brief -- Errors and warnings will be recorded.
   Detailed (Default) -- Errors, warnings, and informational messages will be recorded.
   Trace -- This is the highest level of activity tracing. Debugging messages will be written to the trace log. In addition, all errors, warnings, and informational messages will be recorded in the Event Viewer.
3. If you select the Trace level, specify which portions of the storage server Client will be traced.
Warning: Adjusting these parameters can impact system performance. They should not be adjusted unless directed to do so by FalconStor technical support.

Clients with iSCSI protocol


Issue: (iSCSI protocol) After rebooting, the client loses its file shares.

Cause/Suggested Action: This is a timing issue. To reconnect to the shares, open a command prompt and type the following four commands (see the sample batch file after this table):
net stop browser
net stop server
net start server
net start browser
You may want to create a batch file to do this.

Issue: (iSCSI protocol) Intermittent iSCSI disconnections on the client, or the client cannot see the disk.

Cause/Suggested Action: The Microsoft iSCSI initiator has a default retry period of 60 seconds. Changing it to 300 seconds will sustain the disk for five minutes during network disconnection events, meaning applications will not be disrupted by temporary network problems (such as during a failover or recovery). This setting is changed through the registry.
1. Go to Start --> Run and type regedit.
2. Find the following registry key: HKEY_LOCAL_MACHINE\system\CurrentControlSet\control\class\4D6E97B-xxxxxxxxx\<iscsi adapter interface>\parameters\ where <iscsi adapter interface> corresponds to the adapter instance, such as 0000, 0001, and so on.
3. Right-click Parameters and select Export to create a backup of the parameter values.
4. Double-click MaxRequestHoldTime.
5. Pick Decimal and change the Value data to 300.
6. Click OK.
7. Reboot Windows for the change to take effect.

Issue: The Microsoft iSCSI initiator fails to connect to a target.

Cause/Suggested Action: The Microsoft iSCSI initiator can only connect to an iSCSI target if the target name is no longer than 221 characters. It will fail to connect if the target name is longer than this.
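For the first issue above, a sample batch file is sketched below. The file name is hypothetical; the four commands are the same ones listed in the table:

@echo off
rem reconnect-shares.bat -- restart the Browser and Server services so the
rem file shares come back once the iSCSI disk is available again.
net stop browser
net stop server
net start server
net start browser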

Clients with Fibre Channel protocol


Issue: An initiator times out with the following message: FStgt: SCSI command aborted.

Cause/Suggested Action: Certain Fibre Channel hosts and/or HBAs are not as aggressive as others, which can affect the balancing of each host's pending commands. We recommend that the value of the Execution Throttle (QLogic) or Queue Depth (Emulex) for all client initiators using the same target(s) not exceed 240. If an initiator's Execution Throttle or Queue Depth is configured too high, it can result in slow response time from the storage, subsequently causing the initiator to time out. To resolve this issue, decrease the value of the initiator's Execution Throttle or Queue Depth.

Linux SAN Client


Issue: You see the following message when viewing the Client's current configuration: "On command 12 data received 36 is not equal to data expected 256."

Cause/Suggested Action: This is an informational message and can be ignored.

Issue: You see the following message when executing ./IPStorclient start or ./IPStorclient restart:
Creating IPStor Client Device [FAILED] Failed to connect to Storage Server 0, -1

Cause/Suggested Action: The SAN Client cannot locate the storage server on the network. To resolve this, restart the services on both the storage server and the Linux Client.

Issue: You see the following message continuously: SCSI: Aborting due to timeout: PID ######..

Cause/Suggested Action: You cannot un-assign devices while a Linux client is accessing those devices (i.e. mounted partitions).

Storage Server
Storage server X-ray
The X-ray feature is a diagnostic tool used by your Technical Support team to help solve system problems. Each X-ray contains technical information about your server, such as server messages and a snapshot of your server's current configuration and environment. You should not create an X-ray unless you are requested to do so by your Technical Support representative.

To create the X-ray file for multiple servers:
1. Right click on the Servers node in the console and select X-ray. A list of all of your storage servers displays.

2. Select the servers for which you would like to create an X-ray and click OK. If the server is not listed, click the Add button to add the server to the list.

3. Select the X-ray options based upon the discussion with your Technical Support representative and set the file name.

For example, one of the available options lets you filter out and include only storage server messages from the System Event Log.

To create an X-ray for an individual server:
1. Right click on the server in the console and select X-ray. The X-ray options screen displays.
2. Select the X-ray options based upon the discussion with your Technical Support representative and set the file name.

Failover
Issue: After restarting the failover servers, CDP/NSS starts but will not come back online.

Cause/Suggested Action: This can happen if there was a communication problem (i.e. a network error) between the servers, if both failover servers were down and then only one is brought up, or if failover was suspended and you restart one of the servers. To resolve this (see the command sketch after this table):
1. At a Linux command prompt, type sms to determine if the system is in a ready state.
2. As soon as it becomes ready, type the following: IPStorsm.sh recovery

Issue: After failover, when you connect to the newly promoted primary server, the failover status is not correct.

Cause/Suggested Action: You are connecting to the server with an IP address that is not part of the failover configuration, or with the heartbeat IP address, and you are seeing the status from before the failover. Use only the IP addresses that are configured as part of the failover configuration to connect to the server in the Console.

Issue: You need to perform recovery on the near-line server when it is set up as a failover pair and is in a failed state.

Cause/Suggested Action: When performing a near-line recovery and the near-line server is set up in a failover configuration, always add the first and second nodes of the failover set to the primary for recovery. Select the proper initiators for recovery and assign both nodes back to the primary for recovery. Note: There are cases where the server WWPN may not show up in the list because the machine may be down and the particular port is not logged into the switch. In this situation, you must know the complete WWPN of your recovery initiator(s). This is important in cases where you need to manually enter the WWPN into the recovery wizard to avoid any adverse effects during the recovery process.

Issue: A server failure and failover occurred and the standby initiator assumed the WWPN of the failed server's target WWPN, losing the connection to the near-line mirror disk.

Cause/Suggested Action: When adding a near-line mirror to a device, make sure you do not select initiators that have already been set as a standby initiator during failover setup. Doing so will cause loss of connection to the mirror disk in the event of failover, causing the mirror to break.

Issue: The failover partner fails to take over when the primary network connection associated with an iSCSI client fails.

Cause/Suggested Action: The partner server has a network connection failure on the same subnet, preventing it from successfully taking over.

Issue: The failover partner fails to take over when the primary server fails.

Cause/Suggested Action: You can manually trigger failover from the console by right-clicking on the server and selecting Failover --> Start take over <server name>.

Issue: The IP address for the primary server conflicts with the secondary server's IP address. For example, both Storage Cluster Interlink ports share the same IP address.

Cause/Suggested Action: The primary server's network interface is using an IP address that is being used by the same interface on the partner server. Check the IP addresses being used by your servers. Modify the IP address on one of the servers by right-clicking on the server and selecting System Maintenance --> Configure Network. Refer to the Network configuration section for details.

Issue: The Storage Cluster Interlink connection is broken in a failover pair and both servers cannot be synchronized.

Cause/Suggested Action: Use the Sync Standby Devices menu item to manually synchronize both servers' metadata after the Storage Cluster Interlink is reconnected.

Issue: Failover has been suspended on server B and server A is restarted. After the server is restarted, it does not come up automatically, but is in a ready state.

Issue: Failover is suspended on server A and server B for maintenance and both servers are powered off. After the maintenance period both servers are restarted but they do not come up automatically.

Cause/Suggested Action: Attempt to log in via the console and bring up the server. Type YES in the popup window that displays to forcefully bring up the server.
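For the first issue in the table above (a failover server that starts but does not come back online), the recovery steps can be summarized as the following sketch, run at a Linux prompt on the storage server:

sms
IPStorsm.sh recovery

Run sms first and wait until it reports a ready state; only then run IPStorsm.sh recovery to bring the server back online.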

Cross-mirror failover on a virtual appliance


Issue: During cross-mirror configuration, the system reports a mismatch of physical disks on the two appliances even though you are sure that the configuration of the two appliances is exactly the same, including the ACSL, disk size, CPU, and memory.

Cause/Suggested Action: An iSCSI initiator must be installed on the storage server and is included on FalconStor cross-mirror appliances. If you are not using a FalconStor cross-mirror appliance, you must install the iSCSI initiator RPM from the Linux installation media before running the IPStorinstall installation script. The script will update the initiator.
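As a rough sketch of the preparation described above, assuming a Red Hat style system where the initiator package is named iscsi-initiator-utils (the package name and media path are assumptions; use the RPM actually supplied on your Linux installation media, then run the installation script from its own directory):

rpm -ivh /media/cdrom/iscsi-initiator-utils-<version>.rpm
./IPStorinstall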

Replication
Issue: Replication is set to use compression but replication fails. You may see error messages in the syslog like this:
__alloc_pages: 4-order allocation failed (gfp=0x20/0)
IOCORE1 expand_deltas, cannot allocate for 65536 bytes
IOCORE1 write_replica, error expanding deltas, return -EINVAL
IOCORE1 replica_write_parser, server returned -22
IOCORE1 replica_read_post, stopping because status is -22

Cause/Suggested Action: Compression requires 64K of contiguous memory. If the memory in the storage server is very fragmented, it will fail to allocate 64K. When this happens, replication will fail.

Issue: The primary server cannot connect to the replica server due to different TCP protocols. The primary server console event log will print messages like the following:
Mar 12 10:56:28 fs18626 kernel: MTCP2 ctrl hdr's signature is mismatch with 00000000, check the network protocol(MTCP2).

Cause/Suggested Action: Check your Replication MTCP version from the FalconStor Management Console by clicking on the server name and then the General tab. Both servers should have the same version of MTCP (either 1 or 2). If you see two different versions, contact Technical Support.

Issue: You perform a role reversal and get the following error: "The group for replica disks on the target server is no longer valid. Reversal cannot proceed."

Cause/Suggested Action: If you attempt to perform a role reversal on a resource that belongs to a non-replication group, the action will fail. To resolve this issue, remove the resource from the group and perform the role reversal.

Issue: Replication fails.

Cause/Suggested Action: Do not initiate a TimeMark copy while replication is in progress. Doing so will result in the failure of both processes.

Issue: Replication fails for a group member.

Cause/Suggested Action: If replication fails for one group member, it is skipped and replication continues for the rest of the group. In order for the group members that were skipped to have the same TimeMark on their replicas, you will need to remove them from the group, use the same TimeMark to replicate again, and then re-join the group.

TimeMark Snapshot
Issue: TimeMark rollback of a raw device fails.

Cause/Suggested Action: Do not initiate a TimeMark rollback to a raw device while data is currently being written to the raw device. The rollback will fail because the device will fail to open.

Issue: TimeMark copy fails.

Cause/Suggested Action: Do not initiate a TimeMark copy while replication is in progress. Doing so will result in the failure of both processes.

Snapshot Resource policy (Updated April 2012)


Issue: The Snapshot Resource threshold has been reached and the Snapshot Resource is not expanding.

Cause/Suggested Action: Set the policy to allow the Snapshot Resource to expand automatically, or add storage and manually expand the Snapshot Resource. If the Snapshot Resource policy is set to Preserve all TimeMarks, delete old TimeMarks in an orderly manner, starting with the earliest.

Issue: Snapshot Resource failure (i.e. a recoverable storage error) or system error (i.e. out of memory).

Cause/Suggested Action: Check for errors on the system/storage and repair them. If the Snapshot Resource has been set to offline due to the Snapshot Resource policy "Always maintain write operations", re-initialize the Snapshot Resource.

SafeCache
Issue: A physical resource has failed (for example, the disk was unplugged or removed) but the resources in the SafeCache group are not marked offline.

Cause/Suggested Action: If a physical resource has failed prior to the cache being flushed, the resources in the SafeCache group will not be marked offline until after a rescan has been performed.

Issue: The primary resource has failed and you attempt to disable the cache, but the cache is unable to flush data back to the primary resource. A dialog box displays N/A as the number of seconds needed to flush the cache.

Cause/Suggested Action: The cache is unable to flush the data due to a problem with data transfer from the cache to the primary resource.

Issue: The SafeCache resource has failed and you attempted to resume the SafeCache. The resume appears to be successful; however, the client cannot write to the virtual device.

Cause/Suggested Action: The client can only write to the virtual device when the SafeCache resource is restored. However, the SafeCache remains in a suspended state. You should suspend and resume the cache from the Console to return the cache status to normal and operational.

Command line interface


Issue: Failed to resolve storage server to a valid IP address. Error: 0x09022004

Cause/Suggested Action: The storage server hostname is not resolvable (it must be resolvable on both the client side and the server side). Add the server name to the hosts file to make it resolvable, or use the IP address in commands.
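For example, a hosts file entry that makes the sample server name used earlier in this chapter resolvable (on Linux the file is /etc/hosts; on Windows it is %SystemRoot%\System32\drivers\etc\hosts):

10.1.1.99    knox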

Service-Enabled Devices
Issue: An unassigned physical device does not show the Service-enabled Device option when you try to set the device category.

Cause/Suggested Action: If you see that the GUID for this device is fa1cff00..., the device cannot be supported as a Service-enabled Device. This is because the device does not support the mandatory SCSI page codes that are used to determine the actual GUID for the device.

Issue: A Service-enabled device (SED) is marked "Incomplete" on the primary server and the client that normally connects to the SED resource has lost access to the disk.

Cause/Suggested Action: In a failover configuration, you should not change the properties of an SED used by a primary server to "Unassigned" on the secondary server. If this occurs, do the following:
1. Delete the offline SAN resource.
2. Service-enable the physical disk.
3. Re-create the SAN resource.
4. Re-assign the SAN resource back to the client.

Error codes
The following table contains a description of some common error codes.

CDP/NSS Error Codes


Code
1005

Type
Error

Text
Out of kernel resources. Failed to get major number for the SCSI device. Failed to allocate memory.

Probable Cause
Too many Linux device drivers installed. Memory leak from various modules in the Linux OS, most likely from network adapter or other third party interface drivers. Another application has port UDP 11577 open. Physical device associated with primary virtual device may have had a failure. A mirror device has failed. The network might have a problem.

Suggested Action
Type cat /proc/devices for a list and see if any can be removed. Check knowledge base for known memory leak problems in various drivers.

1006

Error

1008

Error

Failed to set up the network connection due to an error in SANRPCListen. Primary virtual device [Device number] has failed and mirror is not in sync. Cannot perform swap operation. Secondary virtual device [Device #] has failed. Replication has failed for virtual device [Device number] -- [Device #]. Failed to connect to physical device [Device number]. Switching to alias to [ACSL]. Failed to start replication -replication is already in progress for virtual device [Device number]. Failed to start replication -replication control area not present on virtual device [Device number]. Failed to start replication -replication control area has failed for virtual device [Device number].

Confirm using netstat -a and then remove or reconfigure the offending application. Check physical device status and all connections, including cables and switches, and downstream driver log. Check drive, cable, and adapter. Check connectivity between primary and replica, including jumbo frame configuration if applicable. Check for a loose or damaged cable on the affected drive.

1016

Critical

1017 1022

Critical Error

1023

Error

An adapter/cable might have a problem.

1030

Error

Only one replication at a time per device is allowed. The configuration might not be valid.

Try later.

1031

Error

Check configuration, restart the console, or re-import the affected drive. Check the physical drive for the first virtual drive segment.

1032

Error

A drive may have failed.

Code
1033

Type
Error

Text
Failed to start replication -a snapshot is in progress for virtual device [Device number]. Replication failed for virtual device [Device number] -- the network transport returned error [Error]. Replication failed for virtual device [Device number] -- the local disk failed with error [Error]. Replication failed for virtual device [Device number] -- the local snapshot used up all of the reserved area. Replication failed for virtual device [Device number] -- the replica snapshot used up all of the reserved area. Replication failed for virtual device [Device number] -- the local server could not allocate memory. Replication failed for virtual device [Device number] -- the replica disk failed with error [Error]. Replication failed for virtual device [Device number] -- failed to set the replication time. A SCSI command terminated with a nonrecoverable error condition that was most likely caused by a flaw in the medium or an error in the recorded data. Check the system log for additional information.

Probable Cause
There is a raw device backup or snapshot copy in progress. The network might have a problem.

Suggested Action
Do not open raw devices or perform snapshot copy when replication is occurring. Check connectivity between the primary and replica, including jumbo frame configuration if applicable. Check all physical drives associated with the virtual drive.

1034

Warning

1035

Error

There is a drive failure.

1036

Error

Snapshot reserved area is insufficient on the primary server.

Add additional snapshot reserved area.

1037

Error

Snapshot reserved area is insufficient on the primary server.

Add additional snapshot reserved area.

1038

Error

Memory is low.

Check system memory usage.

1039

Error

Replication failed because of the indicated error. The configuration might not be valid.

Based on the error, remove the cause, if possible.

1040

Error

Check the ipstor.dat file on the replica server.

1043

Error

This is most likely caused by a flaw in the media or an error in the recorded data.

Check the system log for additional information. Contact the hardware manufacturer for a diagnostic procedure.

Code
1044

Type
Error

Text
A SCSI command terminated with a nonrecoverable hardware failure (for example, controller failure, device failure, parity error, etc.). Check the system log for additional information. Replica rescan for differences has failed for virtual device [Device number] -- the local device failed with error. Replica rescan for differences has failed for virtual device [Device number] -- the replica device failed with error. Replica rescan for differences has failed for virtual device [Device number] -- the network transport returned error. Replica rescan for differences cannot proceed -- replication control area not present on virtual device [Device #]. Replica rescan for differences cannot proceed -- replication control area has failed for virtual device [Device #]. Replica rescan for differences cannot proceed -- a merge is in progress for virtual device [Device number]. Replica rescan for differences failed for virtual device [Device number] -replica status returned.

Probable Cause
This is a general I/O error that is not media related. This can be caused by a number of potential failures, including controller failure, device failure, parity error, etc. There is a drive failure.

Suggested Action
Check the system log for additional information. Contact the hardware manufacturer for a diagnostic procedure.

1046

Error

Check all physical drives associated with the virtual drive.

1047

Error

There is a drive failure.

Check all physical drives associated with the virtual drive.

1048

Error

Network problem.

Check connectivity between primary and replica, including jumbo frame configuration if applicable. Check configuration, restart GUI Console, or re-import the affected drive.

1049

Error

The configuration might not be valid.

1050

Error

There is a drive failure.

Check the physical drive for the first virtual drive segment.

1051

Error

A merge is occurring on the replica server.

No action is required. A retry will be performed when the retry delay expires.

1052

Error

The configuration might not be valid.

Check the ipstor.dat file on the replica server.

Code
1053

Type
Error

Text
Replica rescan for differences cannot proceed -- replication is already in progress for virtual device [Device #]. Replication cannot proceed -- a merge is in progress for virtual device [Device number]. Replication failed for virtual device [Device number] -- replica status returned [Error]. Replication role reversal failed for virtual device [Device number] -- the error code is [Error]. Replication failed for virtual device [Device number] -- start replication returned [Error]. Rescan replica failed for virtual device [Device number] -- start scan returned [Error] I/O path failure detected. Alternate path will be used. Failed path (A.C.S.L): [ACSL]; New path (A.C.S.L): [ACSL]. Replication cannot proceed -- snapshot resource area does not exist for remote virtual device [Device ID]. Replication cannot proceed -- unable to connect to replica server [Server name].

Probable Cause
Only one replication is allowed at a time for a device.

Suggested Action
Try again later.

1054

Error

A merge is occurring on source.

No action is required. A retry will be performed when the retry delay expires. Check the ipstor.dat file on the replica server.

1055

Error

The configuration might not be valid.

1056

Error

The configuration might not be valid for replication role reversal. The configuration might not be valid.

Check the ipstor.dat file on the replica server.

1059

Error

Check the ipstor.dat file on the replica server.

1060

Error

The configuration might not be valid.

Check the ipstor.dat file on the replica server.

1061

Critical

An alias is in use due to primary path failure.

Check the primary path from the server to the physical device.

1066

Error

The snapshot resource for the replica is no longer there. It may have been removed accidentally. Either the network connection is down or the replica server is down.

From the Console, log into the replica server and check the state of the snapshot resource for the replica. If it was deleted accidentally, restore it. From the Console, log into the replica server and check the state of the server at the replica site. Determine and correct either the network or server problem.

1067

Error

Code
1068

Type
Error

Text
Replication cannot proceed -- group [Group name] is corrupt. Replication cannot proceed -- virtual device [Device ID] no longer has a replica or the replica device does not exist.

Probable Cause
The group configuration is not consistent or is missing. The designated replica device is no longer on the replica server. Most likely the replica drive was either promoted or deleted while the primary server was down.

Suggested Action
Try to restart server modules or recreate the group. Check the replica server first. If the replica exists, the configuration may be corrupted and you need to call Technical Support. If the drive was promoted or deleted, you have to remove replication from the primary and reconfigure. If the drive was deleted, create a new replica. If the drive was promoted, you can assign it back as a replica but you have to determine if new data was written to the drive while it was promoted, and decide if you want to preserve the data. Once assigned back as the replica, it will be resynchronized with the original primary drive. Wait for the process to complete or change the replication schedule.

1069

Error

1070

Error

Replication cannot proceed -- replication is already in progress for group [Group name].

The snapshot group is in the middle of replication already. Only one replication operation can be running at a given time for each group. The replica was not valid when replication was triggered. The replica might have been removed without the primary. One of the replica drives in the snapshot group is missing. Replication must be able to be performed for the entire snapshot group or it will not proceed.

1071

Error

Replication cannot proceed -- Remote vid %1 does not exist or is not a replica device.

Remove the replication setup from the primary and reconfigure the replication.

1072

Error

Replication cannot proceed -- missing a remote replica device in group [Group name].

See 1069.

Code
1073

Type
Error

Text
Replication cannot proceed -- unable to open configuration file.

Probable Cause
Failed to open the configuration file to get replication configuration, possibly because the system was busy. Memory allocation for replication information failed possibly because the system was busy. Replication failed with the listed error. The Snapshot group in the source server has different virtual drives than the replication server. This may be due to an altered configuration when a servers was down. One or more virtual drives in the group failed during replication.

Suggested Action
Check system disk status. Check system status.

1074

Error

Replication cannot proceed -- unable to allocate memory. Replication cannot proceed -- unexpected error %1. Replication cannot proceed -- mismatch between our snapshot group [Group name] and replica server.

Check system status.

1075

Error

Check system status.

1078

Error

This is a highly unusual situation. The cleanest way to fix this is to remove replication for the devices in the group, remove the group, recreate the group, and configure replication again.

1079

Error

Replication for group [Group name] has failed due to error on virtual device [Device ID].

Check the log to determine the nature of the error. In case of a physical disk failure, the disk must be replaced and data must be restored from the backup. In case of a communication failure, replication will continue when the problem is resolved and the schedule starts again. Check for snapshot resource issues, check the log for other errors, and check the maximum number of TimeMarks configured.

1080

Error

Replication cannot proceed -- failed to create TimeMark on virtual device [Device ID].

The replication process was not able to create a snapshot. This may be due to various causes, including low system memory, low or improper configuration parameters for automatic snapshot resources, or depleted physical storage.

Code
1081

Type
Error

Text
Replication cannot proceed -- failed to create common TimeMark for group [Group name]. Replication for virtual device [Device ID] has manually aborted by user. Replication for group [Group name] has manually aborted by user. A SCSI command terminated with a recovered error condition. Check the system log for additional information. HotZone for virtual device [Device ID] has been autodisabled due to an error. Primary virtual device [Device number]. has failed, swap to secondary. Rescan replica failed for virtual device %1 -- the network transport returned error %2. Verify the Replication features you are using are supported on both servers. Local server version is %3. Replication for virtual device %1 has set to delta mode -- %2.

Probable Cause
One of the virtual drives in the group failed to create snapshot for replication. See 1080 for details. Replication was stopped by the user. Replication was stopped by the user. This is most likely caused by a flaw in the media or an error in the recorded data. Physical device failure.

Suggested Action
See 1080.

1082

Warning

None.

1083

Warning

None.

1084

Warning

Check the system log for additional information. Contact the hardware manufacturer for a diagnostic procedure. Check physical devices associated with HotZone. Check physical device.

1085

Error

1087

Error

The mirrored device had a physical error so a mirror swap occurred. The replication protocols on the source and replica servers mismatch.

1089

Error

Make sure you are using compatible builds on both source and replica servers.

1096

Warning

Replication for the virtual device switched to delta mode due to an operation triggered by the user, such as a configuration change or replica rescan, or due to a replication I/O error, out of space, out of memory condition, etc.

Check device status for I/O error or disk space usage error. Increase memory or reduce the concurrent activities for memory or other type of errors.

Code
1097

Type
Warning

Text
Replication for group %1 has set to delta mode -%2. {Affected members: %3 }

Probable Cause
Replication for the virtual device switched to delta mode due to an operation triggered by the user, such as a configuration change or replica rescan, or due to a replication I/O error, out of space, out of memory condition, etc. Failed to get the delta of the resource to replicate possibly due to too many pending processes. Failed to get the delta of the resource to replicate due to the resource being offline.

Suggested Action
Check device status of the group members for I/O error, disk space usage error. Increase memory or reduce the concurrent activities for memory or other type of errors.

1098

Error

Replication cannot proceed -- Failed to get virtual device delta information for virtual device %1. Replication cannot proceed -- Failed to get virtual device delta information for virtual device %1 due to device offline. Replication cannot proceed -- Failed to communicate with replica server to trigger replication for virtual device %1.

Retry later.

1099

Error

Check device status and bring the device back online.

1100

Error

Failed to connect to the replica server or exchange replication information with the replica server to start replication for the virtual device. Failed to connect to the replica server or exchange replication information with the replica server to start replication for the group. Failed to update virtual device metadata to start replication possibly due to a device access error or the system being busy.

Check connectivity between the primary and replica servers. Check if the replica server is busy. Readjust the schedule to avoid too many operations from occurring at the same time. Check connectivity between the primary and replica servers. Check if the replica server is busy. Readjust the schedule to avoid too many operations from occurring at the same time. Check virtual device status. Check system status.

1101

Error

Replication cannot proceed -- Failed to communicate with replica server to trigger replication for group %1.

1102

Error

Replication cannot proceed -- Failed to update virtual device meta data for virtual device %1.

Code
1103

Type
Error

Text
Replication cannot proceed -- Failed to initiate replication for virtual device %1 due to server busy. Replication cannot proceed -- Failed to initiate replication for group %1 due to server busy. Replication failed for virtual device %1

Probable Cause
Failed to start replication for virtual device because the system was busy. Failed to start replication for group because the system was busy. The network might have a problem.

Suggested Action
Check system status.

1104

Error

Check system status.

1108

Error

Check connectivity between the primary and replica, including jumbo frame configuration if applicable. Make sure replication is enabled, the target server is running version 6.1 or higher, no and TimeView of that TimeMark exists or is mounted.

1111

Error

TimeView data replication cannot proceed -- Failed to initiate replication for virtual device %1, TimeMark %2 due to %3.

It might be due to one of the following reasons: replication is not enabled; TimeView replication is not supported on the replica server version; TimeView of the TimeMark is mounted. The ioctl call to start TimeView replication failed due to one of the following reasons: TimeMark might have been deleted at the time TimeView replication is triggered; virtual device might be offline; a replication is already in progress; network connection is lost; a memory allocation failure occurred; snapshot resource is offline; other TimeMark operations are in progress.

1113

Error

TimeView data replication cannot proceed -- Failed to start replication for virtual device %1, TimeMark %2 due to %3.

Based on the reason stated in the event message, check the status of the virtual device and snapshot resources.

Code
1123

Type
Error

Text
Stopping TimeView data replication cannot proceed -- Failed to stop for virtual device %1, TimeMark %2 due to %3.

Probable Cause
The ioctl call to stop TimeView replication failed due to one of the following reasons: TimeMark might have been deleted; the replication is no longer in progress; virtual device is offline; network connection is lost; a memory allocation failure occurred; snapshot resource is offline; other TimeMark operations are in progress. Replica server or replica device may be in an unhealthy state. A network error is reported.

Suggested Action
Based on the reason stated in the event message, check the status of the virtual device and snapshot resources

1131

Error

TimeView Replication failed for virtual device %1 -- the replica device failed with error %2. TimeView Replication failed for virtual device %1 - the network transport returned error %2. TimeView Replication failed for virtual device %1 - the local disk failed with error %2. TimeView Replication failed for virtual device %1 - start replication returned %2

Check replica server and disk status

1132

Error

Check the network status between two servers.

1133

Error

Local physical device may be busy or snapshot resource area might be offline. A replication may already be in progress, TimeView status might not be OK, or the system memory might be low. The replication protocols on the source and replica servers mismatch.

Check the local disks on the server.

1134

Error

Based on the returned error, check the source server.

1136

Error

TimeView Replication failed for virtual device %1 - the network transport returned error %2, Verify the Replication features you are using are supported on both servers. Local server version is %3. TimeView Replication failed for virtual device %1 error code is %2

Make sure you are using compatible builds on both source and replica servers.

1137

Error

A replication error is reported.

Based on the returned error, check the servers, devices, and the network.

Code
1201

Type
Warning

Text
Kernel memory is low. Add more memory to the system if possible. Restart the host if possible. Failed to trespass path to [Path]. Failed to add path group. ACSL: [Path]. Failed to activate path: [Path]. Detected critical path failure . Path [Path] will be removed. Path [Path] does not belong to active path group. Rescanning the physical adapters is recommended to correct the configuration. No valid path is available for device [Device ID]. No valid group is available. No active path group can be found. [GUID]. Failed to add path to storage device: [Path]. CLARiiON storage path is trespassing. T300 storage path is trespassing. HSG80 storage path is trespassing.

Probable Cause
Too many processes for the current resources.

Suggested Action
Add more memory to the system if possible. Restart the host if possible. Check storage status and path connections. Check storage status and path connections. Check storage status and path connections. Check storage status and path connections. Use only active paths.

1203 1204 1206 1207

Error Error Error Error

All downstream storage paths had failures. Downstream storage path failure. Downstream storage path failure. Downstream storage path failure. Tried to use a nonactive path to access storage. There may be a problem with the configuration. Downstream storage path failure. Unexpected path configuration. Storage connectivity failure. Downstream storage path failure. Downstream storage path failure or manual trespass. Downstream storage path failure or manual trespass. Downstream storage path failure or manual trespass.

1208

Warning

1209

Warning

Rescan the physical adapters.

1210 1211 1212

Warning Warning Warning

Check storage status and path connections. Check path group configuration. Check cables, switches and storage system to determine cause. Check storage status and path connections. Check storage status and path connections. Check storage status and path connections. Check storage status and path connections.

1214 1215

Error Warning

1216

Warning

1217

Warning

Code
1218

Type
Warning

Text
MSA1000 storage path is trespassing. TimeMark [TimeMark] cannot be created during disk rollback. Time-stamp: [Time]. Snapshot resource %1 became offline due to storage problem or memory shortage.

Probable Cause
Downstream storage path failure or manual trespass. Disk rollback is in progress.

Suggested Action
Check storage status and path connections. Wait until disk rollback is complete.

1230

Error

1231

Error

Physical storage of the snapshot resource may have a failure or server memory is low.

Check storage and server status to remove the failure condition. If the snapshot resource policy is set to Always maintain write operations, you will lose all snapshots on that resource and need to reinitialize the snapshot resource. If the policy is set to Preserve-all/recent-TimeMarks, you might need to reinitialize the snapshot resource if it does not automatically come back online once the failure condition is removed. If the TimeMark reclamation process cancellation was not user-initiated, check the snapshot resource status and available space. If automatic expansion of the snapshot resource is enabled and resource space is low, perform a manual expansion prior to the next TimeMark reclamation. Check storage and server status to remove the failure condition.

1234

Error

Timemark reclamation vdev %1 timestamp %2 cancelled by snapshot operation or user.

TimeMark reclamation has been cancelled by the user or a snapshot operation (i.e snapshot expansion).

1235

Error

Timemark reclamation vdev %1 timestamp %2 failed due to storage problem or memory shortage. The disk virtual header and snapshot resource could not be updated for virtual disk %1. Meta transfer link %1.

Physical storage of the snapshot resource may have a failure or server memory is low. Physical storage of the disk virtual header or snapshot resource may have a failure. Meta transfer link is down due to incorrect network configuration or error condition in network connection.

1236

Error

Check storage to remove the failure condition. Data on related snapshots may be compromised. Check meta transfer link configuration and network connection.

1300

Critical

Code
1302

Type
Critical

Text
Global Safe Cache flushing enabled.

Probable Cause
Global Safe Cache flushing is enabled.

Suggested Action
Contact Tech Support.

CDP/NSS Error Codes


Code
3003

Type
Error

Text
Number of CCM connections has reached the maximum limit %1. CCM could not create a session with client %1.

Probable Cause
There are too many CCM Consoles open. There may be a network issue or not enough memory for CCM module to create a communication thread with the client. CCM module cannot get the list of SAN clients by executing internal CLI commands. CCM RPC service could not be created. in the CCM user or password string for connecting to the server is too long. The Event Log message file is missing.

Suggested Action
Close the CCM GUI on different machines. Check network communication and client access from the server; try to restart the ccm module on the server. This is very unlikely to happen; check the executable iscli is present in $ISHOME/bin. Try to restart the ccm module on the server. Enter a string within the limit.

3009

Error

3010

Error

List of the clients cannot be retrieved from the server. CCM service cannot be started. User name or password is longer than the maximum limit %1. The version information of the message file cannot be retrieved. CCM service cannot be created on the server as socket creation failed. CCM service cannot be created on the server as socket settings failed. CCM service cannot be created on the server as socket binding to port %1 failed.

3014 3016

Error Error

3017

Warning

This is very unlikely to happen; check the existence of $ISHOME/etc/msg/ english.msg. This is very unlikely to happen; try to restart the ccm module on the server. This is very unlikely to happen; try to restart the ccm module on the server. Identify the process using the ccm port and stop it.

3020

Error

A TCP socket could not be created or set for CCM service. A TCP socket option could not be set for CCM service. Another process may be using the same port number.

3021

Error

3022

Error

Code
3023

Type
Error

Text
CCM service cannot be created on the server as TCP service creation failed. CCM service cannot be created on the server as service registration failed. Patch %1 failed -environment profile is missing in /etc. Patch %1 failed -- it applies only to build %2.

Probable Cause
A TCP service could not be created for the CCM service.

Suggested Action
This is very unlikely to happen; try to restart the ccm module on the server. This is very unlikely to happen; try to the restart the ccm module on the server. Check server package installation. Get the patch, if any, for your build number or apply the patch on another server that has the expected build number. Run the patch with the root account. None.

3024

Error

Binding CCM service to RPC callback function failed when CCM module started. Unexpected loss of environment variables defined in /etc/.is.sh on the server. The server is running a different build than the one for which the patch is made.

7001

Error

7002

Error

7003

Error

Patch %1 failed -- you must be the root user to apply the patch. Patch %1 installation failed -- it has already been applied. Patch %1 installation failed -- prerequisite patch %2 has not been applied. Patch %1 installation failed -- cannot copy new binaries. Patch %1 rollback failed - there is no original file to restore. Patch %1 rollback failed - cannot copy back previous binaries. Patch %1 failed -- the file %2 has the patch level %3, higher than this patch. You must rollback first %4.

The user account running the patch is not the root user. You tried to apply the same patch again. A previous patch is required but has not been applied.

7004

Warning

7005

Error

Apply the required patch before applying this one.

7006

Error

Unexpected error on the binary file name or path in the patch. This patch has not been applied or has already been rolled back. Unexpected error on the binary file name or path in the patch. A patch with a higher level has already been applied that conflicts with this patch.

Contact Tech Support.

7008

Warning

None.

7009

Error

Contact Tech Support.

7010

Error

Roll back the higher-level patch, apply this patch, and then reapply the higher-level patch.

Code
7011

Type
Error

Text
Patch %1 failed -- it applies only to kernel %2. Patch %1 failed -- The available free space is %2 bytes; you need at least %3 bytes to apply the patch. Insufficient privilege (uid: [UID]). The server environment is corrupt.

Probable Cause
You tried to apply the patch to a server that is not running the expected OS kernel. Patch applied to a server running low on the disk used for server home directory.

Suggested Action
Apply the patch on a server that has the expected kernel. Add more storage.

7012

Error

10001

Error

Server modules are not running with root privilege. The configuration file in the / etc directory, which provides the server home directory and other environmental information, is either corrupted or deleted. During the initialization process, one or more critical processes experienced a problem. This is typically due to system drive failure, storage hardware failure, or system configuration corruption. An error occurred when accessing the SCSI devices during startup. Most likely due to storage connectivity failure or hardware failure.

Log in to the server with the root account before starting server modules. Determine the cause of such corruption and correct the situation. Perform regular backups of server configuration data so it can be restored. Check storage connectivity; check the system drive for errors via OS-provided utilities (i.e. fsck); check for a server environment variable file in /etc. Check the storage devices, e.g., power status; controller status, etc. Check the connectivity, e.g., cable connectors. With Fibre Channel switches, even if the connection status light indicates that the connection is good, it is still not a guarantee. Push the connector in to make sure. Check the specific storage device using OS-provided utilities such as hdparm. Get supported storage devices.

10002

Error

10003

Error

Failed to initialize configuration [File name].

10004

Error

Failed to get SCSI device information.

10005

Error

A physical device will not be available because we cannot create a Global Unique Identifier for it.

Physical SCSI device is not qualified because it does not support proper SCSI Inquiry pages.

Code
10006

Type
Error

Text
Failed to write configuration [File name].

Probable Cause
An error was encountered when writing the server configuration file to the system drive. This can only happen if the system drive runs out of space, is corrupted, or has a hardware failure Atomic merge is occurring on source. There is a conflict on ACSL.

Suggested Action
Check the system drive using OS-provided utilities. Free up space if necessary. Replace the drive if it is not reliable.

10054

Error

Server FSID update encountered an error. Server persistent binding update encountered an error. Failed to scan new SCSI devices.

No action required, retry will be performed after when the retry delay expires. Use a different ACSL for binding. See 10004 for information about checking storage devices. If system resources are low, run 'top' to check the process using the most memory. If physical memory is below the server recommendation, install more memory. If the OS is suspected to be in a bad state due to a hardware or software failure, restart the server machine. Check the system drive using OS-provided utilities.

10059

Error

10100

Error

An error occurred when adding newly discovered SCSI devices to the system. This is most likely due to unreliable storage connectivity, hardware failure, or system resources running low.

10101

Error

Failed to update configuration [File name].

An error was encountered when updating the server configuration file to the system drive. This can only happen if the system drive is corrupted or has a hardware failure.

Code
10102

Type
Error

Text
Failed to add new SCSI devices.

Probable Cause
An error occurred when adding newly discovered SCSI devices to the system. This is most likely due to unreliable storage connectivity, hardware failure, or system resources are running low.

Suggested Action
Check the storage devices and the connectivity status. If system resources are low, run 'top' to check the process that is using the most memory. If physical memory is below the server recommendation, install more memory on the system. If the OS is suspected to be in a bad state due to unexpected failure in either hardware or software components, restart the server machine. If there is reason to believe the existing configuration should not be used, e.g., the file is suspected to be corrupted, remove the $ISHOME directory before reinstallation. Check the physical connection of the storage, and the storage system. If problem persists, call tech support.

10200

Warning

Configuration [File name] exists.

A configuration file already exists when installing the software, possibly from a previous installation. The configuration file will be reused. A physical device has a different GUID written on the device header than the record in the configuration file. This is typically caused by old drives being imported without proper initialization. In rare cases, this is due to corruption of the configuration or the device header. The physical storage device is not the one registered previously.

10210

Error

Marked virtualized PDev [GUID] OFFLINE, guid does not match SCSI guid [GUID].

10211

Warning

Marked Physical Device [%1] OFFLINE because its wwid %2 does not match scsi wwid %3, [GUID: %4]. Marked PDev [GUID] OFFLINE because scsi status indicate OFFLINE.

Check if the storage device has been replaced. If not rescan devices.

10212

Error

The physical storage system response indicates the specific device is off-line. It may have been removed, turned off, or malfunctioning.

Check the storage system, and all the cabling. After the problem is corrected, rescan on the adapter where the drive is connected. Limit the scope of the scan to that SCSI address.

Code
10213

Type
Error

Text
Marked PDev [GUID] OFFLINE because it did not respond correctly to inquiry. Marked PDev [GUID] OFFLINE because its GUID is an invalid FSID.

Probable Cause
Physical device failure or unqualified device for SCSI commands. The GUID in the header of the drive does not match the unique ID, called the FSID, which is based on the external properties of the physical drive. It may be caused by drives changed while the server is down. The physical drive geometry, including the number of sectors, is different from the original record. One of the existing SCSI paths for the device is not accessible. This may be due to a disconnected storage cable, a re-zoned Fibre Channel switch, or storage port failure. The adapter driver could be unloaded.

Suggested Action
Check physical device.

10214

Error

Make sure drives are not changed without using the console to eliminate them from the virtual resource list first. Also never allow other applications to directly access physical drives without going through the server. Rescan the drive to establish its properties.

10215

Error

Marked PDev [GUID] OFFLINE because its storage capacity has changed. Missing SCSI Alias [A,C,S,L].

10240

Error

Check cabling and storage system. After situation is corrected, rescan the adapter connected to the drive, and limit the scope to that path. Check the loaded drivers.

10241

Error

Physical Adapter [Adapter number] could not be located in /proc/ scsi/. Duplicate Physical Adapter number [Adapter number] in /proc/scsi/.

10242

Critical

Some Linux kernel versions had a defect that could cause the same adapter number to be assigned to two different adapters. This is dangerous and may result in overwritten data. The FSID is generated with the LUN of the device. Once a device is used by the server, it is not allowed to have the LUN changed on the storage configuration. The physical storage device is not the one registered previously.

Do not repeatedly load and unload the Fibre Channel drivers and the server modules individually. That can confuse the system. Load and unload all the drivers together. Do not change the LUN of a virtualized drive. Revert back to the original LUN in the storage configuration.

10244

Error

Invalid FSID, device [Device ID] LUN in FSID [FSID] does not match actual LUN.

10245

Error

Invalid FSID, Generate FSID %1 does not match device acsl:%2 GUID %3.

Check if the storage device has been replaced. If not rescan devices.

Code
10246

Type
Error

Text
Failed to generate FSID for device acsl:[A C S L], can't validate FSID. Device (acsl:[A C S L]) GUID is blank, can't validate FSID. Remove scsi alias %1 from %2 because their categories are different. Remove scsi alias %1 from %2 because their GUIDs are different. Import logical resources failed. CDP Journal (GUID: %1) of virtual device %2 (ID %3) need repair. Console (%1): The number of configured device paths is %2 (%3 disks), which reaches/ exceeds the maximum number of supported device paths: %4. Loading snapshot resource for virtual device id %1 has failed. Virtual device is loaded without snapshot resource. Failed to create socket.

Probable Cause
The physical drive does not present valid data for unique ID generation, even the inquiry pages exist. Some process may have erased the disk header. This can be due to the accidental erase by fdisk or format. There might have been a hardware configuration change. There might have been a hardware configuration change. This might be caused by a disk IO failure. CDP Journal expansion failed for the virtual device. Repair is needed. Number of device paths reaches/exceeds the maximum of 4096 after rescan with discovering new devices.

Suggested Action
Only use this type of drive as a virtual drive, not as an SED. Never access the virtual drives by bypassing the server.

10247

Error

10250

Warning

Check if the device has changed or has failed. Check if the device has changed or has failed. Check storage devices. Call support to investigate and repair the CDP Journal in question. Review the physical device configuration and keep the number of device paths within the limit.

10251

Warning

10254 10257

Error Error

10258

Warning

10259

Error

Storage of the Snapshot resource may be offline / inaccessible. Virtual device is loaded without snapshot resource based on the policy. This kind of problem should rarely happen. If it does, it may indicate network configuration error, possibly due to system environment corruption. It is also possible that the network adapter failed or is not configured properly. It is also possible that the network adapter driver has problem.

Check the storage system and all cabling to see if repair or replacement is needed. Rescan after correction is made. Restart the network. If problem persists, restart the OS, or restart the machine (turn off then turn on the machine). If problem still persists, you may need to reinstall the OS. If that is the case may sure you properly save all the server configuration information before proceeding.

11000

Error


Code
11001

Type
Error

Text
Failed to set socket to reuse address. Failed to bind socket to port [Port number]. Failed to create TCP service. Failed to register TCP service (program: [Program name], version: [Version number]). The server communication module failed to start.

Probable Cause
System network configuration error, possibly due to system environment corruption. System network configuration error, possibly due to system environment corruption. System network configuration error, possibly due to system environment corruption. System network configuration error, possibly due to system environment corruption. Most likely the server port is occupied, either by a previous unexpected failure of the communication module or by another application using the TCP port. The available space on the disk holding the configuration file is not enough.

Suggested Action
See 11000.

11002

Error

See 11000.

11003

Error

See 11000.

11004

Error

See 11000.

11006

Error

Restart the OS and try again. If the problem persists, use OS-provided utilities, such as netstat, to check which process is using the port. Increase disk space.
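
For example, on a typical Linux server you can identify the process holding a TCP port with standard OS utilities. This is a minimal sketch; the port number 11576 below is only a placeholder, so substitute the port reported for your configuration:

  netstat -tlnp | grep 11576      # list the listening socket and owning process/PID
  lsof -iTCP:11576 -sTCP:LISTEN   # alternative view, if lsof is installed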

11007

Warning

There is not enough disk space available to successfully complete this operation and maintain the integrity of the configuration file. There is currently %1 MB of disk space available. The server requires %2 MB of disk space to continue. SAN Client ([host name]): Failed to add SAN Client. SAN Client (%1): Authentication failed. There are too many SAN Client connections.

11101

Error

This error is most likely due to a system configuration error or system resources running low. The user account used to connect to the server is not valid. The number of simultaneous connections exceeded the supported limit that the current system resources can handle. The access account might be invalid.

Check OS resources using provided utilities such as top. Check user account and password. Stop some client connections.

11103 11104

Error Error

11106

Error

SAN Client ([host name]): Failed to log in.

Check user name and password.


Code
11107

Type
Error

Text
SAN Client ([host name]): Illegal access.

Probable Cause
The client host attempted to perform an operation beyond its granted privileges.

Suggested Action
This is very rare. Record the message and monitor the system. If this happens repeatedly, the cause should be investigated to prevent security breaches. If there is a valid configuration file saved, restore it to the system. Make sure to use reliable storage devices for critical system information. Use top to locate the process using the most memory. If physical memory is below the server recommendation, install more memory to the system. Obtain additional license keycodes.

11112

Error

SAN Client ([host name]): Failed to parse configuration file [File name].

The configuration file is not readable by the server.

11114

Error

SAN Client ([host name]): Failed to allocate memory.

System resources are running low. This may be due to too little memory installed in the system or some runaway process that is consuming too much memory. The number of clients attached to the server exceeded the licensed number allowed.

11115

Warning

SAN Client ([host name]): License conflict -Number of CPU's approved: [Number of CPU], number of CPU's used: [Number of CPU]. Console ([host name]): Failed to remove SAN Client (%2) from virtual device %3.

11222

Error

Failed to unassign a virtual device from the client possibly due to a configuration update failure.

Check system disk status and system status. If the configuration repository has been configured, check the configuration repository status. None.

11201

Error

There are too many Console connections.

Too many GUI consoles are connected to the particular server. This is a highly unlikely condition. The console host attempted to perform an operation beyond its granted privileges. An error occurred when adding newly discovered SCSI devices to the system. This is typically due to unreliable storage connectivity, hardware failure, or system resources running low.

11202

Error

Console ([host name]): Illegal access.

See 11107.

11203

Error

Console ([host name]): SCSI device re-scanning has failed.

See 10100.


Code
11204

Type
Error

Text
Console ([host name]): SCSI device checking has failed.

Probable Cause
An error occurred when accessing the SCSI devices when the console requests the server to check the known storage devices. Most likely due to a storage connectivity failure or hardware failure.

Suggested Action
Check the storage devices, e.g., power status, controller status, etc. Check the connectivity, e.g., cable connectors. For Fibre Channel switches, even if the connection status light indicates a good connection, it is still not a guarantee. Push the connector in to make sure. Check the specific storage device using OS-provided utilities, such as hdparm. See 10006.
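
As an illustration only (the device name below is a placeholder; substitute the disk reported by the server, and note that hdparm results depend on the drive type):

  hdparm -I /dev/sdb   # query the drive identification data to confirm the device responds
  hdparm -t /dev/sdb   # run a simple read throughput test on the device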

11211

Error

Console ([host name]): Failed to save file [file name].

An error was encountered when writing the server configuration file to the system drive. This can only happen if the system drive ran out of space, is corrupted, or has a hardware failure. Failed to create an index file for the event log retrieval, most likely due to insufficient system disk space. The server is low on memory resources for normal operation. Failed to create a virtual drive due to either a system configuration error, storage hardware failure, or system resource access failure. When a virtual drive is deleted, all associated resources must also be handled, including replica resources. If the replica server is not reachable at the moment, the removal will not be successful.

11212

Error

Console ([host name]): Failed to create index file [file name] for Event Log. Console ([host name]): Out of system resources. Failed to fork process. Console ([host name]): Failed to add virtual device [Device number].

Free up disk space or add additional disk space to the system drive. See 11114.
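
A quick way to check free space on the system drive (a sketch assuming the default installation path referenced elsewhere in this table):

  df -h /                                        # free space on the root/system drive
  du -sh /usr/local/ipstor/* | sort -h | tail    # largest directories under the server installation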

11216

Error

11219

Error

Check system resources, such as memory, system disk space, and storage device connectivity (i.e., cable connections). Check the log for the specific reason for the failure. If the replica is not reachable, the condition must be corrected before trying again.

11220

Error

Console ([host name]): Failed to remove virtual device [Device number].


Code
11221

Type
Error

Text
Console ([Host name]): Failed to add SAN Client ([Client name]) to virtual device [Device ID].

Probable Cause
Failure to create a SAN Client entity is typically due to a system configuration error, storage hardware failure, or system resource access failure. This should be rare. The mapping of the SCSI address, namely the adapter, channel, SCSI ID, and LUN (ACSL), is no longer valid. This is due to a sudden failure, improper removal, or change of storage devices in the server. Failed to perform the device throughput test for the given device. This can be due to the OS being in a bad state such that the program cannot be run, or the storage device failed. This message can display when the server console tries to query the server status (such as replication status). The RPC server retrieves this information from the /usr/local/ipstor/etc/<host>/ipstor.dat.cache file. It will fail if the file is in use by other server processes. A retry usually opens it successfully. When any server process cannot be started, it is typically due to insufficient system resources, an invalid state left by a server process that was not stopped properly, or an unexpected OS process failure that left the system in a bad state. This should be rare. If this occurs frequently, there may be external factors contributing to the behavior that must be investigated and removed before running the server.

Suggested Action
Check system resources, such as memory and system disk space. Check the syslog for the specific reason for the failure. See 11204. Check and restore the physical configuration to the proper state if it was changed improperly.

11233

Error

Console ([host name]): Failed to map the SCSI device name for [A C S L].

11234

Error

Console ([host name]): Failed to execute "hdparm" for [Device number].

Run the hdparm program from the server console directly. Check storage devices as described in 11204. The Console automatically retries the query 3 seconds later until it succeeds. The retry will stop when the Console is closed.

11237

Error

Console ([user name]): Failed to get file /usr/local/ipstor/etc/[host name]/ipstor.dat.cache

11240

Error

Console ([host name]): Failed to start the server module.

If system resources are low, use top to check which process is using the most memory. If physical memory is below the server recommendation, install more memory in the system. If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components, restart the server machine to make sure the OS is in a healthy state before trying again.
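
For example, using standard Linux utilities (output formats vary by distribution):

  free -m                          # physical memory and swap usage, in MB
  ps aux --sort=-%mem | head -10   # processes ranked by resident memory usage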


Code
11242

Type
Error

Text
Console ([host name]): Failed to stop the server module.

Probable Cause
When any server process cannot be stopped, it is most likely due to insufficient system resources, an invalid state left by a server process that may not have been stopped properly, or an unexpected OS process failure that left the system in a bad state. This should happen very rarely. If this occurs frequently, there may be external factors that contribute to the behavior that must be investigated and removed before running the server. Failed to retrieve the list of server administrators / users / iSCSI users, possibly due to the system being busy or a file open error. The server administrator or user ID, or password, is not valid.

Suggested Action
See 11240.

11244

Error

Console ([host name]): Failed to access the server administrator list.

Check event log for actual cause.

11245

Error

Console ([host name]): Failed to add user %2.

Check system setting for user and password policy to see if the user ID and password conform to the policy. Check if user exists; look at log message for possible cause, and try again. Check to see if other administrators already deleted the user. Check to see if other administrators already deleted the user. Check device status and system status. See 11101.

11247

Error

Console ([host name]): Failed to delete user %2. Console ([host name]): Failed to reset password for user %2. Console ([host name]): Failed to update password for user %2. Console ([host name]): Failed to modify virtual device %2. Console ([host name]): Failed to add SAN Client ([Host name]). Console ([host name]): Failed to delete SAN Client (%2).

User ID is not valid.

11249

Error

Password is not valid.

11251

Error

Password is not valid.

11253

Error

Failed to expand virtual device possibly due to a device error or the system being busy. See 11101.

11257

Error

11259

Error

Specified client could not be deleted possibly due to configuration update failure.

Check system disk and Configuration Repository if configured.


Code
11261

Type
Error

Text
Console ([Host name]): Failed to get SAN Client connection status for virtual device [Device ID].

Probable Cause
Failed to inquire about the SAN Client connection status due to either a system configuration error, storage hardware failure, or system resource access failure. This should rarely happen. See 11112.

Suggested Action
Check system resources, such as memory and system disk space. Check the syslog for the specific reason for the failure.
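
For example (log file locations vary by distribution; /var/log/messages is typical on many Linux systems, while others use /var/log/syslog):

  tail -n 100 /var/log/messages                            # recent system log entries
  grep -iE 'ipstor|scsi' /var/log/messages | tail -n 50    # filter for server and SCSI related messages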

11262

Error

Console ([host name]): Failed to parse configuration file [File name]. Console ([host name]): Failed to restore configuration file [File name]. Console ([host name]): Failed to erase partition of virtual device [Device number]. Console ([host name]): Failed to update meta information of virtual device %2. Console ([host name]): Failed to add mirror for virtual device [Device number]. Console ([host name]): Failed to remove mirror for virtual device %2. Console ([host name]): Failed to stop mirroring for virtual device %2. Console ([host name]): Failed to start mirror synchronization for virtual device %2. Console ([host name]): Failed to swap mirror for virtual device [Device number].

See 11112.

11263

Error

See 10006.

See 10006.

11266

Error

Storage hardware failure.

See 10004.

11268

Error

This may be due to a disk being offline or a disk error.

Check disk status.

11270

Error

This is typically due to a storage device hardware error.

See 10004.

11272

Error

This may be due to a mirror disk error or the system may be busy. This may be due to the system being busy. This may be due to a mirror disk error or the system being busy. This is most likely due to storage device hardware error.

Check disk status and try again. Retry later.

11274

Error

11276

Error

Check disk status and try again.

11278

Error

See 10004.


Code
11280

Type
Error

Text
Console ([host name]): Failed to create shared secret for server %2.

Probable Cause
Secure communication channel information for a failover setup, a replication setup, or a Near-line mirroring setup could not be created. Storage hardware failure.

Suggested Action
Check if specified IP address can be reached from the failover secondary server, replication primary server, or Near-line server. See 10004.

11282

Error

Console ([host name]): Failed to change device category for physical device [Device number] to [Device number]. Console ([host name]): Failed to execute failover command (%2). Console ([host name]): Failed to set failover mode ([Mode]). Console ([host name]): Failed to restart the server module. Console ([host name]): Failed to update meta information of physical device [Device number]. Console ([host name]): Failed to swap IP address from [IP address] to [IP address]. Console ([host name]): Failed to get host name. Console ([host name]): Invalid configuration format.

11285

Error

Failed to execute the command to start failover or stop failover. The system resources are low, or the OS is in an unstable state, possibly due to a previous unexpected error condition. Failed to restart the server modules for failover setup or NAS operations. Storage hardware failure.

Check system log message for actual cause. See 11240.

11287

Error

11289

Error

Check system log messages for possible cause. See 10004.

11291

Error

11293

Error

11294 11295

Error Error

See 11000. The configuration file is not readable by the server.

See 11000. If there is a valid configuration file saved, restore it to the system. Make sure to use reliable storage devices for the critical system information. Check the accuracy of the hostname entered for replication target server and the network configuration between the replication primary and target server.

11296

Error

Console ([host name]): Failed to resolve host name -- %2.

Host name could not be mapped to IP address on replication primary server during replication setup.


Code
11299

Type
Critical

Text
Failed to save the server configuration to the configuration repository. Check the storage connectivity and, if necessary, reconfigure the configuration repository. Invalid user name ([User name]) used by client at IP address [IP address].

Probable Cause
Configuration file on the configuration repository could not be updated, possibly due to an offline device or a disk failure.

Suggested Action
Check system log messages for possible cause.

11300

Error

An invalid user name is used to log in to the server, either from the client host or the IPStor console.

Make sure the correct user name is used. The correct user names are root or the admin users created using the "Administrator" option. If there are many unexplained occurrences of this message in the log, someone may have been deliberately trying to gain unauthorized access by guessing the user credentials. In that case, investigate, starting with the source IP address.

11301

Error

Invalid password for user ([User name]) used by client at IP address [IP address].

The incorrect password was used during authentication from the IPStor Console, or from the client host when adding the server.

11302

Error

Invalid passcode for machine ([Host name]) used by client at IP address [IP address].

An incorrect shared secret was used by the client host to connect to the server. This may be because the server was reinstalled and the credential file was changed. In rare cases, this may occur if someone is trying to gain data access by guessing the shared secret. An incorrect login was used by the client host to connect to the server.

11303

Error

Authentication failed in stage [%1] for client at IP address [IP address].

From the client host, delete the server, add it back again, and use the correct login.


Code
11306

Type
Error

Text
The server Administrator group does not exist.

Probable Cause
Server Administrator Group does not exist in the system, possibly due to improper installation or upgrade. It might be a typo when the user typed in the ID and password to log in.

Suggested Action
Contact Tech Support for possible cause and fixes.

11307

Error

User %1 at IP address %2 is not a member of the server Administrator's group. Obsolete - The client group does not exist.

Check the user ID and password to make sure there is no possibility for unauthorized login from that IP address. Contact Tech Support for possible cause and fixes.

11308

Error

IPStor Client group does not exist in the system, possibly due to improper installation or upgrade. (OBSOLETE since IPStor 5.1) It might be a typo when the user typed in the ID and password to log in. The user name does not match the original user when resetting the credential for the client. An incorrect login was used during authentication from the Console, or from the client host when adding the server.

11309

Error

User ID %1 at IP address %2 is invalid. The Client User name %1 does not match with the client name %2. Authentication failed for user (%1) at IP address %2 -- %3.

Check the user account and retry. Use the original user name or ask the IPStor Administrator to reset the credential from the client. Make sure the correct user name and password pair is used. If you see many unexplained occurrences of this message in the log, someone may be attempting to gain access by guessing the password. In this case, investigate starting with the source IP address. Set the correct time for both machines in the failover pair.

11310

Error

11315

Error

11408

Error

Synchronizing the system time with [host name]. A system reboot is recommended. Enclosure Management: %1 Enclosure Management: %1

The failover pair has detected a substantial time difference. It is recommended to keep the failover pair synchronized to avoid potential problems. Physical enclosure might have some failures. Physical enclosure has some failures.

11410 11411

Warning Error

Check enclosure configuration. Check enclosure configuration.


Code
11505

Type
Error

Text
Console (%1): Failed to process the rollback TimeMark because the Meta Data Resource %2 is mounted. Console ([host name]): Failed to start replica scanning for virtual device %2. Console ([host name]): Failed to set the properties for the server. Console ([host name]): Failed to save report -%2. Console ([host name]): Failed to get the information for the NIC.

Probable Cause
The Meta Data Resource is mounted.

Suggested Action

11506

Error

This may be due to a connection error or the system being busy.

Check connectivity between replication primary server and target server. Check to see if system is busy with pending operations. Check system disk and system status.

11508

Error

Failed to update configuration file for the new server properties possibly due to disk error or the system being busy. A report file could not be saved possibly due to not enough space or error on disk. Network interface information could not be retrieved possibly due to configuration error or low system resources. Failed to configure replication on the primary server possibly due to the system being busy.

11510

Error

Check system disk status, available space, and system status. Check if network configuration is configured properly. Also check if system memory is running low for allocation. Check system log messages for actual cause.

11511

Error

11512

Error

Console ([host name]): Failed to add a replica for device %2 to the server %3 (watermark: %4 MB, time: %5, interval: %6, watermark retry: %7, suspended: %8). Console ([host name]): Failed to remove the replica for device %2 from the server %3 (watermark: %4 MB, time: %5, interval: %6, watermark retry: %7, suspended: %8). Console ([host name]): Failed to create the replica device [Device number].

11514

Error

Failed to remove replication configuration on the primary server when deleting replication setup possibly due to the system being busy.

Check system log messages for actual cause.

11516

Error

Failed to create the replica for the source virtual device. This is most likely due to a problem on the remote server.

Check the hardware and software condition in the remote replica server to make sure it is running properly before trying again.


Code
11518

Type
Error

Text
Console ([host name]): Failed to start replication for virtual device %2.

Probable Cause
The remote server is not reachable, or is in a bad state.

Suggested Action
Check the hardware and software condition on the remote replica server to make sure it is running properly before trying again. Check the system log messages for the actual cause and retry. Check the system log messages for the actual cause and retry. If the system resources are low, run the 'top' command to check which process is using the most memory. If the physical memory is below the server recommendation, install more memory on the system. If you suspect the OS to be in a bad state due to an unexpected failure in either hardware or software components, restart the server machine.

11520

Error

Console ([host name]): Failed to stop replication for virtual device %2. Console ([host name]): Failed to promote replica device %2 to a virtual device. Console ([host name]): Failed to run the server X-Ray.

It is possibly because the system was busy. It is possibly because the system was busy.

11522

Error

11524

Error

If a server process cannot be started, it is typically due to one of the following reasons: insufficient system resources; the server process was not stopped properly and was left in an invalid state; or an unexpected OS process failure left the system in a bad state. This is a very rare occurrence. If this behavior is frequent, look for external factors affecting the server. A storage hardware failure has occurred.

11534

Error

Console ([host name]): Failed to reset the umap for virtual device %2.

Check the storage devices (e.g., power status, controller status). Check connectivity (e.g., cable connectors). With FC switches, even if the connection status light indicates the connection is good, it is not a guarantee. Press the connector in to verify. Check the specific storage device using an OS-provided utility, such as 'hdparm'.


Code
11535

Type
Error

Text
Console ([host name]): Failed to update the replication parameters for virtual device %2 to The server %3 (watermark: %4 MB, time: %5, interval: %6, watermark retry: %7, suspended: %8). Console ([host name]): Failed to claim physical device %2. Console ([host name]): Failed to import physical device %2.

Probable Cause
Failure to update replication properties, possibly because the system was busy.

Suggested Action
Wait until the system is not as busy and retry.

11537

Error

This server version limits the storage capacity. A storage hardware failure has occurred.

Check the license agreement and keycodes. Check the storage devices (e.g., power status, controller status). Check connectivity (e.g., cable connectors). With FC switches, even if the connection status light indicates the connection is good, it is not a guarantee. Press the connector in to verify. Check the specific storage device using an OS-provided utility, such as 'hdparm'. Check the system disk status and available space.

11539

Error

11541

Error

Console ([host name]): Failed to save event message (ID: %2).

Failed to create an Event message from the console or CLI for replication, snapshot expansion, etc. possibly because the system disk does not have enough space. Failed to delete replica disk possibly due to the system being busy. Failed to expand replica disk possibly due to the system being busy. Failure to mark replication during synchronization may be due to a connectivity issue between the primary and target server, or because the system is busy.

11542

Error

Console ([host name]): Failed to remove replica device %2. Console ([host name]): Failed to modify replica device %2. Console ([host name]): Failed to mark the replication for virtual device %2.

Check system status and retry. Check system status and retry. Check connectivity and system status. Try again.

11544

Error

11546

Error


Code
11548

Type
Error

Text
Console ([host name]): Failed to determine if data was written to virtual device %2. Console ([host name]): Failed to get login user list. Console ([host name]): Failed to set failover option <selfCheckInterval: %d sec>. Console ([host name]): Failed to start snap copy from virtual device [Device number] to virtual device [Device number]. Console ([host name]): Failed to get licenses. Console ([host name]): Failed to add license %2. Console ([host name]): Failed to remove license %2. Console ([host name]): Failed to check licenses - option mask %2. Console ([host name]): Failed to clean up failover server directory %2. Console ([host name]): Failed to set (%2) I/O Core for failover -- Failed to create failover configuration.

Probable Cause
Failed to check if the virtual device has been updated, possibly due to a device error or the system being busy. The list of users could not be retrieved from the system. Failed to set failover options on the primary server, possibly because the failover module stopped or due to a disk error. This may happen if another process is performing I/O with the snapshot requirements, such as a backup operation. It is also possibly due to a storage hardware failure. License keycode information could not be retrieved. The license is not valid. The license is not valid.

Suggested Action
Check device status and system status.

11553

Error

Check system status.

11554

Error

Check failover module status. Check system disk status.

11556

Error

Check to see if another process is using the snapshot. See 10004 if storage failure is suspected.

11560 11561 11563

Error Error Error

Check system disk and system status. Check license keycode validity. Check license keycode validity. Check license keycode validity. Check system disk and system status.

11565

Error

The license is not valid.

11567

Error

This may be due to a disk error or the system being busy when the failover setup was to be removed. Failed to notify IOCore of failover setup or removal possibly due to the system being busy.

11568

Error

Reconfigure failover if this happens during failover setup.


Code
11569

Type
Error

Text
Console ([host name]): Failed to set [Device number] to Fibre Channel mode [Mode].

Probable Cause
This is possibly because the Fibre Channel driver is not properly loaded, or the wrong version of the driver is loaded. IPStor FC target mode requires the FalconStor version of the driver to be used. The driver name should be qla2x00fs.o. Failed to assign the virtual device to the Fibre Channel target. All intermediary configuration changes were rolled back and the configuration remained unchanged. Failed to assign the virtual device to the Fibre Channel target. However, the configuration was partially updated.

Suggested Action
Use lsmod to check the qla2x00fs driver to make sure it is loaded. If it is, check to make sure it is the correct revision. The correct revision should be located in the ipstor/lib directory.
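
For example (a sketch; the module and path names follow the ones mentioned in this entry, and modinfo only reports the version if the module file is in the module search path):

  lsmod | grep qla                       # confirm that the qla2x00fs module is loaded
  modinfo qla2x00fs | grep -i version    # report the loaded module's version for comparison with the ipstor/lib copy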

11571

Error

Console ([host name]): Failed to assign Fibre Channel device %2 to %3; rolled back changes

Check LUN conflict, disk status, system status, Fibre Channel Target module status.

11572

Error

Console ([host name]): Failed to assign Fibre Channel device %2 to %3; could not roll back changes.

Check LUN conflict, disk status, system status, Fibre Channel Target module status. You may need to restart the Fibre Channel Target Module to resolve the configuration conflict. Check Fibre Channel Target module status.

11574

Error

Console ([host name]): Failed to unassign Fibre Channel device %2 from %3 and returned %4; rolled back changes.

Failed to unassign the virtual device from the Fibre Channel target. All intermediary configuration changes were rolled back and the configuration remained unchanged. Failed to unassign the virtual device from the Fibre Channel target. However, the configuration is partially updated. This may be due to a problem with the Fibre Channel target module.

11575

Error

Console ([host name]): Failed to unassign Fibre Channel device %2 from %3 (not rolled back) and returned %4; could not roll back changes. Console ([host name]): Failed to get Fibre Channel target information. Console ([host name]): Failed to get Fibre Channel initiator information.

Check Fibre Channel Target module status. You may need to restart the Fibre Channel Target Module to resolve the configuration conflict. Check Fibre Channel Target module status.

11577

Error

11578

Error

This could be because the Fibre Channel driver is not properly loaded, or the wrong version of the driver is loaded.

Run 'lsmod' to check that the qla driver is loaded and it is the correct revision located in $ISHOME/lib/modules/<kernel>/scsi.


Code
11581

Type
Error

Text
Console ([host name]): Failed to set NAS option %2.

Probable Cause
Failed to start the NAS processes. This is typically due to insufficient system resources, an invalid state left by server processes that may not have been stopped properly, or an unexpected OS process failure that left the system in a bad state. This should happen very rarely. If frequent occurrences are encountered, there are probably external factors contributing to the behavior that should be investigated and removed before running the server. This could be because the Fibre Channel driver is not properly loaded, or the wrong version of the driver is loaded. The Fibre Channel option could not be enabled or disabled. Failure to convert a virtual device (a promoted replica) back to a replica. There is no more storage left for automatic expansion of the snapshot resource, which just reached the threshold usage.

Suggested Action
If system resources are low, run the 'top' command to determine which process is using the most memory. If the physical memory is below the server recommendation, install more memory on the system. If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components, restart the server machine.

11583

Error

Console ([host name]): Failed to update Fibre Channel client (%2) WWPNs. Console ([host name]): Failed to set Fibre Channel option %2. Console ([host name]): Failed to demote virtual device %2 to a replica. Out of disk space to expand snapshot storage for virtual device [Device ID].

Run the 'lsmod' command to make sure the correct qla driver is loaded and located in $ISHOME/lib/modules/<kernel>/scsi. Check system status.

11585

Error

11587

Error

Check if the virtual device is online or if the system is busy. Add additional storage. Physical storage must be prepared for a virtual drive before it is qualified to be allocated for snapshot resources. Do not expand drives in too-small increments. Consolidate the segments by mirroring or creating a snapshot copy to another virtual drive with fewer segments before expanding again. Check system status.

11590

Error

11591

Error

Failed to expand snapshot storage for virtual device [Device ID]: maximum segment exceeded (error code [Return code]).

The virtual drive has an upper limit on the number of physical segments. The drive has been expanded so many times that it exceeded the limit.

11594

Error

Console ([host name]): Failed to set CallHome option %2.

The Email Alert option could not be enabled or disabled.


Code
11598

Type
Error

Text
Out of disk space to expand CDP journal storage for %1. Failed to expand CDP journal storage for %1: maximum segment exceeded (error code %2). Failed to create character device to map TimeMark %1 for virtual device %2. Console ([host name]): Failed to proceed copy/ rollback TimeMark operation with client attached on the target virtual device [Device ID]. [Task name] Failed to create TimeMark for virtual device [Device ID] while the last creation/ client notification is in progress.

Probable Cause
The CDP Journal could not be expanded due to insufficient disk space. The CDP Journal resource could not be expanded due to the maximum supported segments. Failed to map a raw device interface for virtual device to perform backup, snapshot copy, or TimeMark copy. The device to be rolled back is still assigned to client hosts. CDP/NSS requires that the device be guaranteed to have no I/O during roll back. Therefore the device cannot be assigned to any hosts. The last snapshot operation, including the notification process, is still in progress. This may be caused by to short an interval between snapshots, or the snapshot notification is held up due to network or client applications. The TimeView resource could not be created for the virtual device possibly due to a device error. The TimeMark option could not be enabled for the virtual device, possibly due to a device error. The TimeMark option for virtual device could not be disabled possibly due to the system being busy. TimeMark is already selected for another operation.

Suggested Action
Add more storage.

11599

Error

Currently up to 64 segments are supported; to prevent this from happening, create a bigger CDP journal to avoid frequent expansions. Check virtual device and snapshot resource status.

11605

Error

11608

Error

Unassign the virtual device before rolling back.

11609

Error

Adjust the frequency of TimeMark snapshots. Determine the actual time it takes for snapshot notification to complete, which is application and data activity dependent. Check to see if the virtual device and snapshot resource are online. Check to see if the virtual device and snapshot resource are online. Retry later.

11610

Error

Console ([host name]): Failed to create TimeView for virtual device %2 TimeMark %3. Console ([host name]): Failed to enable TimeMark for device %2. Console ([host name]): Failed to disable TimeMark for device %2. Failed to select TimeMark %1 for virtual device %2: TimeMark %3 has already been selected.

11613

Error

11615

Error

11618

Error

Wait for the completion for the other operation.


Code
11619

Type
Error

Text
Failed to select TimeMark %1 character device for virtual device %2. Failed to create TimeMark %1 for virtual device %2. Failed to delete TimeMark %1 for virtual device %2.

Probable Cause
The TimeMark could not be selected for raw device backup, possibly due to the system being busy. The TimeMark for this virtual device could not be created, possibly due to a device error or the system being busy. The specified TimeMark for the virtual device could not be removed, possibly due to a device error, a pending operation, or the system being busy.

Suggested Action
Check system status and retry later.

11621

Error

Check device status and system status.

11623

Error

Check device status and system status. Retry later.

11625

Error

Failed to copy TimeMark %1 of virtual device %2 as virtual device %3.

Check device status and system status. Retry later.

11627

Error

Failed to roll back to TimeMark timestamp %1 for virtual device %2. Failed to expand snapshot storage for virtual device %1 (error code %2). Console ([host name]): Failed to set failover option on secondary server <heartbeatInterval : %2 sec, autoRecoveryInterv al: %3 sec>. Console ([host name]): Failed to set failover option on secondary server <heartbeatInterval : %2 sec, autoRecoveryInterv al: disabled>.

Check device status and system status. Retry later.

11631

Error

Check log messages for possible cause.

11632

Error

Check failover module status. Check system disk status.

11633

Error

Failed to update failover options without auto-recovery mode on secondary server.

Check failover module status. Check system disk status.


Code
11637

Type
Error

Text
Failed to expand CDP journal storage for %1 (error code %2). Failed to expand CDP journal storage for %1. The virtual device is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB, which exceeds the limit. The virtual device %1 is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB. Only %5 MB will be added to the CDP Journal. Failed to expand snapshot resource for virtual device %1. The virtual device is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB, which exceeds the limit. The virtual device %1 is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB. Only %5 MB will be added to the snapshot resource. Failed to create a temporary TimeView from TimeMark %1 to copy TimeView data for virtual device %2.

Probable Cause
The CDP Journal could not be expanded possibly because there is not enough space or there is an error on the disk. The CDP Journal resource could not be expanded due to the storage quota limit being exceeded.

Suggested Action
Check disk status, available space and system status.

11638

Error

Increase storage quota for the specified user.

11639

Error

The CDP Journal resource was expanded with a smaller increment size than usual due to user quota limit.

Increase storage quota for the specified user.

11640

Error

The snapshot resource was not expanded due to quota limit exceeded.

Increase storage quota for the specified user.

11641

Error

The snapshot resource was expanded with a smaller increment size than usual due to user quota limit.

Increase storage quota for the specified user.

11642

Error

TimeMark might not be available to create the temporary TimeView or raw device creation failed.

If TimeMark is still available, try TimeMark copy again.


Code
11643

Type
Error

Text
[Task %1] Failed to create TimeMark for virtual device %2 while notification to client %3 for other resource is in still progress. Take TimeView [TimeView name] id [Device ID] offline because the source TimeMark has been deleted. Console ([host name]): Failed to create TimeView: virtual device [Device ID] already have a TimeView. Failed to convert inquiry string on SCSI device %1 Bad capacity size for SCSI device %1 Discarded scsi device %1, unsupported Cabinet ID Discarded scsi device %1, missing "%2" vendor in inquiry string. SCSI device %1 storage settings are not optimal. Check the storage settings. Discarded scsi device %1, exceeded maximum supported LSI LUN %2. Failed to allocate a %1 MB DiskSafe mirror disk in storage pool %2. There is only %3 MB free space left in storage pool.

Probable Cause
Snapshot notification to the same client for other virtual devices is still pending.

Suggested Action
Retry later.

11644

Error

The TimeMark snapshot which the TimeView is based on has been deleted. The TimeView image therefore is set to OFFLINE because it is no longer accessible. For each TimeMark snapshot, only one TimeView interface can be created.

Remove the TimeView from the resource.

11645

Error

None.

11649 11655 11656

Error Error Warning

The inquiry string contains invalid information. Failed to get capacity information from the device. The Cabinet ID of the device is not supported. The disk is not from one of the supported vendors. Storage settings are not optimal.

Check the device configuration. Check the storage. Check storage device definition. Check storage device definition. Check the storage.

11657

Warning

11658

Warning

11659

Warning

Number of LSI device LUNs exceeds the maximum supported value. The DiskSafe mirror disk could not be created due to insufficient storage space.

Check storage configuration.

11660

Error

Add more storage.


Code
11661

Type
Error

Text
Failed to expand the DiskSafe mirror disk by %1 MB for user %2. The total size allocated for this user would be %3 MB and this exceeds the user's quota of %4 MB. Failed to create a %1 MB DiskSafe snapshot resource. There is not any storage pool with enough free space. Console ([host name]): Failed to enable backup for virtual device %2.

Probable Cause
The DiskSafe mirror disk could not be expanded due to user quota.

Suggested Action
Increase storage quota for the specified user.

11662

Error

The Snapshot resource could not be created for DiskSafe mirror disk due to insufficient storage in storage pool assigned for DiskSafe. The backup option could not be enabled for the virtual device possibly due to a device error or the maximum number of virtual devices that can be enabled for backup has reached. The backup option could not be disabled for the virtual device possibly due to a device error. The raw device backup session for the virtual device could not be stopped possibly due to the system being busy. The virtual device could not be added to the group possibly because a snapshot operation was in progress.

Add more storage to the DiskSafe storage pool.

11665

Error

Check that the maximum limit of 256 virtual devices that can be enabled for backup has not been reached. Also, check the disk status and system status. Check disk status and system status.

11667

Error

Console ([host name]): Failed to disable backup for virtual device %2. Console ([host name]): Failed to stop backup sessions for virtual device %2. Console ([host name]): Virtual device %2 cannot join snapshot group %3 group id %4.

11668

Error

Check system status.

11672

Error

Check if a snapshot operation is pending for the virtual device or group. Check disk status and system status. Check if a snapshot operation is pending for the virtual device or group. Check disk status and system status. Try offline re-size operation.

11673

Error

Console ([host name]): Virtual device %2 cannot leave snapshot group %3 group id %4.

Virtual device could not be removed from the group possibly because a snapshot operation was in progress.

11676

Error

Console ([host name]): Failed to resize NAS file system on virtual device %2.

The NAS file system could not be resized automatically after expansion using system commands.


Code
11681

Type
Error

Text
Console ([host name]): Failed to resume Cache Resource %2 (ID: %3). Console ([host name]): Failed to suspend cache Resource %2 (ID: %3). Console ([host name]): Failed to reset cache on target device %2 (ID: %3) for %4 copy. Console ([host name]): Failed to add %2 Resource %3 (ID: %4). Console ([host name]): Failed to delete %2 Resource %3 (ID: %4). Console ([host name]): Failed to resume HotZone resource %2 (ID: %3). Console ([host name]): Failed to suspend HotZone resource %2 (ID: %3). Console ([host name]): Failed to update policy for HotZone resource %2 (ID: %3). Console ([host name]): Failed to get HotZone statistic information. Console ([host name]): Failed to get HotZone status. CDP/Safecache marker in device(%1) is full, fail new marker(%2) request for vdev(%3).

Probable Cause
SafeCache usage for the virtual device could not be resumed possibly due to a device error. SafeCache usage for the virtual device could not be suspended possibly due to a device error. SafeCache could not be reset for the snapshot copy target resource possibly due to the system being busy. The specified resource could not be created possibly due to a device error. The specified resource could not be created possibly due to the system being busy. HotZone usage could not be resumed possibly due to disk error. HotZone usage could not be suspended possibly due to disk error. The HotZone policy could not be updated possibly due to disk error. HotZone statistics information could not be retrieved from log file. The HotZone status could not be retrieved possibly due to disk error. CDP/Safecache's marker is full.

Suggested Action
Check disk status and system status.

11683

Error

Check disk status and system status.

11684

Error

Check if system is busy and retry later.

11686

Error

Check disk status and system status. Check if system is busy.

11688

Error

11690

Error

Check system disk status and system status.

11692

Error

Check system disk status and system status.

11694

Error

Check system disk and system status.

11695

Error

Check HotZone log, disk status, system status. Check HotZone device status. Delete old snapshot marker.

11696

Error

11698

Warning


Code
11699

Type
Warning

Text
CDP/Safecache(%1) is temporarily full, size (%2)MB. Console ([host name]): Failed to reinitialize snapshot resource (ID: %2) for virtual device (ID: %3). Console ([host name]): Failed to shrink snapshot resource for resource %2 (ID: %3). Deleting TimeMark %1 on virtual device %2 to maintain snapshot resource threshold is initiated. Failed to get TimeMark information to roll back to TimeMark %1 for virtual device %2. Snapshot marker %1 on CDP journal storage for %2 was deleted. Copying CDP journal data to %1 %2 (ID: %3) failed to start. Error: %4. Copying CDP journal to %1 %2 (ID: %3) failed to complete. Error: %4. Console ([host name]): Failed to suspend CDP Journal Resource %2 (ID: %3). Console ([host name]): Failed to get information for license activation. Console ([host name]): Failed to activate license (%2).

Probable Cause
CDP/Safecache is temporarily full. The snapshot resource could not be reinitialized, possibly due to a disk error.

Suggested Action
Increase the flush speed and expand cache size. Check if the snapshot resource is online. Check if the system is busy.

11701

Error

11706

Error

Shrinking of the snapshot resource failed possibly due to the system being busy. A TimeMark was deleted after a failed expansion in order to maintain the snapshot resource threshold.

Check system status and retry.

11707

Warning

Check disk status, available space. Check if system is busy. Try manual expansion if it is necessary. Retry later.

11708

Error

TimeMark information could not be retrieved for rollback possibly due to a pending TimeMark deletion operation.

11709

Warning

11711

Error

CDP Journal data could not be copied possibly due to the system being busy. Copying CDP Journal data failed possibly due to the system being busy. The CDP Journal for the resource could not be suspended possibly due to the system being busy. License registration information could not be retrieved. License registration failed.

Check system status.

11713

Error

Check system status.

11715

Error

Check system status.

11716

Error

Check that the license is registered and the public key is not missing. Check connectivity to the registration server; check that the file system is not read-only for creation of intermediary files.

11717

Error


Code
11719

Type
Warning

Text
This server upgrade is not licensed. Please contact FalconStor Support immediately. Console (%1): Failed to flush the TimeView cache data for TimeView resource %2. Failed to convert Snapshot Resource vid %1 due to %2

Probable Cause
License registration information could not be retrieved. The snapshot resource or the cache resource may be offline.

Suggested Action
Contact FalconStor Support.

11722

Error

Check the snapshot and cache resources status.

11724

Error

Snapshot Resource conversion failed.

If the error is due to insufficient space, add more storage. Otherwise, check the system log for more information about the failure. Expand the TimeView if it is due to insufficient space. Check the storage if it is due to I/O error. Check memory usage if it is due to insufficient memory. Check system status.

11728

Error

Console (%1): TimeView Data Conversion has failed for virtual device %2 (timestamp %3)%4

TimeView conversion may fail due to insufficient space, I/O error or insufficient memory.

11730

Error

Console ([host name]): Failed to suspend mirror for virtual device %2. Console ([host name]): Failed to update the replication parameters for virtual device %2 to server %3 (compression: %4, encryption: %5, MicroScan: %6) [Task %1] Snapshot creation for %2 %3 will proceed even if the Nearline mirror is out-of-sync on server %4. [Task %1] Snapshot creation / notification for %2 %3 will proceed even if the Near-line mirroring configuration cannot be retrieved from server %4.

Mirror synchronization could not be suspended possibly due to the system being busy. Replication properties could not be updated.

11738

Error

Check system disk status and system status.

11740

Warning

A snapshot is going to be created while the mirror is out-of-sync in a Near-line setup.

Synchronize the mirror of the primary disk on the primary server.

11741

Warning

The system tries to connect to the primary server to obtain the client configuration information when taking a snapshot. The snapshot is still created but the client is not notified.

Check connectivity between the primary server and Near-line server. Check if the primary server is busy.


Code
11742

Type
Warning

Text
[Task %1] Snapshot creation / notification for %2 %3 will proceed even if the Near-line mirroring configuration is invalid on server %4. Console ([host name]): Failed to updated mirror policy. Failed to create snapshot image %1 by snapshot marker for virtual device id %2. Failed to create snapshot image %1 by snapshot marker for group id %2. CDP/Safecache marker is full, failed to create new marker %1 request for vdev %2.

Probable Cause
The system attempts to check the primary server's configuration when a snapshot is taken on the Near-line server. If unsuccessful, the snapshot is still taken, but the data might not be valid. Virtual device mirroring policy could not be updated possibly due to the system being busy. The snapshot marker for virtual device could not be created, possibly due to a device error or the system being busy. The snapshot marker for group could not be created possibly due to a device error or the system being busy. The device has reached 256 TimeMarks, which is the maximum number supported if configured with CDP. Or the device has reached the maximum number (256) of unflushed TimeMarks in SafeCache. The group has reached 256 TimeMarks, which is the maximum number supported if configured with CDP. Or the group has reached the maximum number (256) of unflushed TimeMarks in SafeCache. The backup session duration time has exceeded the limit.

Suggested Action
Check primary disk configuration and status.

11761

Error

Check system status and retry later. Check the device status and the system status.

11764

Error

11765

Error

Check the device status and the system status.

11766

Warning

For CDP, make sure to keep fewer than 256 TimeMarks. For SafeCache, make sure fewer than 256 TimeMarks are unflushed. Check for possible storage issues that may be preventing CDP or SafeCache from flushing the data at a reasonable rate. For CDP, make sure to keep fewer than 256 TimeMarks. For SafeCache, make sure you have fewer than 256 unflushed TimeMarks. Check for a possible storage issue preventing CDP or SafeCache from flushing the data at a reasonable rate. None

11767

Warning

CDP/Safecache marker is full, failed to create new marker %1 request for group %2.

11768

Warning

Backup session closed for vdev %1 since absolute session duration %2 %3 was reached. The start time of the session was %4


Code
11769

Type
Warning

Text
Backup session closed for group %1 since absolute session duration %2 %3 was reached. The start time of the session was %4. Console ([host name]): Failed to get mutual chap user list. Console ([host name]): Failed to reset mutual chap secret for user %2. Console ([host name]): Failed to update mutual chap secret for user %2. Console ([host name]): Failed to add mutual chap secret user %2.

Probable Cause
The backup session duration time has exceeded the limit.

Suggested Action
None

11770

Error

The list of iSCSI Mutual CHAP Secret users could not be retrieved. The iSCSI Mutual CHAP Secret for a user could not be reset by root. The iSCSI Mutual CHAP Secret for a user could not be updated by root. The iSCSI Mutual CHAP Secret for a user could not be added by root, possibly due to a disk problem or the system being busy. The iSCSI Mutual CHAP Secret for a user could not be deleted by root. There is an invalid parameter specified in the report request. Report request parsing failed. Report type is invalid. Specified report type could not be created, possibly because the system was busy, out of space, or had a disk error. The security credentials for the failover operation are corrupted or deleted. This will not happen under normal operating conditions.

Check system status and retry later. Check system status and retry later. Check system status and retry later. Check system disk and system status.

11771

Error

11773

Error

11775

Error

11777

Error

Console ([host name]): Failed to delete mutual chap secret user %2. Failed to import report request. Failed to parse report request %1 %2. Undefined report type %1. Failed to create report file %2 (type %1).

Check log message for possible cause. Check parameters for report generation. Check parameters for report generation. Check parameters for report generation. Check system log message for possible cause.

11900 11901 11902 11910

Error Error Error Error

13300

Error

Failed to authenticate to the primary server -Failover Module stopped.

Reconfigure the failover set after re-establishing user credentials, e.g., reset the root password of both hosts and then reconfigure failover using the new root credentials.

Code
13301

Type
Error

Text
Failed to authenticate to the local server -Failover Module stopped. Failed to transfer primary static configuration to secondary. Failed to transfer primary dynamic configuration to secondary. Failed to transfer primary authentication information to secondary. Invalid failover configuration detected. Failover will not occur. Primary server failed to respond to command from secondary. Failed to add IP address [IP address].

Probable Cause
See 13300.

Suggested Action
See 13300.

13302

Error

Quorum disk failure.

Check failover quorum disk status. Check failover quorum disk status. Check failover quorum disk status. Check network to make sure the config file could be transferred. Check failover quorum disk status and network. Restart the network. If the problem persists, restart the OS or restart the machine (turn off then turn on the machine). If the problem still persists, you may need to reinstall the OS. If that is the case, make sure you properly save all IPStor configuration information before proceeding. None.

13303

Error

Quorum disk failure.

13307

Error

Quorum disk failure.

13308

Error

The primary configuration file is missing. Quorum disk or communication failure. This kind of problem should rarely happen. If it does, it may indicate a network configuration error, possibly due to system environment corruption. It is also possible that the network adapter failed or is not configured properly. It is also possible that the network adapter driver has a problem. During failover, the system may be holding the IP address of the failed server for longer than the failover module can wait. This is not a problem and the message can be ignored.

13309

Error

13316

Error

13317

Error

Failed to release IP address [IP address].

Code
13319

Type
Error

Text
Failed to stop IPStor Failover Module. Host may need to reboot.

Probable Cause
When any IPStor process cannot be stopped, it is typically due to insufficient system resources, an invalid state left by a server process that was not stopped properly, or an unexpected OS process failure that left the system in a bad state. This should happen very rarely. If frequently occurring, there may be external factors contributing to the behavior that should be investigated and resolved before running the server. See 13300.

Suggested Action
See 11240.

13320

Error

Failed to update the configuration files to the primary server [Error]. Failed to allocate memory -- Self-Monitor Module stopped.

See 13300.

13700

Error

When any server process cannot be started, it is most likely due to insufficient system resources, invalid state left by a process that may not have been stopped properly, or due to an unexpected OS process failure that left the system in a bad state. This should happen very rarely. If frequently occurring, there may be external factors contributing to the behavior that should be investigated and resolved before running the server. See 13317.

If system resources are low, use top to check the process that is using the most memory. If physical memory is below the IPStor recommendation, install more memory in the system. If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components, restart the server machine to make sure the OS is in a healthy state before trying again. See 13317.
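As a quick way to act on this suggestion, the following commands are a minimal shell sketch (they assume standard Linux procps tools; the exact memory recommendation depends on your configuration):

  # Overall memory usage in megabytes
  free -m
  # Top 10 processes by resident memory
  ps aux --sort=-%mem | head -n 11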

13701

Error

Failed to release IP address [IP address]. Retrying the operation. Failed to add virtual IP address: %1. Retrying the operation. Failed to stop IPStor SelfMonitor Module. Server module failure detected. Condition: %1.

13702

Error

There may be a network issue preventing the primary server from getting its virtual IP back during failback. See 13319. The secondary server has detected that one module has been stopped on the primary.

Check network configuration.

13703 13704

Error Error

See 13319. Check primary server status.

Code
13710

Type
Critical

Text
The Live Trial period has expired for server [Server name]. Please contact FalconStor or its representative to purchase a license. The following options are not licensed: [IPStor option]. Please contact FalconStor or its representative to purchase a license. Server failure detected. Failure condition: [Error].

Probable Cause
The live trial grace period has been exceeded.

Suggested Action
Contact FalconStor or a representative to obtain proper license.

13711

Critical

The specific option is not licensed properly.

Contact FalconStor or a representative to obtain proper license.

13800

Error

The primary server detected the failure condition described, which is being reported to the secondary server. It is waiting for the secondary server to decide whether it should take over. The virtual drive holding the quorum is no longer available due to the deletion of the first virtual drive when the system was in an inconsistent state.

None.

13804

Critical

Quorum disk failed to release to secondary.

This should rarely happen if the server is not in an experimental stage where drives are created and deleted randomly. Call Technical Support if it persists. Check the log for the specific error conditions encountered and correct the situation accordingly. The secondary server will take over anyway. Shut down the primary via power control or an auto shutdown script to avoid conflict. Check other messages in the log for more details and a more precise picture of the situation.

13817

Critical

Primary server failback was unsuccessful. Failed to update the primary configuration. Quorum disk negotiation disk failed.

The primary server failed to restore from the failover operation due to other conditions. The primary server failed to access the quorum disk.

13818

Critical

13820

Warning

Failed to retrieve primary server health information.

The secondary server cannot receive a heartbeat from the primary server. By trying to contact other network entities, the secondary server is determining whether the primary server is down or whether the secondary server itself is isolated from the network.

Code
13821

Type
Error

Text
Failed to contact other entities in network. Takeover is not initiated assuming failure is on this server.

Probable Cause
The secondary server failed to receive a heartbeat from the primary and failed to contact any other network entities in the subnet. This network problem prevents the secondary server from taking over the primary server. When the primary reports a storage connectivity problem, the secondary will try to determine if it has better connectivity. If it is not 100% healthy, e.g., it fails to connect to all storage devices, it will not take over. Failover waiting time period is too short or the partner is not fully operational.

Suggested Action
Check the secondary server network status.

13822

Critical

Secondary will not take over because storage connectivity is not 100%.

Check the storage connectivity for both the primary and secondary to correct the situation. See 11204 for checking storage.

13823

Warning

Partner server failed to acknowledge takeover request in time. This server will forcefully take over the partner. Failed to stop quorum updating process. PID. Maybe due to storage device or connection failure. Almost running out of file handlers (current [Number of handles], max [Number of handles]). Almost running out of memory (current [Number of KB] K, max [Number of KB]). Get configuration file from storage failed. Server operation is resumed either because the user initiated an action, or the partner server was suspended.

Check failover environment parameter settings and the partner status.

13827

Error

There may be a storage device or connection failure.

Check storage connectivity.

13828

Warning

The operating system is running out of resources for file handles.

Determine the appropriate amount of memory required for the current configuration and applications. Check for any process that is leaking memory. See 13828.
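A minimal sketch of how to check the current file-handle and memory headroom (these are standard Linux checks, not CDP/NSS-specific commands; /proc/sys/fs/file-nr reports allocated, unused, and maximum handles):

  # System-wide file handle usage: allocated, unused, maximum
  cat /proc/sys/fs/file-nr
  # Per-process open-file limit for the current shell
  ulimit -n
  # Overall memory usage in megabytes
  free -m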

13829

Warning

The operating system is running out of memory.

13830 13832

Error Error

There may be a storage device or connection failure. The failed server was forced to come back.

Check storage connectivity. Check the primary and partner server status.

Code
13833

Type
Error

Text
Failed to back up file from [source] to [target location]. Failed to copy file out from Quorum repository. Failed to take over primary. Failed to get configuration files from repository. Check and correct the configuration disk. Secondary server does not match primary server status.

Probable Cause
There may be a storage device or connection failure. There may be a storage device or connection failure on the quorum disk. The secondary server is not completely functional. There may be a storage device or connection failure on the quorum disk.

Suggested Action
Check storage connectivity.

13834

Error

Check storage connectivity.

13835 13836

Error Error

Check secondary server status. Check storage connectivity.

13841

Error

Takeover is in progress but the primary server is not in DOWN or READY status.

Check primary server status. It may have temporarily been in an inconsistent state; if its status is still not DOWN or READY, check if the sm module is running. None.

13842

Warning

Secondary server will takeover. Primary is still down. Secondary server failed to get original conf file from repository before failback. Failed to write to repository. Quorum disk failure detected. Secondary is still in takeover mode. Primary is already shut down. Secondary will take over immediately. One of the heartbeat channels is down: IP address [IP]

The primary server failed.

13843

Error

There may be a storage device or connection failure on the quorum disk. There may be a storage device or connection failure on the quorum disk. There may be a storage device or connection failure on the quorum disk. Failover occurred.

Check storage connectivity.

13844

Error

Check storage connectivity.

13845

Warning

Check storage connectivity.

13848

Warning

None.

13849

Warning

Lost heartbeat IP information.

Check network connections.

Code
13850

Type
Error

Text
Secondary server can not locate quorum disk. Either the configuration is wrong, or the drive is offline. Secondary server can't take over due to [Reason]. Secondary notified primary to go up because secondary is unable to take over.

Probable Cause
There may be a storage device or connection failure on the quorum disk.

Suggested Action
Check storage connectivity.

13851

Error

The secondary server cannot take over due to the indicated reason. The secondary server detected a failure on the primary server, and also detected a failure on itself. Therefore, takeover of the primary does not occur. This might happen if the primary was just booting up. There is a heartbeat communication issue between failover partners. There was a primary server failure or network communication is broken. This may be because of inconsistent failover node configuration files when merging them after restore. The file name already exists or the file system is inconsistent or read-only. There might be a storage device or connection failure on the quorum disk. Forced primary recovery. Forced server down. Secondary server is not in a good state. Server configuration is not consistent.

Take action based on the reason given. Check the status of both servers.

13853

Error

13856

Error

Secondary server failed to communicate with primary server through IP. Secondary server failed to communicate with remote mirror. Failed to merge configuration file.

Check network connections.

13858

Critical

Check server and network connections. Check the server configuration.

13860

Error

13861

Error

Failed to rename file from %1 to %2. Failed to write file %1 to repository. Primary server is commanded to resume. This server operation will terminate. Secondary server failed to take over. Primary server has invalid failover configuration.

Check the file system.

13862

Error

Check storage connectivity.

13863 13864 13877 13878

Critical Critical Error Error

Check the status of failover servers. Check server status. Check secondary server. Check failover setup configuration.

Code
13879

Type
Critical

Text
Secondary server detected kernel module failure; you may need to reboot server %1. Secondary server has detected communication module failure. Failover is not initiated. error [Error]. Secondary server will terminate failover module. error [Error]. Primary server quorum disk may have problem. error [Error]. Secondary server is temporarily busy. Partner server failure detected: [Failure] (timestamp [Date Time]) Partner server power control device status: %1. Server failure detected

Probable Cause
Unexpected kernel module error happened.

Suggested Action
Reboot the secondary server.

13880

Critical

Unexpected error happened in comm module so failover will not occur.

Check server modules status.

13881

Error

Forced fm module to stop.

Check server status.

13882

Error

There might be a storage device or connection failure on the quorum disk. The secondary server has a heavy load perhaps due to I/O or TimeMark operations. The server detected the specified failure condition on the partner that can result in a failover. This server regularly checks commuication with power control device on the partner server. The server has detected a failure on the partner resulting in a failover. A manual takeover has occurred. This server is not able to take over the partner due to a power control failure or a missing configuration file. This may be due to a storage device or connection failure on the quorum disk. Allocation block size is not set to the same value on servers in a failover setup.

Check storage connectivity.

13888

Warning

Check server status.

13895

Critical

Check the failover condition to resolve the issue.

13896

Warning

In case the status is not OK, check the partner server.

13897

Critical

Check the partner server status. Ignore the message if this is an expected action. Check power control status and failover configuration file. Check storage connectivity.

13898 13900

Warning Critical

Manual takeover occurred. This server failed to take over due to %1.

13901

Error

Failed to read %1 from Configuration Repository. Allocation block size mismatch between failover partner(local %1 remote %2)

13909

Critical

Set the environment parameter to the same value.

Code
13910

Type
Error

Text
Failover status in quorum may not be right on this server: %1. Failover is configured without a power control option enabled.

Probable Cause
Failover status could not be updated on the quorum disk due to a storage device or connection failure on the disk. A reliable physical power control option such as IPMI or HP iLO is not implemented in a failover setup. A reliable physical power control option such as IPMI or HP iLO is not implemented in a failover setup. A reliable physical power control option such as IPMI or HP iLO is not implemented in a failover setup. A reliable physical power control option such as IPMI or HP iLO is not implemented in a failover setup. Forceful takeover happened without powering off the primary server. The server fsnupd terminated abnormally. If this is a failover set, the secondary server may be able to take over. The server ipstorcomm terminated abnormally. If this is a failover set, the secondary server might take over. The server NAS terminated abnormally. If this is a failover set, the secondary server may be able to take over. The server iscsi terminated abnormally. If this is a failover set, the secondary server may be able to take over.

Suggested Action
Check storage connectivity.

13912

Warning

In order to avoid any outage by single node failure, configure the power control option using a physical power control device. Either fix the issue on the partner or manually take over after shutting down the partner. In order to avoid any outage by single node failure, configure the power control option using a physical power control device. Either fix the issue on the partner or manually take over after shutting down the partner. Fix your power control equipment.

13913

Critical

This server cannot take over the partner due to a missing power control device. Failover is configured without a power control option enabled.

13916

Warning

13917

Critical

This server has no power control device.

13918

Critical

Forceful takeover will continue even though power control is not functioning. Local server failure detected: fsnupd terminated abnormally. Local server failure detected: ipstorcomm terminated abnormally. Local server failure detected: NAS terminated abnormally. Local server failure detected: iscsi terminated abnormally.

13919

Critical

Contact Technical Support for possible cause.

13920

Critical

Contact Technical Support for possible cause.

13921

Critical

Contact Technical Support for possible cause.

13922

Critical

Contact Technical Support for possible cause.

Code
13923

Type
Critical

Text
Local server failure detected: istor terminated abnormally. Local server failure detected: permanently stop log module due to too many failures. Local server failure detected: log module terminated abnormally. Local server failure detected: log module restart due to memory exceed maximum size fail. Local server failure detected: The auth module has permanently stopped due to too many auth module failures. Local server failure detected: auth module terminated abnormally.

Probable Cause
The server istor terminated abnormally. The server permanently stopped the log module due to too many failures. The server log module terminated abnormally. The server log module restarted because memory exceeded the maximum threshold. The auth module has permanently stopped because of too many auth module failures. The connection authentication module has stopped. In a failover setup, the partner server will try to take over this server.

Suggested Action
Contact Technical Support for possible cause. Contact Technical Support for possible cause.

13924

Critical

13925

Critical

Contact Technical Support for possible cause. Contact Technical Support for possible cause.

13926

Critical

13927

Critical

Contact Technical Support for possible cause.

13928

Critical

If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure. If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure. If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure. If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure.

13929

Critical

Local server failure detected: iscliproxy terminated abnormally.

13930

Critical

Local server failure detected: ioserver process terminated abnormally. Local server failure detected: ioctl_mgr terminated abnormally.

13931

Critical

Code
13932

Type
Critical

Text
Local server failure detected: downstream terminated abnormally.

Probable Cause
An IO core thread has stopped. In a failover setup, the partner server will try to take over this server. An IO core thread has stopped. In a failover setup, the partner server will try to take over this server. An IO core thread has stopped. In a failover setup, the partner server will try to take over this server. An IO core thread has stopped. In a failover setup, the partner server will try to take over this server. The downstream management module has stopped. In a failover setup, the partner server will try to take over this server. A Fibre Channel HBA port has changed to a state of link down. In a failover setup, the partner server will try to take over this server. The server cannot access one or more storage arrays. In a failover setup, the partner server will try to take over this server. The configuration file does not have correct Fibre Channel settings; for example, FC clients are detected but no FC adapter is set to target mode.

Suggested Action
If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure. If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure. If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure. If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure. If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure. Check the cables and make sure the FC GBIC is correctly plugged in. You may need to replace cables or the GBIC. Determine the cause of the storage failure (i.e. FC port link down, storage array down, connectivity failure) and correct the issue. Check configuration of FC HBAs.

13933

Critical

Local server failure detected: control_from_user terminated abnormally. Local server failure detected: control_from_kernel terminated abnormally. Local server failure detected: ioctl_evt terminated abnormally.

13934

Critical

13935

Critical

13936

Critical

Local server failure detected: kfsnbase terminated abnormally.

13937

Critical

Local server failure detected: Fibre Channel link down detected.

13938

Critical

Local server failure detected: storage connectivity failure.

13939

Critical

Local server failure detected: pipe full, restart comm.

Code
13940

Type
Critical

Text
Local server failure detected: invalid fc configuration in conf file.

Probable Cause
The configuration file does not have correct Fibre Channel settings; for example, FC clients are detected but no FC adapter is set to target mode. An IOCTL function was not processed in time.

Suggested Action
Check configuration of FC HBAs.

13941

Critical

Local server failure detected: ioctl stuck: pid %1 function %2 seconds %3. Secondary server will continue takeover with exception:%1

Check the system log to get more information about the reason for the function failure. Check to see if this context is expected.

13942

Warning

The partner server took over this server under a special context such as a manual takeover. The takeover process may not be perfect due to issues with quorum mirror segments. The device physical layout does not match between the failover servers' configurations. The IO core module has failed.

13943

Warning

Secondary server detect primary server has a flaw during taking over:%1. Secondary server detect physical layout mismatch with primary server. This server has detected ioserver failure.

Check configuration repository and its mirror.

13944

Critical

Check storage configuration.

13945

Critical

Reboot the server. Check the system log to get more information about the reason for the module failure. You may need to reboot. Check the system log to get more information about the reason for the failure. Check the system log to get more information about the reason for the failure. Check source and target virtual devices.

13946

Error

Secondary server failed to prepare file during takeover: %1 Secondary server will continue manual takeover with exception: %1 Snapshot copy failed to start because of invalid input arguments.

The takeover process may not be perfect due to issues with configuration file processing. The takeover process may not be perfect due to issues with configuration files. The source virtual device or the destination virtual device cannot be accessed when creating a snapshot and copying it. The source virtual device or the destination virtual device cannot be accessed when creating a snapshot and copying it.

13947

Critical

15000

Error

15002

Error

Snapshot copy from virtual device id %1 to id %2 failed because it could not open file %3.

Check source and target virtual devices.

Code
15003

Type
Error

Text
Snapshot copy from virtual device id %1 to id %2 failed because it failed to allocate (%3) memory. Snapshot copy from virtual device id %1 to id %2 failed because an error occurred when writing to file %3, errno is %4. Snapshot copy from virtual device id %1 to id %2 failed because an error occurred when lseek in file %3, errno is %4. Snapshot copy from virtual device id %1 to id %2 failed because an error occurred when reading from file %3, errno is %4. Snapshot copy from virtual device id [Device ID] to id [Device ID] might have run out of snapshot reserved area. Please expand the snapshot reserved area. TimeMark copy failed to start because of invalid input arguments. TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because it failed to open file %4. TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because it failed to allocate (%4) memory.

Probable Cause
Memory is low.

Suggested Action
Check server memory amount and usage.

15004

Error

The source virtual device or the destination virtual device cannot be accessed when creating a snapshot and copying it. The source virtual device or the destination virtual device cannot be accessed when creating a snapshot and copying it. The source virtual device or the destination virtual device cannot be accessed when creating a snapshot and copying it. The snapshot copy operation failed and is most likely due to insufficient snapshot resource area that cannot maintain the snapshot.

Check source and target virtual devices.

15005

Error

Check source and target virtual devices.

15006

Error

Check source and target virtual devices.

15008

Error

Increase the snapshot resource or create the snapshot copy while the virtual drive is not being actively written to.

15016

Error

The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark. The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark. Memory is low.

Check source and target virtual devices.

15018

Error

Check source and target virtual devices.

15019

Error

Check server memory amount and usage.

Code
15020

Type
Error

Text
TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because an error occurred when writing to file %4, errno is %5. TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because an error occurred when lseek in file %4, errno is %5. TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because an error occurred when reading from file %4, errno is %5. TimeMark copy from virtual device id [Device ID] snapshot image [TimeMark name] to id [Device ID] might have run out of snapshot reserved area. Please expand the snapshot reserved area. TimeMark rollback failed to start because of invalid input arguments. TimeMark rollback for virtual device id %1 to snapshot image %2 failed because it failed to open file %3. TimeMark rollback for virtual device id [Device ID] to snapshot image [TimeMark name] failed because it failed to allocate ([Kilobytes]) memory.

Probable Cause
The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark.

Suggested Action
Check source and target virtual devices.

15021

Error

The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark.

Check source and target virtual devices.

15022

Error

The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark.

Check source and target virtual devices.

15024

Warning

The TimeMark copy operation failed and is most likely due to insufficient snapshot resource area that cannot maintain the snapshot.

Increase the snapshot resource or create a TimeMark copy while the virtual drive is not being actively written to.

15032

Error

The source virtual device or the destination virtual device cannot be accessed. The source virtual device or the destination virtual device cannot be accessed.

Check source and target virtual devices. Check source and target virtual devices.

15034

Error

15035

Error

The memory resource in the system is running low. The system cannot allocate enough memory to perform the rollback operation.

Stop unnecessary processes or delete some TimeMarks and try again. If this happens frequently, increase the amount of physical memory to an adequate level.

Code
15036

Type
Error

Text
TimeMark rollback for virtual device id %1 to snapshot image %2 failed because an error occurred when writing to file %3, errno is %4. TimeMark rollback for virtual device id %1 to snapshot image %2 failed because an error occurred when lseek in file %3, errno is %4. TimeMark rollback for virtual device id %1 to snapshot image %2 failed because an error occurred when reading from file %3, errno is %4. TimeMark rollback for virtual device id [Device ID] to snapshot image [TimeMark name] might have run out of snapshot reserved area. Please expand the snapshot reserved area. TimeMark rollback for virtual device id %1 to snapshot image %2 failed because an error occurred while getting TimeMark extents. Server IO cpl call UPDATE_TimeMark failed on vdev id [Device ID]: Invalid Argument

Probable Cause
The source virtual device or the destination virtual device cannot be accessed.

Suggested Action
Check source and target virtual devices.

15307

Error

The source virtual device or the destination virtual device cannot be accessed.

Check source and target virtual devices.

15308

Error

The source virtual device or the destination virtual device cannot be accessed.

Check source and target virtual devices.

15040

Error

The snapshot resource area is used for the rollback process. If the resource is too low, it will affect the rollback operation.

Expand the snapshot resource to an adequate level.

15041

Error

This might be due to snapshot resource device error.

Check snapshot resource device.

15050

Error

A TimeMark-related function call returned an error. For example, if you get this error during a TimeMark copy, it is most likely due to insufficient snapshot resource space.

Check the system log and take action based on the related function call that has failed. For TimeMark copy failure, expand the snapshot resource to an adequate level. Check if TimeMark or Replication successfully completed. If not, manually run TimeMark or Replication after expanding the snapshot resource.

Code
15051

Type
Error

Text
Server ioctl call %1 failed on vdev id %2: I/O error (EIO). Server ioctl call %1 failed on vdev id %2: Not enough memory space (ENOMEM). Server ioctl call %1 failed on vdev id %2: No space left on device (ENOSPC). Server ioctl call %1 failed on vdev id %2: Already existed (EEXIST). Server ioctl call [Device ID] failed on vdev id [Device ID]: Device or resource is busy (EBUSY). Server ioctl call %1 failed on vdev id %2: Operation still in progress (EINPROGRESS). Failed to create TimeMark for group %1. Failed to delete TimeMarks because they are in rollback state. Failed to delete TimeMarks because TimeMark operation is in progress to get TimeMark information. Group cache/CDP journal is enabled for virtual device %1, vdev signature is not set for VSS. Failed to update the configuration of the Primary Disk %1 for Near-line Recovery.

Probable Cause
The virtual drive is not responsive to IO requested by the upper layer. The virtual drive is not responsive to the upper layer calls because of a low memory condition. The virtual drive is not responsive to upper layer calls due to insufficient free space. The operation may have already been executed or is in conflict with an existing operation. The virtual drive is busy with I/O and not responsive to the upper layer calls.

Suggested Action
Try again after checking devices. Check system memory.

15052

Error

15053

Error

Check free space on physical and virtual devices. Check operation results.

15054

Error

15055

Error

Try again when the system is less busy or determine the cause of the high activity and correct the situation if necessary. Try again when the system is less busy or determine the cause of the high activity and correct the situation. Check group members. Try again.

15056

Error

The virtual drive is busy with I/O and not responsive to the upper layer calls. TimeMark cannot be created on all group members. TimeMarks are in rollback state. TimeMark operation is in progress.

16002 16003

Error Error

16004

Error

Try again.

16010

Error

Virtual device is not VSS aware.

Select the right virtual device for VSS operation.

16106

Error

Near-line storage device might have a problem.

Check the server connection of the near-line pair.

Code
16107

Type
Error

Text
Failed to update the configuration of the Nearline Disk %1 for Near-line Recovery. Failed to start TimeMark rollback on Near-line Disk %1 for Near-line Recovery. Failed to assign the Primary Server to Nearline Disk %1 to resume the Near-line Mirroring configuration. Failed to update the configuration of the Primary Disk %1 to resume Near-line Mirroring configuration. Failed to update the configuration of the Nearline Disk %1 to resume Near-line Mirroring configuration. Failed to update the configuration of the Primary Disk %1 for Near-line Replica Recovery. Failed to update the configuration of the Nearline Disk %1 for Near-line Replica Recovery. Failed to update the configuration of the Nearline Replica %1 for Nearline Replica Recovery. Failed to start TimeMark rollback on Near-line Replica %1 for Near-line Replica Recovery.

Probable Cause
Near-line storage device might have a problem.

Suggested Action
Check the server connection of near-line pair.

16108

Error

Near-line storage device might have a problem.

Check the TimeMark status and server connection of the near-line pair. Check the server status and retry.

16109

Error

The ioctl call may fail because the server is busy or due to an assignment error from Fibre Channel or iSCSI, depending on the protocol. Near-line storage device might have a problem.

16110

Error

Check the server connection of the near-line pair.

16111

Error

Near-line storage device might have a problem.

Check the server connection of the near-line pair.

16120

Error

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16121

Error

Storage device might have a problem.

Check the server connection of the near-line pair and replica server. Check the server connection of the near-line pair and replica server. Check the server connection of the near-line pair and replica server.

16122

Error

Storage device might have a problem.

16123

Error

Storage device might have a problem.

Code
16124

Type
Error

Text
Failed to update the configuration of the Primary Disk %1 to resume the Near-line Mirroring configuration. Failed to update the configuration of the Nearline Disk %1 to resume the Near-line Mirroring configuration. Failed to update the configuration of the Nearline Replica %1 to resume the Near-line Mirroring configuration. Console ([host name]): Failed to modify Fibre Channel client (%2) WWPN from %3 to %4. Failed to add storage to the thin disk %1 (error code %2). Failed to add storage to the thin disk %1. The virtual device is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB, which exceeds the limit. The virtual device %1 is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB. Only %5 MB will be added to the thin disk. Out of disk space to add storage to the thin disk %1.

Probable Cause
Storage device might have a problem.

Suggested Action
Check the server connection of the near-line pair and replica server.

16125

Error

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16126

Error

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16200

Error

There may be duplicate WWPNs.

Check FC WWPNs.

16211

Error

There may not be enough storage available. Quota limit is reached.

Check storage capacity.

16212

Error

Check user quota.

16213

Error

Quota limit is reached.

Check user quota.

16214

Error

There is not enough storage available.

Check storage capacity.

Code
16215

Type
Error

Text
Failed to add storage to the thin disk %1: maximum segment exceeded (error code %2). Console ([host name]): Failed to update the thin disk properties for virtual device %2 (threshold: %3, increment: %4) Console ([host name]): Failed to modify the thin disk size for virtual device %2 to %3 MB (%4 sectors). Console ([host name]): Failed to add storage to the thin disk %2. Failed to expand TimeView %1 (error code %2) Not enough disk space is available to expand TimeView %1

Probable Cause
There is not enough storage available.

Suggested Action
Check storage capacity.

16217

Error

Parameter values might be inconsistent.

Check parameters.

16219

Error

Thin disk expansion failed possibly due to a device error or the system being busy.

Check device status and system status; then try again.

16220

Error

There is not enough storage available.

Check storage capacity.

16223

Error

16224

Error

1) The physical storage for the TimeView Resource has run out of space. 2) Allocation block size is enabled, which may require more space than the actual expansion size.

1) Check the physical amount of space for public and/or storage pools. Add more storage as needed. 2) Check if the allocation block size is enabled. Add more storage as needed. Contact Tech Support.

16225

Error

Failed to expand TimeView %1: maximum segment exceeded (error code %2) Failed to expand TimeView %1. The TimeView is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB, which exceeds the limit.

16226

Error

Contact Tech Support.

Code
16227

Type
Error

Text
The TimeView %1 is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB. Only %5 MB will be added to the TimeView. Failed to initialize report scheduler configuration. Failed to start report scheduler.

Probable Cause

Suggested Action
Contact Tech Support.

16232 16234

Error Error

The system might be busy or disk space is running low. The system might be busy and takes longer to start.

Check system resource usage and disk usage. Check if the CLI proxy server module has started. Restart the comm module if the proxy server module is not started. Check to see if the CLI proxy server module is stopped. Retry later. Check system resource usage and disk usage. Retry later. Check system status.

16236 16238 16240 16242 16252

Error Error Error Error Error

Failed to stop report scheduler. Failed to retrieve report schedule(s). Failed to add / update report schedule(s). Failed to remove report schedule(s). Failed to initialize statistics log scheduler configuration. Failed to start statistics log scheduler. Failed to stop statistics log scheduler. Failed to retrieve statistics log schedules. Failed to add / update statistics log schedule(s).

The system might be busy and takes longer to stop. The system might be busy. The system might be busy or disk space is running low. The system might be busy. The statistics scheduler thread could not be started possibly due to being configured incorrectly or system status. The statistics scheduler thread could not start to collect information. Statistics scheduler thread could not stop possibly due to the system being busy. Statistics schedules could not be retrieved possibly due to the system being busy. Statistics schedules could not be updated possibly due to the system being busy.

16254

Error

Check system status.

16256

Error

Check system status.

16258

Error

Check system status.

16260

Error

Check system status.

Code
16262

Type
Error

Text
Failed to remove statistics log schedule(s). Failed to remove TimeView data resource %1. Rescan replica cannot proceed due to replication already in progress. Rescan replica cannot proceed due to replication control area missing. Rescan replica cannot proceed due to replication control area failure. Replication cannot proceed due to replication control area failure. Replication cannot proceed due to replication control area failure. Rescan replica cannot proceed due to replication control area failure. Rescan replica failed due to network transport error. Replicating replica failed due to network transport error. Rescan replica failed due to local disk error. Replication failed due to local disk error.

Probable Cause
Statistics schedules could not be removed possibly due to the system being busy. Removing the TimeView data resource failed possibly because the system was busy. Rescan cannot be performed when replication is in progress. There may be a storage problem.

Suggested Action
Check system status.

16421

Error

Check the system status and retry. Wait for the process to complete before trying again or change the replication schedule. Check the virtual device layout and storage devices for missing segments. Check the virtual device layout and storage devices for missing segments. Check the virtual device layout and storage devices for missing segments. Check the virtual device layout and storage devices for missing segments. Check the virtual device layout and storage devices for missing segments. Check network condition between the servers.

17001

Error

17002

Error

17003

Error

There may be a storage problem.

17004

Error

There may be a storage problem.

17005

Error

There may be a storage problem.

17006

Error

There may be a storage problem.

17011

Error

Rescan for differences requires connecting to the replica server. A network issue can cause rescan to fail. Replication failed due to a network condition. Rescan encountered a disk I/O error from the source disk. Replication encountered a disk I/O error from the source disk.

17012

Error

Check network condition between the servers. Check the storage device or system in the source server. Check the storage device or system in the source server.

17013 17014

Error Error

Code
17015

Type
Error

Text
Replication failed because local snapshot used up all of the reserved area. Replication failed because the replica snapshot used up all of the reserved area. Failed to rescan failover secondary server after preparing disk on primary. The following physical devices are not found on the failover partner. Primary Server: %1, Secondary Server: %2, Physical Devices SCSI Addresses: %3. Failed to rescan failover secondary server after importing disk on primary. The following physical devices are not found on the failover partner. Primary Server: %1 , Secondary Server: %2, Physical Devices SCSI Addresses: %3. Failed to open file %1. Failed to add user %1 to the NAS server.

Probable Cause
Replication failed because the snapshot from the source drive could not be maintained due to low snapshot resources. Replication failed because the snapshot from the replica drive could not be maintained due to low snapshot resource space. Failed to update the physical devices on failover secondary server

Suggested Action
Expand the snapshot resource for the source device. Expand the snapshot resource for the replica device. Make sure the physical devices are set up properly on the failover partner server and rescan the physical resources to refresh the configuration.

17016

Error

19007

Warning

19008

Warning

Failed to update the physical devices on failover secondary server.

Make sure the physical devices are set up properly on the failover partner server and rescan the physical resources to refresh the configuration.

31003 31004

Error Error

The specified file does not exist. When adding the username and UID into the file /etc/passwd, one of the following errors occurred:
- nasgrp is not in /etc/group
- the username already exists in /etc/passwd
- the file /etc/passwd cannot be updated

Check the file existence. Check that the nasgrp group exists with the command "getent group | grep nasgrp". If it does not exist, add it using the command "groupadd nasgrp". If the username is new and the group nasgrp already exists, check to make sure the file system does not have an issue by creating a test file under /etc. If the file cannot be created, reboot the server to trigger a file system check.
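For reference, the checks above can be run as in this minimal shell sketch (the nasgrp group name and the /etc test location come from this entry; run as root):

  # Verify the nasgrp group exists; add it if missing
  getent group nasgrp || groupadd nasgrp
  # Verify the file system holding /etc is writable
  touch /etc/.ipstor_test && rm /etc/.ipstor_test && echo "file system OK"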

Code
31005

Type
Error

Text
Failed to allocate memory.

Probable Cause
Memory is low.

Suggested Action
Check system memory usage to make sure enough memory is reserved for user-mode operations, especially if you have NAS enabled. Run the command "cat /proc/meminfo" to check that ((MemFree+Buffers+Cached)/MemTotal) is not less than 10%. Investigate to determine the cause of high memory usage. Run "lsof /nas/<resource>" to check the process that opens the device. If the process exists, then manually kill it. If no process opens the device, then you may need to reboot the server.
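A minimal shell sketch of the memory check described above (it assumes the standard Linux /proc/meminfo layout with values in kB; the 10% threshold is the one given in this entry, and <resource> is a placeholder for the NAS resource name):

  # Print ((MemFree+Buffers+Cached)/MemTotal) as a percentage; investigate if below 10%
  awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} /^Buffers:/ {b=$2} /^Cached:/ {c=$2} END {printf "free+buffers+cached = %.1f%% of total\n", (f+b+c)*100/t}' /proc/meminfo
  # For the unmount case: find the process holding the NAS resource open
  lsof /nas/<resource>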

31011

Error

IPSTORUMOUNT: Failed to unmount %1.

When unmounting a NAS file system, one of the following errors occurred:
- the mount path is not from /nas
- the umount process cannot be forked
- the NAS file system is busy and cannot be unmounted
- /etc/mtab cannot be locked temporarily
When mounting a NAS file system, one of the following errors occurred:
- failed to get the vdev name by vid from ipstor.conf; this can happen if ipstor.conf cannot be read or does not contain VirtualDevConnection info
- unmount failed (see 31011); an unmount occurs when the mount path is duplicated
- the NAS file system failed to be mounted

31013

Error

IPSTORMOUNT: Failed to mount %1.

Check whether you can open ipstor.conf. Try to create a test file under $ISHOME/etc/$HOSTNAME; if the file cannot be created, written, or read, the file system may be corrupted. Reboot the server to trigger a file system check. Check whether the vdev, vid, and VirtualDevConnection information is correct in ipstor.conf. Try to manually mount the NAS device to a test folder (e.g., /mnt/test); an error displays if the mount fails. Try to create a test file in the indicated path. If the file cannot be created, written, or read, reboot the server to trigger a file system check.
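A sketch of the manual mount test described above. The /dev/vbdiXX device name is a placeholder following the vbdi naming used elsewhere in this table and must be replaced with the actual device for the NAS resource; /mnt/test is just an example mount point:

  # Try mounting the NAS device on a scratch mount point
  mkdir -p /mnt/test
  mount /dev/vbdiXX /mnt/test    # an error here points to a device or file system problem
  # Verify the file system is writable, then clean up
  touch /mnt/test/.write_test && rm /mnt/test/.write_test
  umount /mnt/test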

31017

Error

Failed to write to file %1.

The file system may be inconsistent.

Code
31020

Type
Error

Text
Failed to rename file [File name] to file [File name]. IPSTORNASMGTD: Failed to create file [File name]. IPSTORNASMGTD: Failed to lock file [File name]. IPSTORNASMGTD: Failed to open file [File name]. Failed to lock file [File name]. Failed to create file [File name]. Failed to create directory [Directory name]. Failed to remove directory [Directory name]. Failed to execute program '[Program name]'.

Probable Cause
The file system is full or system resources are critically low. See 31020.

Suggested Action
Try removing some unnecessary files (e.g., logs or cores). See 31020.

31023

Error

31024

Error

Some processes exited without an unlock file. One of the configuration files is missing. Some processes exited without an unlock file. The file system is full or system resources are critically low. The file system is full or system resources are critically low. Some other process might be accessing the directory. When any server process cannot be started, it is most likely due to insufficient system resources, an invalid state left by a server process that may not have been stopped properly, or an unexpected OS process failure that left the system in a bad state. This should happen very rarely. If it occurs frequently, there may be external factors that contribute to the behavior that must be investigated and removed before running the server.

Restart the server modules.

31025

Error

Make sure the package is installed properly. Restart the server modules. Try removing some unnecessary files like logs or cores. Try removing some unnecessary files like logs or cores. Try stopping some running processes or exiting out of existing logins. If system resources are low, use top to determine the process using the most memory. If physical memory is below the CDP/NSS recommendation, install more memory in the system. If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components, restart the server to make sure the OS is in a healthy state before trying again. Check whether there is any core file under $ISHOME/bin that indicates a process error. Restart the server modules.

31028 31029

Warning Error

31030

Error

31031

Error

31032

Error

31034

Warning

Local IPStor SAN Client is not running.

The Client is not running properly.

Code
31035

Type
Error

Text
Failed to add group [Group name] to the NAS server. Failed to delete user [User name] from the NAS server. Error accessing NAS Resource state file for virtual device [Device number]. Failed to rename file [File name] to file [File name]. Failed to create the NAS Resource. Failed to allocate SCSI disk device handle - operating system limit reached. Exceed maximum number of reserved NAS users. Exceed maximum number of reserved NAS groups. Failed to setup password database. Failed to make symlink from [File name] to [File name]. Failed to update /etc/ passwd. Failed to update /etc/ group.

Probable Cause
The number of reserved group IDs are used up.

Suggested Action
Add additional ranges from Console -> NAS Clients -> Windows Clients -> UID/GID. Kill any running process that belongs to an account that you are deleting. No action needed.

31036

Error

User being deleted is currently logged in. System had an unclean shutdown.

31037

Error

31039

Error

File system is full.

Try removing some unnecessary files like logs or cores. Refer to the documentation on how to rebuild the kernel to support more SCSI devices.

31040

Error

OS limit reached.

31041

Error

The number of reserved user IDs is used up.

Add additional ranges from Console -> NAS Clients -> Windows Clients -> UID/GID. Add additional ranges from Console -> NAS Clients -> Windows Clients -> UID/GID. Try removing some unnecessary files (e.g., logs or cores). Try removing some unnecessary files (e.g., logs or cores). Restart the server modules. Restart the server modules.

31042

Error

The number of reserved user IDs is used up.

31043

Error

See 31020.

31044

Error

See 31020.

31045 31046

Error Error

Some processes exited without unlocking file. Some processes exited without unlocking file.

Code
31047

Type
Error

Text
Synchronization daemon is not running.

Probable Cause
Someone manually stopped the process.

Suggested Action
If system resources are low, run 'top' to check the process that is using the most memory. If physical memory is below the server recommendation, install more memory on the system. If the OS is suspected to be in a bad state due to unexpected failure in either hardware or software components, restart the server machine. Make sure all of the physical devices are connected and powered on correctly and restart the server modules. If the Console shows that the NAS resource is attached but not mounted, you might need to reformat this NAS resource. This will remove all data on the drive. Kill any running processes which might be accessing the mount point. Restart the server modules. Restart the server modules.

31048

Error

Device [Device number] mount error.

Failed to attach to the SAN device provided by the local client module or the file system is corrupted.

31049

Error

Device [Device number] umount error. Failed to detach device vid [Device number]. Failed to attach device vid [Device number].

Some other process might be accessing the mount point. The client module is not running properly. Failed to attach the SAN device provided by the local client module or the file system is corrupted. Failed to get hostname with the function gethostname. Samba authentication server not accessible.

31050 31051

Error Error

31054

Error

Failed to get my hostname. SAM: connection failure.

Check that a host exists with that name and that the name is resolvable. Check if the auth server is up and running or whether the name of the server is set up correctly from the Console. None.
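A minimal shell sketch of these name-resolution checks (getent and ping are standard Linux tools; <auth_server> is a placeholder for the authentication server configured in the Console):

  # Confirm the local host name resolves
  hostname
  getent hosts "$(hostname)"
  # Confirm the configured authentication server is reachable
  ping -c 3 <auth_server>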

31055

Error

31056

Warning

Delay mount due to unclean file system on vid [Device number].

During failover, the secondary is waiting for a specific amount of time until the primary unmounts NAS resources gracefully.

Code
31058

Type
Warning

Text
Not all disks unmount complete.

Probable Cause
A file system check is in progress or the device is not available during failover/failback.

Suggested Action
If the file system check is in progress, you can try to stop it by killing the file system repair process. Check physical device status. If the file system check is in progress, you can try to stop it by killing the file system repair or checking processes. Check physical device status. See 31032.
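If you decide to stop an in-progress file system check as suggested above, this shell sketch shows one way to locate and stop it (stopping a check mid-run should be a last resort; <pid> is the process ID reported by ps):

  # Find any running file system check processes
  ps -ef | grep fsck | grep -v grep
  # Stop a specific check by its process ID
  kill <pid>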

31060

Warning

Not all disks mount complete.

A file system check is in progress or the device is not available during failover/failback.

31061

Error

Nas process ipstorsmbd fail.

One of the following processes is not running properly: ipstorclntd, kvbdi, ipstornasmgtd, smbd, nmbd, winbindd, portmap, rpc.mountd, mountd, nfsd. See 31032. See 31017. A wrong file system type is set for the NAS resource in ipstor.conf. Failed to get the NAS file system block size from ipstor.conf. Failed to read the file $ISHOME/etc/$HOSTNAME/ipstor.dat.cache. Failed to get the vdev by vid from the file $ISHOME/etc/$HOSTNAME/ipstor.dat.cache. When formatting NAS resources, the super block could not be removed because it failed to open the VBDI device or write to the device. Failed to get the status of the file $ISHOME/bin/sfsck.

31062 31064

Error Error

Failed to read from file %1 Error file system type %1

See 31017. Check the file system type in ipstor.conf. Check the file system block size in ipstor.conf. See 31017.

31066 31067

Error Error

Invalid XML file cannot parse dynamic configuration %1 dynamic configuration does not match %1

31068

Error

Check the mapping of vdev name and corresponding vid in ipstor.dat.cache. Check whether the device / dev/vbdixx exists.

31069

Error

Do not destroy file system's superblock of %1

31071

Error

Missing file %1

Run the command "stat $ISHOME/bin/sfsck" to see if any error displays.



31072 (Error)
Text: Failed to update CIFS native configuration
Probable Cause: When updating the CIFS native configuration, one of the following errors happened:
- Failed to create the temporary file $ISHOME/etc/$HOSTNAME/.smb.conf.XXXXXX
- Failed to get the CIFS client from $ISHOME/etc/$HOSTNAME/nas.conf
- Failed to rename the file $ISHOME/etc/$HOSTNAME/.smb.conf.XXXXXX to $ISHOME/etc/$HOSTNAME/smb.conf
Suggested Action: Try to create a test file in the indicated path. If the file cannot be created, written, or read, reboot the server to trigger a file system check.
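A minimal sketch of the test-file check described above, assuming the NAS resource is mounted at an example path (substitute the mount point of the affected resource):

    cd /mnt/nas_example              # example mount point only
    echo test > .ipstor_write_test   # verify a file can be created and written
    cat .ipstor_write_test           # verify the file can be read back
    rm .ipstor_write_test            # remove the test file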

31073 (Error)
Text: Failed to update NFS native configuration
Probable Cause: When updating the NFS native configuration, one of the following errors happened:
- Failed to open the file $ISHOME/etc/$HOSTNAME/nas.conf
- Failed to create the temporary file $ISHOME/etc/$HOSTNAME/.exports
Suggested Action: Try to create a test file in the indicated path. If the file cannot be created, written, or read, reboot the server to trigger a file system check.

31074 (Error)
Text: Failed to parse XML file %1
Probable Cause: Failed to open the file $ISHOME/etc/$HOSTNAME/nas.conf.
Suggested Action: Try to create a test file in the indicated path. If the file cannot be created, written, or read, reboot the server to trigger a file system check.

31075 (Error)
Text: Disk %1 unmount failed during failover
Probable Cause: Failover or failback has occurred, so NAS resources need to be unmounted. NAS resources cannot be detached or unmounted during failover/failback.
Suggested Action: Reboot the failed server.

31076 (Critical)
Text: Due to storage failure, NAS Secondary Server has to reboot to resume its tasks.
Probable Cause: The storage failure prevented the file system from flushing the cache. Rebooting the failed server will clean the cache.
Suggested Action: Reboot the failed server.



31078 (Error)
Text: Add NAS resource to iocore failed during failover processing %1
Probable Cause: When adding a NAS resource to iocore, one of the following errors happened:
- Failed to open the file $IS_CONF
- NAS option is not enabled
- Failed to open /dev/isdev/kisconf
Suggested Action: See 31017. Run the command "stat /dev/isdev/kisconf" to check the file.

31079 (Error)
Text: Missing file system commands %1 of %2
Probable Cause: When getting the file system command from $ISHOME/etc/$HOSTNAME/nas.conf, the InfoItem value is not right.
Suggested Action: Check whether all the InfoItem names are correct in nas.conf, for example: InfoItem name="mount". You can compare it with the nas.conf file on a healthy NAS server.

50000 (Error)
Text: iSCSI: Missing targetName in login normal session from initiator %1
Probable Cause: The iSCSI initiator may not be compatible.
Suggested Action: Check the iSCSI initiator on the client side.

50002 (Error)
Text: iSCSI: Login request to nonexistent target %1 from initiator %2
Probable Cause: The iSCSI target does not exist any longer.
Suggested Action: Check the iSCSI initiator on the client side and the iSCSI configuration on the server. Remove targets from the configuration if they do not exist.

50003 (Error)
Text: iSCSI: iSCSI CHAP authentication method rejected. Login request to target %1 from initiator %2
Probable Cause: The CHAP settings are not valid.
Suggested Action: Check the iSCSI CHAP secret settings on the server and the client sides.

51001 (Warning)
Text: RAID: %1
Probable Cause: The physical RAID controller might have some failures.
Suggested Action: Check the RAID controller configuration.

51002 (Error)
Text: RAID: %1
Probable Cause: The physical RAID controller has some failures.
Suggested Action: Check the RAID controller configuration.

51003 (Critical)
Text: RAID: %1
Probable Cause: The physical RAID controller has some failures.
Suggested Action: Check the RAID controller configuration.

51004 (Warning)
Text: Enclosure: %1
Probable Cause: The physical enclosure might have some failures.
Suggested Action: Check the enclosure configuration.


UNIX SAN Client error codes


UNIX Error: Failed to add device, %s.
Probable Cause: The storage server is not running.
Suggested Action: Check the storage server status.

UNIX Error: Failed to connect to Server %s:%d, errno=%d.
Probable Cause: The storage server is not running.
Suggested Action: Check the storage server status.

UNIX Error: Failed to attach to device %ld on server %ld, %s.
Probable Cause: The storage server is not running.
Suggested Action: Check the storage server status.

UNIX Error: %s Client is not running!
Probable Cause: The storage server is not running.
Suggested Action: Check the storage server status.

UNIX Error: Failed to connect to bridge, %s.
Probable Cause: The storage server is not running.
Suggested Action: Check the storage server status.

UNIX Error: Client %s is not authenticated.
Probable Cause: Changed storage server configuration.
Suggested Action: Run "ipstorclient monitor" and set up client again.

UNIX Error: Failed to authenticate user %s.
Probable Cause: Changed storage server configuration.
Suggested Action: Run "ipstorclient monitor" and set up client again.

UNIX Error: FC_client_HS Failed.
Probable Cause: Changed storage server configuration.
Suggested Action: Run "ipstorclient monitor" and set up client again.

UNIX Error: FC_server_HS Failed.
Probable Cause: Changed storage server configuration.
Suggested Action: Run "ipstorclient monitor" and set up client again.

UNIX Error: Failed to unmount %s, %s.
Probable Cause: The file system device may be busy.
Suggested Action: Clean file system access and run a "umount" command.

UNIX Error: FC_complete_HS Failed
Probable Cause: Time has expired for IPStor client to authenticate with server.
Suggested Action: On the storage server side, run "ipstor restart" to restart the server again.

UNIX Error: FC_client_HeartBeat Failed
Probable Cause: Heartbeat interval has expired.
Suggested Action: On the storage server side, run "ipstor restart" to restart the server again.

UNIX Error: BridgeStart: Failed to open %s, %s.
Probable Cause: SAN SCSI driver isn't loaded.
Suggested Action: Check if the Intel Pro 100 NIC has been installed on the client machine.

UNIX Information: Failed to wait for status, %s.
Probable Cause: Child process has exited.
Suggested Action: None.

UNIX Error: There is no device assigned to this client! Exiting.
Probable Cause: Client does not have any device assigned.
Suggested Action: Use the SAN console and assign devices to the client, then run "ipstorclient restart".

HP-UX Information: pclose status %d.
Probable Cause: Status of closing file descriptor.
Suggested Action: None.


HP-UX Error: No SAN SCSI drivers.
Probable Cause: SAN SCSI driver is not loaded.
Suggested Action: Check if the network driver has been correctly loaded on the client machine.

HP-UX Error: Failed to open /dev/sanscsi, %s.
Probable Cause: SAN SCSI driver is not loaded.
Suggested Action: Check if the network driver has been correctly loaded on the client machine.

HP-UX Error: BridgeStart: Failed to open %s, %s.
Probable Cause: SAN SCSI driver is not loaded.
Suggested Action: Check if the network driver has been correctly loaded on the client machine.

HP-UX Error: Maximum number of virtual adapters (%d) is exceeded!
Probable Cause: HP client needs dummy NIC cards.
Suggested Action: Install another Intel Pro 100 NIC on the HP client machine.

AIX Error: Bad Magic Number!
Probable Cause: The shared secret file is corrupted.
Suggested Action: Run "ipstorclient monitor" and set up the client again.

AIX Error: Failed to read sc, %s.
Probable Cause: The shared secret file is corrupted.
Suggested Action: Run "ipstorclient monitor" and set up the client again.

AIX Error: Failed to position file pointer, %s.
Probable Cause: The shared secret file is corrupted.
Suggested Action: Run "ipstorclient monitor" and set up the client again.

AIX Error: Failed to rewind file pointer, %s.
Probable Cause: The shared secret file is corrupted.
Suggested Action: Run "ipstorclient monitor" and set up the client again.

AIX Error: Failed to position file pointer, %s.
Probable Cause: The shared secret file is corrupted.
Suggested Action: Run "ipstorclient monitor" and set up the client again.

AIX Error: BridgeStart: Failed to open %s, %s.
Probable Cause: SAN SCSI driver is not loaded.
Suggested Action: Run "ipstorclient monitor" again.

Linux Error: Failed to open /proc/scsi/ipstor/%d
Probable Cause: The IPStor client module isn't loaded.
Suggested Action: Run "ipstorclient restart" to load the client module.
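For the "Failed to unmount" entry above, the processes holding the mount point can usually be identified with standard tools before retrying the unmount (the mount point shown is only an example):

    fuser -vm /mnt/ipstor_share      # example mount point; list processes using it
    lsof +D /mnt/ipstor_share        # alternative view of open files under the mount
    umount /mnt/ipstor_share         # retry the unmount once those processes are closed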


Command Line Interface (CLI) error codes


The following table contains command line error codes.

CDP-NSS Command Line Interface Error Messages (Error code / Text)
0x90020001 0x90020002 0x90020003 0x90020004 0x90020005 0x90020006 0x90020007 0x90020008 0x90020009 0x9002000a 0x9002000b 0x9002000c 0x9002000d 0x9002000e 0x9002000f 0x90020010 Invalid arguments. Invalid Virtual Device ID. Invalid Client access mode. Connecting to $ISPRODUCTSHORT$ server failed. You are connected to $ISPRODUCTSHORT$ server with read-only privileges. Connecting to SAN client failed. Getting SAN client state failed. The requested Virtual Device is already attached. Attaching to Virtual Device failed. Disconnecting from SAN client failed. Detaching from Virtual Device failed. Invalid size. Invalid X Ray options. Logging in to $ISPRODUCTSHORT$ server failed. User has already logged out from $ISPRODUCTSHORT$ server. Invalid client.


Note: Make sure you use the SAN client names that are created on the server. These names may be different from the actual hostname or the ones in /etc/hosts.
0x90020011 0x90020012 0x90020013 0x90020014 0x90020015 0x90020016 0x90020017 0x90020018 0x90020019 0x9002001a Replication policy is not specified. Memory allocation error. Failed to get configuration file from server. Failed to get dynamic configuration from server. Failed to parse configuration file. Failed to parse dynamic configuration file. Failed to connect to the target server. You are connected to the target Server with readonly privilege. Failed to get the configuration file from the target server. Failed to get the dynamic configuration file from target server.



0x9002001b 0x9002001c 0x9002001d 0x9002001e 0x9002001f 0x90020020 0x90020021 0x90020022 0x90020023 0x90020024 0x90020025 0x90020026 0x90020027 0x90020028 0x90020029 0x9002002a 0x9002002b 0x9002002c 0x9002002d 0x9002002e 0x9002002f 0x90020030 0x90020031 0x90021000 0x90021001 0x90022000 0x90022001 0x90022002 0x90022003

Failed to parse the configuration file from target server. Failed to parse the dynamic configuration file from the target server. Invalid source virtual device. Invalid target virtual device. Invalid source resource type. Invalid target resource type. The virtual device is a replica disk. The virtual device is a replication primary disk. Failed to delete virtual device from client. Failed to delete virtual device. Failed to delete remote client. Failed to save the file. Remote client does not exist. You have to run login command with valid user id and password or provide server user id and password through the command. You have to run login command with valid user id and password or provide target server user id and password through this command. Virtual Device ID %1 is not assigned to the client %2. The size of the source disk and target disk does not match. The virtual device is not assigned to the client. Replication is already suspended. Replication is not suspended. Rescanning Devices failed. The requested Virtual Device is already detached. $ISPRODUCTSHORT$ server is not added to the client. ?CLI_RPC_FAILED. ?CLI_RPC_COMMAND_FAILED. Failed to start a transaction for this command. Failed to start a transaction on the primary server for this command. Failed to start a transaction on the target server for this command. $ISPRODUCTSHORT$ server specified is an invalid IP address.



0x90022004

Failed to resolve $ISPRODUCTSHORT$ server to a valid IP address.

Note: For the CLI to work with a server name instead of an IP address, the server name has to be resolvable on both the client side and the server side. This error can happen, for example, when the server hostname is not in DNS or in the /etc/hosts file.
0x90022005 Failed to create a connection.
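Before retrying the CLI command, a quick way to confirm name resolution and basic reachability from the client (the server name shown is only an example; substitute your own):

    getent hosts nss-server01        # confirm the name resolves on this host
    ping -c 3 nss-server01           # confirm the server answers on the network

If resolution fails, add the server name to DNS or to /etc/hosts on both the client and the server.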

Note: Check network interface is not down on the server to make sure RPC calls go through. 0x90022006 0x90022007 0x90022008 0x90022009 0x9002200a 0x9002200b 0x9002200c 0x9002200d 0x9002200e 0x9002200f 0x90022010 0x90022011 0x90022012 0x90022013 0x90022014 0x90022015 0x90022016 0x90022017 0x90022018 Failed to secure the connection. User authentication failed. Failed to login to $ISPRODUCTSHORT$ server. Failed to get the device statistics from client. Device is not ready. 2 Device is not detached. Failed to get device status from client. The source virtual device is already a snapcopy source. The source virtual device is already a snapcopy target. The target virtual device is already a snapcopy source. The target virtual device is already a snapcopy target. The source virtual device is a replica disk. The target virtual device is a replica disk. Invalid category for source virtual device. Invalid category for target virtual device. The category of source virtual device is different from category of the target virtual device. The size of the primary disk does not match the size of the replica disk. The minimum size for the expansion is %1 MB in order to synchronize them. Getting $ISPRODUCTSHORT$ server information failed. It's possible that the server version is prior to version 1.02 The Command Line Interface and the $ISPRODUCTSHORT$ Server are running different software versions:\n\t<CLI version %1 (build %2) and $ISPRODUCTSHORT$ Server version %3 (build %4)>\nPlease update these components to the same version in order to use the Command Line Interface. Invalid client list information. Invalid resource list information. Getting report data timeout.

0x90022019 0x9002201a 0x9002201b



0x9002201c 0x9002201d 0x9002201e 0x9002201f 0x90022020 0x90022021 0x90022022 0x90022023 0x90022024 0x90022025 0x90022026 0x90022027 0x90022028 0x90022029 0x9002202a 0x9002202b 0x9002202c 0x9002202d 0x9002202e 0x9002202f 0x90022030 0x90022031 There is no report data. Failed to open the output file: %1. Invalid Report Data. Output file: %1 already exists. The target server name cannot be resolved on the primary server. Please make sure your DNS is set up properly or use static IP address for the target server. Failed to promote mirror due to virtual device creation error. The mirror is not recovered. Invalid physical segment information. Failed to open file: %1. Physical segment section not defined. Some physical segment information are overlapped. Invalid segment size. Invalid segment section. Invalid TimeMark. The virtual device is in a snapshot group. You have to enable the TimeMark before joining the virtual device to the snapshot group. The virtual device is in a snapshot group. Please use force option to disable the TimeMark option for this virtual device that is in a snapshot group. The virtual device is in a snapshot group. All the virtual devices in the same snapshot group have to be unassigned as well. Please use force option to unassign the virtual device or -N (--no-group-client-assignment) option to unassign the virtual device only. Failed to write to the output file: %1. Please check to see if you have enough space. The client is currently connected and the virtual device is attached. We recommend you to disconnect the client first before unassigning the virtual device. You must use the <force> option to unassign the virtual device from the client when the client is connected. Failed to connect to the replication primary server. Please use <force> option to promote the replica. TimeMark cannot be disabled when the virtual device is in a snapshot group. Please remove the virtual device from the snapshot group first. The virtual device is in a snapshot group, the individual TimeMark policy for the virtual device cannot be updated. Please specify the group id or group name to update the TimeMark policy for the snapshot group. Please specify at least one Snapshot property to be updated. Replica disk does not exist. Therefore, there is no new resource promoted from the replica disk, but the replication configuration is removed from the primary disk.

0x90022032 0x90022033

0x90022034 0x90022035 0x90022036

0x90022037 0x90022038



0x90022039 0x9002203a

TimeView virtual device exists for this TimeMark. After rollback, some of the timemarks will no longer exist and there are TimeView resources created for those timemarks. Please delete the TimeView resources first if you want to rollback the timemark. There are TimeView virtual devices associated with this virtual device. Please delete the TimeView virtual devices first. Replica disk does not exist. Only the replication configuration is removed from the primary disk. Invalid adapter number: %1. Total number of Snapshot Group reaches the maximum groups: %1. Total number of Snapshot Group on the target server reaches the maximum groups: %1. The resource is in a Snapshot Group. Please set the backup properties through the Snapshot Group. Replication is not configured for this resource. The resource is in a Snapshot Group. Please set the replication properties through the Snapshot Group. Please specify at least one replication option to be updated. Failed to get Server Time information. Invalid resource type for deleting TimeMark. Invalid resource type for rolling back TimeMark. The virtual device is in a Snapshot Group enabled with TimeMark. Please perform the group TimeMark operation. The Snapshot Group is not enabled for TimeMark. Please perform the TimeMark operation through the virtual device. TimeView virtual device already exists for this TimeMark. There is no Snapshot Image created for this Snapshot Group. The virtual device is in a replication-enabled snapshot group. The snapshot group is not enabled with replication. If the virtual device in the snapshot group is enabled with replication, please perform the replication operations through the virtual device. Failed to create connection for failover partner server. Failed to start transaction for failover partner server. You are connected to $ISPRODUCTSHORT$ failover server partner with readonly privilege.

0x9002203b 0x9002203c 0x9002203d 0x9002203e 0x9002203f 0x90022040 0x90022042 0x90022043 0x90022044 0x90022045 0x90022046 0x90022047 0x9002204 0x90022048 0x9002204a 0x9002204b 0x9002204c 0x9002204d

0x9002204e 0x9002204f 0x90022050



0x90022051 0x90022052 0x90022053 0x90022054 0x90022055 0x90022056 0x90022057 0x90022058 0x90022059 0x9002205a 0x9002205b 0x9002205c 0x9002205d 0x9002205e 0x9002205f 0x90022060 0x90022061 0x90022062 0x90022063 0x90022064 0x90022065 0x90022066 0x90022067 0x90022068 0x90022069 0x9002206a

Failed to parse the configuration from failover partner server. Replication feature is not supported on this server: %1. Backup feature is not supported on this server: %1. TimeMark feature is not supported on this server: %1. Snapshot Copy feature is not supported on this server: %1. Mirroring feature is not supported on this server: %1. Copy Manager feature is not supported on this server: %1. Fibre Channel feature is not supported on this server: %1. The specified TimeMark is the latest TimeMark on the replica disk, which cannot be deleted. Unable to get NAS write access. Failed to parse NAS configuration. The primary disk is not available. The replication configuration on the primary disk will not be removed. There are SAN client connected to the resource. You have to disconnect the client(s) first before deleting the resource. There are active SMB connections associated with this NAS resource. Please disconnect them first or use force option. Snapshot Group feature is not supported on this server: %1. NAS feature is not supported on this server: %1. Timeout while disabling cache resource. ?CLI_ERROR_DIR_EXIST ?CLI_PARSE_NAS_USER_CONF_FAILED Invalid NAS User. The IP address of the replication target server for this configuration has to be in the range of %1. Local Replication feature is not supported on this server: %1. The specified replica disk is the same as the primary disk The batch mode processing is not completed for all the requested virtual devices. The server \"%1\"is not configured for failover. Unable to get server name.\nPlease check that the environment variable ISSERVERNAME has been set properly.



0x9002206b 0x9002206c 0x9002206d 0x9002206e 0x9002206f 0x90022070 0x90022071 0x90022072 0x90022073 0x90022074 0x90022076 0x90022077 0x90022078 0x90022079 0x90022083 0x90022084 0x90022085 0x90022086 0x9002208a 0x9002208e 0x9002208f 0x90022090 0x90022091 0x90022092 0x90022093 0x90022094 0x90022095 0x90022096

Unable to get user name.\nPlease check that the environment variable ISUSERNAME has been set properly. Unable to get password.\nPlease check that the environment variable ISPASSWORD has been set properly. Invalid login information format. File %1 does not exist. Unable to open configuration file: %1. Error reading configuration file: %1. There are virtual devices assigned to this client. NAS resource is not ready. Invalid Windows User name. Invalid NAS authentication mode. Failed to get server name. The server is not a failover secondary server. Failover is already enabled on this server. Failover is already suspended on this server. ?CLI_ERROR_BMR_COMPATIBILITY. ?CLI_ERROR_ISCSI_COMPATIBILITY. This command \"%1\"is not supported for this server version: %2. Cache group is not supported for this server version: %1. Snapshot Notification Option is not supported for this server version: %1. Compression option is not supported for this server version: %1. Encryption option is not supported for this server version: %1. Timeout policy is not supported for this server version: %1. Cache parameter is not supported for this version: %1. Cache parameter <skip-duplidate-write> is not supported for this version: %1. Reserving service-enabled Disk Inquiry String feature is not supported for this version: %1. This is not a valid server configuration to set the server communication information. The resource is a NAS resource and it is attached. Please unmount and detach the NAS resource first before performing TimeMark rollback. Invalid iSCSI Target starting lun.



0x90022097 0x90022098 0x90022099 0x9002209a 0x900220a7 0x900220a8 0x900220a9 0x900220aa 0x900220b0 0x900220b1 0x900220b2 0x900220b3 0x90023000 0x90023001 0x90023002 0x90023003 0x90023004 0x90023005 0x90023006 0x90023007 0x90023008 0x90023009 0x90023010 0x90023011 0x90023012 0x90023013 0x90023100 0x90023101 0x90023103 Invalid IPStor user: iSCSI Initiator %1 is already assigned to other client. There are no users assigned to this iSCSI client. Invalid client type for updating the device properties. The client has to support at least one client protocol. Generic client protocol is not supported on this server. Invalid type for client / resource assignment. Invalid Client Type. ?CLI_NAS_SMB_HOME. ?CLI_SNMP_MAX_TRAPSINK. ?CLI_SNMP_NO_TRAPSINK. ?CLI_SNMP_OUT_INDEX Invalid BootIP client. This client is not enabled BootIP. This client is already enabled BootIP. Did not input any BootIP properties. Ip address is needed. Hardware address is needed. Invalid <use-static-ip> value. Invalid <default-boot> value. Duplicated MAC address. Duplicated IP address. This device is a BootIP resource. Disable BootIP before you delete/unassign it. -S 1 and <ip-address> should be specified at the same time. BootIP feature is not supported on this server: %1. DHCP is not enabled in this server, cannot use static ip. Fail to connect to the server. Please make sure the server is running and the version of the server is 4.01 or later. Use existing TimeMark for replication option is not supported on this server: %1. Invalid share information for batch mode NAS share creation.




0x90023104 0x90023105 0x90023102 0x90023106 0x90023107 0x90023108 0x90023109 0x9002310a 0x9002310b 0x9002310c 0x9002310d 0x9002310e 0x9002310f 0x90023110 0x90023111 0x90023112 0x90023113 0x90023114 0x90023115

Replica size exceeds the licensed Worm Size limit on the target server: %1 GB NAS resource will exceed the licensed Worm Size limit: %1 GB. ?CLI_TARGET_SERVER_NOT_WORM_KERNEL. Compliance time can only be set when compliance clock option is set. Worm is not supported by the kernel of this server. Stop Write option is no longer supported in this version of server: %1. Invalid replica disk. The compliance clock between failover servers is more than 5 minutes apart. Please use force option to continue. The compliance clock between replication servers is more than 5 minutes apart. Please use force option to continue. Local replication is not supported for WORM resource. You do not have the license for WORM resource. Invalid iSCSI user password length (12 - 16). The login user is not authorized for this operation. The specified user name already exists. Invalid user name. Continuous Replication is not supported. Replication is still in progress. Please wait for replication to complete before disabling the TimeMark option. This resource is in a snapshot group. Snapshot notification will be determined at the group level and cannot be updated at the resource level. This server is configured as Symmetric failover server. In Symmetric failover setup, the same target WWPN will be used on the secondary server during failover instead of the standby WWPN as in Asymmetric failover setup. It's not necessary and not allowed to configure Fibre channel client protocol for the same client on the failover partner server. This client is already enabled with Fibre Channel protocol on the failover partner server. The operation cannot proceed. Replication protocol is not supported for this version of server: %1. TCP protocol is not supported for continuous mode replication on this target server. It is supported on a target server of version 5.1 or later. It is required to assign all the Fibre Channel devices to \"all-to-all\" for Symmetric failover setup. Invalid CDP journal timestamp.

0x90023116 0x90023117 0x90023118 0x90023119



0x9002311a 0x9002311b 0x9002311c 0x9002311d 0x9002311e 0x9002311f 0x90023120 0x90023121 0x90023122 0x90023123 0x90023124 0x90023125

CDP option is not supported for this server version: %1. CDP journal is not available. TimeMark priority is not supported on this server: %1. TimeMark information update is not supported on this server: %1. TimeMark information cannot be update for replica group. TimeMark comment cannot be updated for TimeMark group. TimeMark priority cannot be updated for TimeMark group member. CDP journal was suspended at the specified timestamp. This virtual device is still valid for cross mirror setup. Manual swapping is not allowed. Notification Frequency option is not supported on this version of server: %1 This operation is not supported for cross mirror configuration. This virtual device is in a TimeMark group. Rollback is currently in progress for one of the group member %1. Please wait until the rollback is completed before starting rollback for this virtual device. Invalid CDP journal tag. Clients have to be unassigned before rollback is performed. The specified data point is not valid for post rollback TimeView creation. The specified data point is not valid for recurring rollback. This virtual device is still valid for cross mirror setup. Manual swapping is not allowed. Group replication schedule has to be suspended first before joining the resources to the group. Replication schedule of the specified resource has to be suspended first before joining the replication group. Replication schedule for all the group members has to be suspended first before joining the resources to the group or enabling the group replication. MicroScan option for individual resource is not support on this server: %1. Source virtual device isn't on a Falconstor SED. Resource has STP enabled. ?CLI_ERROR_MULTI_STAGE_REPL_NOT_SUPPORTED. Suspend / Resume Mirror option is not supported on this server: %1. Mirror of this resource is already suspended. Mirror of this resource is already suspended.

0x90023126 0x90023127 0x90023128 0x90023129 0x90023130 0x90023131 0x90023132 0x90023133 0x90023134 0x90023135 0x90023136 0x90023137 0x90023138 0x90023139 0x9002313a



0x9002313b 0x9002313c 0x90024001 0x90024002 0x90024003 0x90024004 0x90024005 0x90024006 0x90024007 0x90024008 0x90024101 0x90024102 0x90024103 0x90024104 0x90024105 0x90024106 0x90024107 0x90024108 0x90024109 0x9002410a 0x9002410b 0x9002410c 0x9002410d 0x9002410e 0x90024201 0x90024202 0x90024203

This resource is in the replication disaster recovery state, the operation is not allowed. This group is in the replication disaster recovery state, the operation is not allowed. The target virtual device is in a SafeCache group or a CDP group. Please remove the resource from the group first if you need to copy the data to the resource. Mirror Policy feature is not supported on this server: %1. ?CLI_ERROR_REPL_TRANSMITTED_INFO_NOT_SUPPORTED. The virtual device info serial number is not supported for this version of server: %1. Fast replication synchronization is not supported for this version of server: %1. Mirror Swap option is already disabled Mirror Swap option is already enabled. Disable Mirror Swap option is not supported for this server version: %1. A server is in a cross-mirror setup. Virtual device is a Near-line Disk. Virtual device is a Primary Disk enabled with Near-line Mirror. Rescan didn't find the new assigned virtual device The new assigned virtual device has been allocated The remote client hasn't iSCSI target. The virtual device is not a Primary Disk enabled with Near-line Mirror The virtual device is not a Near-line Disk. The servers are not Near-line Mirroring partners for the specified virtual device. There is an error in Near-line Mirroring configuration. Mirror license is required to perform this operation. Please swap the Primary Disk with its mirror first All segments of the Primary Disk are in online state. Cannot join a Near-line Disk to a group contains Near-line Disk with different Near-line server The virtual device has been assigned to a client and the virtual device's userACL doesn't match the snapshot group's userACL. Near-line Mirror option is not support on this server: %1 The operation is not allowed when Near-line Recovery is initiated for the specified Primary Disk:.



0x90024204 0x90024205 0x90024206 0x90024207 0x90024208 0x90024209 0x90024301 0x90024302 0x90024303 0x90024304 0x90024305 0x90024306 0x90024307 0x90024308 0x90024351 0x90024401 0x90024402 0x90024403 0x90024404 0x90024501 0x90024502 0x90024601 0x90024602 0x90024603 0x90024604 0x90024605 0x90024606

The operation is not allowed when Near-line Recovery is initiated for the specified Nearline Disk:. TimeMark rollback is not supported for Near-line Disk:. The specified resource is a Near-line Disk. Please remove the Near-line Mirroring configuration first. The specified resource is enabled with Near-line Mirror. Please remove the Near-line Mirroring configuration first. The specified iSCSI target is assigned to a Near-line server. The operation is not allowed when Near-line Replica Recovery is initiated for the specified Nearline Replica Disk: Cannot disable InfiniBand, since there are targets/devices assigned to InfiniBand client. Infini-band is not supported in this build. iSCSI isn't enabled. No infini-band license. Command is not allowed because FailOver is enabled. Failed to convert IP address to integer. The given IP address isn't binded to an infini-band NIC. InfiniBand isn't enabled. Each zone's size cannot be bigger than the HotZone resource's size. Problem vdev command is not supported by this version of server. Virtual device signature is not supported by this version of server. Invalid physical device name. There is no Fibre Channel devices to perform this operation. The virtual device specified by <timeview-vid> isn't a timeview. The timeview doesn't belong to the given virtual device or snapshot group. The cli command is not allowed because of the server is in failover state. There is virtual device allocated on the physical device. The physical device is in a storage pool. The server isn't the owner of the physical device. Physical device is online. Invalid initiator WWPN.



0x90024607 0x90024608 0x90024609 0x9002460a 0x9002460b 0x9002460c 0x9002460d 0x9002460e 0x9002460f 0x90024610 0x90024611 0x90024612 0x90024613 0x90024614 0x90024615 0x90024616 0x90024617

The specified new initiator WWPN is invalid or already exists. Replacing Fibre Channel client WWPN operation is not supported on this server. The specified target disk is a thin disk. Thin provisioning feature is not supported on this server: %1. Minimum outstanding IOs for mirror policy is not supported on this server: %1 Mirror throughput control policy is not supported on this server: %1. Replication throughput control policy is not supported on this server: %1. iSCSI Mutual Chap Secret option is not supported on this server: %1. Host Apps Info is not supported on this server: %1 The version of the primary server has to be the same or later than the version of the Nearline server for Near-line mirroring setup. The version of the primary server has to be the same or later than the version of the replica server for replication setup. Saving persisted timeview data information is not supported in this version of server: %1 Replication using specific TimeMark is not supported in this version of server: %1. iSCSI mobile user update is not supported in this version of server. Service Enable Device license is required to perform this operation. Primary server is configured for symmetric failover. Near-line Recovery already triggered for the Primary failover partner server. Please resume the configuration first. Primary server is configured for symmetric failover. Near-line server client exists on the primary failover partner server. Please remove the client from the primary failover partner server first. Near-line disk is enabled with mirror. Please remove the mirror from Near-line disk before performing Near-line recovery. Near-line Resource is not the mirror of the Primary Disk. Please swap the mirror first before performing the Near-line Recovery. Invalid Near-line client for iSCSI protocol. Failed to discover device on Near-line server. Failed to discover device on Near-line server failover partner. There is not enough space available for virtual header allocation. Near-line server client does not exist on the Primary server. Near-line server failover partner client does not exist on the Primary server. Near-line server client properties is not configured for assignment.

0x90024618 0x90024619 0x9002461a 0x9002461b 0x9002461c 0x9002461d 0x9002461e 0x9002461f 0x90024620



0x90024621 0x90024622 0x90024623 0x90024624 0x90024625 0x90024626 0x90024627 0x90024628 0x90024629 0x9002462a 0x9002462b 0x9002462c 0x9002462d 0x9002462e 0x9002462f 0x90024630 0x90024631 0x90024632 0x90024633 0x90024634 0x90024635 0x90024636 0x90024637 0x90024638 0x90024639

Near-line server failover partner client properties is not configured for assignment. Near-line recovery is not supported on the specified server. Timeout waiting for sync status for thin disk packing. Thin disk is out-of-sync for packing. Timeout waiting for swap status for thin disk packing. The data copying program is missing. Thin disk copy is not supported on the specified server. Global cache resource is not supported on this server: %1. Thin disk relocation is not supported on the specified server. Near-line Disk is already configured on Near-line server, but the configuration does not match with primary disk. Primary Disk is already configured on Primary server, but the configuration does not match with the specified Near-line Server. The specified Primary Disk is already configured for Near-line Mirroring on the specified Near-line server. The Primary server is configured for failover, but the failover partner server is not configured properly for Near-line Mirroring. The Primary server is not configured as client on the Near-line server. The Primary failover partner server is not configured as client on the Near-line server. Near-line Disk is not assigned to the Primary server client on Near-line server. Near-line Disk cannot be found on the specified Near-line server. service-enabled Device of Near-line Disk cannot be found on the Primary failover partner server. Failed to get the serial number for the Primary Disk. Suspend mirror from the Primary first before performing conversion. Failed to discover device on Near-line primary server. Invalid Primary resource type. Invalid Near-line resource type. CDP Journal is enabled and active for replica group. Please suspend CDP Journal and wait for the data to be flushed. safeCache is enabled and active for replica group. Please suspend safeCache and wait for the data to be flushed.



0x9002463a 0x9002463b 0x9002463c 0x9002463d 0x9002463e 0x9002463f 0x90024640 0x90024641 0x90024642 0x90024643 0x90024644 0x90024645 0x90024646 0x90024647 0x90024648 0x90024649 0x9002464a 0x9002464b 0x9002464c 0x9002464d 0x9002464e 0x9002464f 0x90024650 0x90024651

CDP Journal is enabled and active for primary group. Please suspend CDP Journal and wait for the data to be flushed. safeCache is enabled and active for primary group. Please suspend safeCache and wait for the data to be flushed. CDP Journal is enabled and active for replica disk. Please suspend CDP Journal and wait for the data to be flushed. safeCache is enabled and active for replica disk. Please suspend safeCache and wait for the data to be flushed. CDP Journal is enabled and active for primary disk. Please suspend CDP Journal and wait for the data to be flushed. safeCache is enabled and active for primary disk. Please suspend safeCache and wait for the data to be flushed. Primary disk is enabled with Near-line Mirroring, the operation is not allowed. Primary disk is a Near-line disk, the operation is not allowed. Primary disk is a NAS resource. Please umount and detach the resource first. HotZone is enabled and active for the primary disk. Please suspend the HotZone first. There is no member in the group for the operation. CDR is enabled for the primary disk. Please disable CDR first. CDR is enabled for the primary group. Please disable CDR first. The group configuration is invalid. The operation cannot proceed. Replication configuration between the primary and replica is inconsistent. The operation cannot proceed. Forceful Role Reversal can only be performed from replica server for disaster recovery when the primary server is not available. Forceful Role Reversal cannot be performed when the primary server is still available and operational. Forceful Role Reversal is not supported in this version of server: %1 The replica disk is not loaded for the operation. Updating umap timestamp for the new primary resource(s) failed. The operation can only be performed after forceful role reversal. HotZone is enabled on new replica, repair cannot proceed. CDR is enabled for the original primary disk. Please disable CDR first. CDR is enabled for the original primary group. Please disable CDR first.



0x90024652 0x90024653 0x90024654 0x90024655 0x90024655 0x90024656 0x90024657 0x90024658 0x90024659 0x9002465a 0x9002465b 0x9002465c 0x9002465f 0x90024665 0x90024666 0x90024667 0x90024668 0x90024669 0x9002466a 0x9002466b 0x9002466c 0x9002466e 0x9002466f 0x90024670 0x90024671 0x90024672

?CLI_MICRSCAN_COMPRESSION_CONFLICT_ON_TARGET. Snapshot resource cannot be reinitialized when it is accessible. The option for discardable changes for the timeview is not enabled. Snapshot resource is offline. ?CLI_INVALID_NEARLINE_CONFIG. ?CLI_INVALID_NEARLINE_DISK. The option for discardable changes is not supported for this type of resource. Fail to enable cache for the timeview to keep discardable changes. Fail to enable cache for the timeview to keep discardable changes and timeview cannot be removed. There is still cached data not being flushed to the timeview. Please flush the changes first if you do not want to discard the changes before deleting the timeview. The option for discardable changes for the timeview is not enabled. This operation can only be performed on failover secondary server in failover state. The option for discardable TimeView changes is not supported for this version of server: %1. Primary server user id and password are required for the target server to establish the communication information. The resource is a Near-line Disk and the Primary Disk is a thin disk. Expansion is not supported for thin disk. The options for snapshot resource error handling are not supported. Failed to connect to the primary server. There are no iSCSI targets configured on the specified server. iSCSI initiator connection information is not available on this version of server: %1. Your password is expired. Please change your password first. TimeView replication option is not supported on this version of server: %1. TimeView replication option is not supported on this version of server: %1. The replica disk of the source resource is invalid for TimeView replication. TimeMark option is not enabled on the replica disk of the source resource of the TimeView. The TimeMark of the TimeView is not available on the replica disk of the source resource. Failed to get the TimeMarks of the replica disk to validate the TimeMark timestamp for TimeView replication.



0x90024673

TimeView replication can only be performed for source resource enabled with remote replication. Local replication is enabled for the source resource. TimeView replication cannot proceed. TimeView replication option is not enabled for the source resource. This operation can only be performed for a Near-line disk as a reversed replica. TimeView resource of the source TimeMark exists on the primary server. TimeView data replication cannot proceed. TimeView resource of the replica TimeMark exists on the target server. TimeView data replication cannot proceed. TimeView data exists on the replica TimeMark. Please specify -vf (--force-to-replicate) option to force the replication. Remote replication is not enabled for the resource for TimeView data replication. Inquiry page retrieval is not supported for this version of server: %1 TimeView rollback is not supported on this version of server: %1. TimeView copy is not supported on this version of server: %1. CDP Journal rollback and TimeView data rollback are mutually exclusive. There is no TimeView data associated with this TimeMark to perform TimeView copy. Virtual device MicroScan option is not supported in this version of server: %1. The specified target device is enabled with global SafeCache. Please disable global SafeCache first if you need to copy data to this resource. There is no unflushed cache marker. There is no unflushed cache marker and the cache is not full. Please create a cache marker to flush the data to it first. There is no TimeView data associated with this TimeMark to perform TimeView rollback. Sync priority setting is not supported for this version of server: %1 Fail to get the TimeMarks of the source disk to validate the TimeMark timestamp for TimeView replication. There is no TimeView data for the specified TimeMark on the source resource. The TimeMark of the TimeView is not available on the source resource. Failed to parse the configuration file from primary server. The primary disk of the source resource is invalid. Failed to get the TimeMarks of the primary disk to validate the TimeMark timestamp for TimeView replication status.

0x90024674 0x90024675 0x90024676 0x90024677 0x90024678 0x90024679 0x9002467a 0x9002467b 0x9002467c 0x9002467d 0x9002467e 0x9002467f 0x90024680 0x90024681 0x90024682 0x90024683 0x90024684 0x90024685 0X90024686 0x90024687 0x90024688 0x90024689 0x90024690



0x90024691 0x90024692 0x90024693 0x90024694 0x90024695 0x90024696 0x90024697 0x90024698 0x90024699 0x9002469a 0x9002469b 0x9002469c 0x9002469d 0x9002469e 0x90024700 0x9002469f 0x90024701 0x90024702 0x90024703 0x90024704 0x90024705 0x90024706 0x90024707 0x90024708 0x90024709 0x9002470a

Timeview data replication is in progress for the specified Timemark. TimeView data of the specified replica TimeMark is invalid. Physical device cannot be found on failover partner server. Physical device is already owned by the failover partner server. Sync CDR replica TimeMark setting is not supported for this version of server: %1. Preserve CDR primary TimeMark setting is not supported for this version of server: %1. Specified CDR related parameters without CDR enabled. It appears that the nearline disk is still available. Please login into the nearline server before removing the configuration. Keep TimeMarks setting is not supported for this version of server: %1 To keep Timemarks, the TimeView resources have to be unassigned before rollback is performed. CDP journal is active. To keep Timemarks, please suspend CDP journal and wait for the data to be flushed. SafeCache is active. To keep Timemarks, please suspend SafeCache and wait for the data to be flushed. Group CDP journal is active. To keep Timemarks, please suspend Group CDP journal and wait for the data to be flushed. Group SafeCache is active. To keep Timemarks, please suspend Group SafeCache and wait for the data to be flushed. Specified mirror monitoring related parameters without mirror monitoring option enabled. Specified throughput control related parameters without throughput control option enabled. Read the partition from inactive path option is not supported for this version of server: %1. Use report luns option and lun ranges option are mutually exclusive. Specified discover new devices options while in scan existing devices mode. BTOS feature is not supported for this version of server: %1. Select TimeMark with timeview data is not supported for this version of server: %1. Fibre channel client rescan is not supported for this version of server: %1. Configuration Repository can not be disabled when failover is enabled. Configuration Repository is not enabled. Configuration Repository has already been enabled. Configuration Repository can not be enabled when failover is enabled.



0x9002470b 0x9002470c 0x9002470d 0x90024719 0x9002471a 0x9002471b 0x9002471c 0x9002471d 0x9002471e 0x9002471f 0x90024720 0x90024721 0x90024722 0x90024723 0x90024724 0x09021000

Only administrators have the privilege for the operation. TimeView replication is in progress. Please wait until the timeview replication is completed. Please specify -F to allow forceful role reversal. Backup is enabled. Recovery cannot proceed. Replication job queue is not supported in this version of server: %1. Replication schedule is not allowed for this resource. Replication schedule is not allowed for this group. Virtual Device name cannot be renamed for this resource. I/O latency retrieval is not supported in this version of server: %1. The new client OS types is not supported on this server. Group rollback is only supported for SAN resources. Group rollback is not supported for group with Near-line disks. No Timemark available for selected CDP journal timestamp. Group rollback is not supported in this version of server: %1. The specified virtual device does not have CDP enabled for the journal related options. RPC call failed: RPC encoding arguments error. RPC decoding results error. RPC sending error. RPC receiving results error. RPC timeout error. (Note: Check the server is not disconnected from the network where RPC call timeout could happen after 30 sec.) RPC version mismatch. RPC authentication error. RPC program not available. RPC program version mismatch. Cannot parse XML configuration Cannot allocate memory Cannot find openssl library or the public key file. Cannot reach the registration server on the Internet. Cannot connect to the registration database. Cannot find the keycode in the registration database. License registration limit has been reached.

0x80020500 0x800B0100 0x8023040b 0x80230406 0x80230403 0x80230404 0x80230405



0x80230406

Cannot find host while attempting register keycode. Note: If the FalconStor license server can not be reached, make sure the server has Internet access. Failed to register keycode because system call timed out. Server is in failover state. Failed to read config file. ISHOME isn't defined.

0x80230407 0x80230408 0x80020600 0x8023040c

Contact FalconStor Technical Support for any error not listed above.


Port Usage
This appendix contains information about the ports used by CDP and NSS. The following ports are used for incoming requests. The communication direction of each port listed below is a one-way communication from a source to a destination. The reply to the request is sent back to a dynamic port number. Network firewalls should allow access through these ports for successful communications. To maintain a high level of security, it is recommended that you disable all unnecessary ports. Although you may temporarily open some ports during initial setup of the CDP/NSS appliance, such as the telnet port (23) and FTP ports (20 and 21), you should shut them down after your work is complete.
Note: Make sure there are no blocked ports and the loopback device access is open.

The ports are not used unless the associated option is enabled in CDP/NSS. For FalconStor appliances, the ports marked are enabled by default.
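As an illustration only (firewall tooling varies by distribution), iptables rules that permit two of the management ports listed in the table below might look like:

    iptables -A INPUT -p tcp --dport 11576 -j ACCEPT   # console/secure RPC management requests
    iptables -A INPUT -p tcp --dport 11582 -j ACCEPT   # CLI commands sent to the CDP/NSS Server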
Port 20 (TCP/UDP)
Open on: CDP/NSS Server
Used by: FTP Client
Description: Standard FTP port used for file data transfer

Port 21 (TCP/UDP)
Open on: CDP/NSS Server
Used by: FTP Client
Description: Standard FTP port used for sending commands

Port 22 (TCP)
Open on: CDP/NSS Server
Used by: Host Client
Description: Standard Secure Shell (SSH) port used for remote sessions

Port 23 (TCP/UDP)
Open on: CDP/NSS Server
Used by: Host Client
Description: Standard Telnet port used for remote sessions

Port 25 (TCP/UDP)
Open on: CDP/NSS Server
Used by: SAN Client
Description: Standard SMTP port used for Email Alerts

Port 67 (UDP)
Open on: CDP/NSS Server
Used by: SAN Client
Description: DHCP port used for iSCSI Boot (BootIP)

Port 68 (UDP)
Open on: CDP/NSS Server
Used by: SAN Client
Description: DHCP port used for iSCSI Boot (BootIP)

Port 69 (UDP)
Open on: CDP/NSS Server
Used by: SAN Client
Description: TFTP (Trivial File Transfer Protocol) port used for iSCSI Boot (BootIP)

Port 80 (TCP)
Open on: CDP/NSS Server, FalconStor Management Console, RecoverTrac, SAN Client
Used by: FalconStor web license server
Description: Standard Internet port used for online registration of license keycodes. Registration information is sent back using HTTP protocol, where a local random port number is used (not hard-coded), just like a typical web-based page. The firewall does not block the random port if the established bit is set to let established traffic in.

Port 81 (HTTP)
Open on: CDP/NSS Server
Used by: SAN Client
Description: Standard HTTP port used to access the FalconStor Management Console via Web Start


Port 111 (TCP/UDP)
Open on: CDP/NSS Server
Used by: NFS Client
Description: NFS port used for rpcbind RPC program number mapper. The NFS port is assigned via the SUNRPC protocol. The ports vary, so it is not feasible (or convenient) to keep checking them and reprogramming a firewall. Most firewalls have a setting to "Enable NFS" upon which they will change the settings if the ports change.

Port 123 (UDP)
Open on: CDP/NSS Server
Used by: NTP Server
Description: Standard Network Time Protocol (NTP) transport layer used to access external time servers

Port 137 (UDP)
Open on: CDP/NSS Server, RecoverTrac Server
Used by: CIFS Client, RecoverTrac Client (Protected/Recovered)
Description: ipstornmbd NETBIOS Name Service used for CIFS protocol. Standard file and print sharing port used to discover system/network settings and also used to shut down the RecoverTrac client. RecoverTrac Server ports are dynamic and based on Windows WNetConnect2() and remote WMI calls. If this port is closed, you need to manually enter the client system/network information, and power on/off clients by using IPMI, iLO, or SSH power control for physical machines and Hypervisor commands for virtual machines.

Port 138 (UDP)
Open on: CDP/NSS Server, RecoverTrac Server
Used by: CIFS Client, RecoverTrac Client (Protected/Recovered)
Description: ipstornmbd NETBIOS Datagram Service for CIFS protocol. Standard file and print sharing port used to discover system/network settings and also used to shut down the RecoverTrac client. RecoverTrac Server ports are dynamic and based on Windows WNetConnect2() and remote WMI calls. If this port is closed, you need to manually enter the client system/network information, and power on/off clients by using IPMI, iLO, or SSH power control for physical machines and Hypervisor commands for virtual machines.


Port 139 (TCP)
Open on: CDP/NSS Server, RecoverTrac Server
Used by: CIFS Client, RecoverTrac Client (Protected/Recovered)
Description: ipstornmbd NETBIOS Session Service for CIFS protocol. Standard file and print sharing port used to discover system/network settings and also used to shut down the RecoverTrac client. RecoverTrac Server ports are dynamic and based on Windows WNetConnect2() and remote WMI calls. If this port is closed, you need to manually enter the client system/network information, and power on/off clients by using IPMI, iLO, or SSH power control for physical machines and Hypervisor commands for virtual machines.

Port 161 (UDP)
Open on: CDP/NSS Server
Used by: SNMP Client
Description: Standard Simple Network Management Protocol (SNMP) port used to query CDP/NSS MIBs

Port 199 (UDP)
Open on: CDP/NSS Server
Used by: SNMP Client
Description: Standard SNMP multiplexing (SMUX) protocol port used to query Dell OpenManage system MIBs

Port 443 (HTTPS)
Open on: CDP/NSS Server
Used by: FalconStor Web Setup
Description: Standard secure HTTP port used to access FalconStor Web Setup

Port 445 (TCP)
Open on: RecoverTrac Server
Used by: RecoverTrac Client (Protected/Recovered)
Description: Standard file and print sharing port used to discover system/network settings and also used to shut down the RecoverTrac client. RecoverTrac Server ports are dynamic and based on Windows WNetConnect2() and remote WMI calls. If this port is closed, you need to manually enter the client system/network information, and power on/off clients by using IPMI, iLO, or SSH power control for physical machines and Hypervisor commands for virtual machines.

Port 623 (UDP)
Open on: CDP/NSS Server
Used by: Failover Server
Description: IPMI power control port used for Alert Standard Format (ASF) Remote Management and also used to power off the failed server in a failover configuration

Port 705 (UDP)
Open on: CDP/NSS Server
Used by: SNMP Client
Description: Standard SNMP AgentX port used to query agents, such as Fujitsu ServerView

Port 1311 (HTTPS)
Open on: CDP/NSS Server
Used by: Dell Open Manage Server
Description: HTTPS port used for hardware configuration of DELL servers

Port 3260 (TCP)
Open on: CDP/NSS Server
Used by: iSCSI Client or storage
Description: iSCSI port used for communication between iSCSI clients and the server; used for the iSCSI Boot (BootIP) option; used to virtualize iSCSI storage on the CDP/NSS Server

Port 4011 (UDP)
Open on: CDP/NSS Server
Used by: SAN Client
Description: PXE port for the iSCSI Boot (BootIP) option
CDP/NSS Administration Guide

610

Port Usage

Port
5001

Protocol
TCP

Open on
CDP/NSS Server

Used by
SAN Client CDP/NSS Replica Server FalconStor Web Setup FileSafe Client

Description
istcp port used to test network connection and measure bandwidth performance

8009 8443

TCP TCP

CDP/NSS Server

Standard Apache AJP port used for FalconStor Web Setup Apache Tomcat SSL communication port used for internal FileSafe commands Secure RPC communication port used for sending requests to the configuration management module on the server.

FileSafe Server on CDP/NSS Server CDP/NSS Server

11576

TCP

CLI from SAN Client FalconStor Management Console IMA RecoverTrac HyperTrac Snapshot Director for VMware CDP/NSS Replica Server

11577

TCP

CDP/NSS Replication Source Server CDP/NSS Replica Server

CDP/NSS
Replication Source Server CDP/NSS Server (Secondary)

Communication port used to send replication data This port is only open while replication is being performed. Otherwise, it is closed.

11580

TCP

CDP/NSS Server (Primary)

CDP/NSS Server
(Secondary)

CDP/NSS
Server (Primary)

Communication port used between a pair of failover servers It is not required for stand-alone CDP/NSS Server

11582

TCP

CDP/NSS Server

CLI from SAN Client RecoverTrac HyperTrac Snapshot Director for VMware FalconStor Management Console

Communication port used to send CLI commands to the CDP/NSS Server

11583

TCP

CDP/NSS Server

Communication port used to send report requests (i.e. report schedules, global replication report, statistics log configuration updates) to the configuration management module on the server.

CDP/NSS Administration Guide

611

Port Usage

Port
11588

Protocol
TCP

Open on
CCM Server on CDP/NSS Server

Used by
CCM Console FalconStor Management Console CDP/NSS Server

Description
FalconStor Central Client Management (CCM) plugin port used to send CCM internal commands to the server.

11762

TCP

RecoverTrac HyperTrac IMA DiskSafe Snapshot Director for VMware

ipstorclntd SecureRPC port used to send management requests (i.e. snapshot notification, configuration, information retrieval) to IMA module on SAN Clients. Snapshot Director for VMware opens this ESX firewall port during installation.

18651 Hypervisor ports (VMware vCenter or Hyper-V)

TCP Hypervisor port protocol

FileSafe Server on CDP/NSS Server Hypervisor

FileSafe Client RecoverTrac Server

Communication port used for FileSafe data copy Hypervisor ports used to manage recovery from/to VMs within Hypervisor Servers and Centers. Refer to the appropriate Hypervisor documentation to determine port requirements. RecoverTrac uses VMware vSphere APIs to manage VMware environments and remote WMI/ remote VDS to manage Hyper-V environments.

CDP/NSS Administration Guide

612

CDP/NSS User Guide
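If a firewall sits between clients and the CDP/NSS Server, you can confirm that the required TCP ports accept connections before troubleshooting further. The following is a minimal, illustrative Python sketch (not part of the product); the server name and the selection of ports are placeholder values taken from the list above, and UDP ports (such as 161 or 623) cannot be verified with a simple connect test like this.

#!/usr/bin/env python
# Illustrative sketch: verify that selected CDP/NSS TCP ports accept connections.
# The host name and port list below are example values; adjust them for your site.
import socket

SERVER = "cdpnss-server.example.com"   # hypothetical server name
TCP_PORTS = {
    3260:  "iSCSI target",
    5001:  "istcp bandwidth/connection test",
    11576: "secure RPC (management console / CLI)",
    11582: "CLI command port",
}

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port, label in sorted(TCP_PORTS.items()):
        state = "open" if check_port(SERVER, port) else "unreachable"
        print("%-5d %-40s %s" % (port, label, state))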

SMI-S Integration
Large Storage Systems and Storage Area Networks (SANs) are emerging as a prominent and independent layer of IT infrastructure in enterprise-class and midrange computing environments. Examples of applications and functions driving the emergence of new storage technology include:
Sharing of vast storage resources between multiple systems via networks
LAN-free backup
Remote, disaster-tolerant, online mirroring of mission-critical data
Clustering of fault-tolerant applications and related systems around a single copy of data
Archiving requirements for sensitive business information
Distributed database and file systems

The FalconStor SMI-S Provider for CDP and NSS storage offers CDP and NSS users the ability to centrally manage multi-vendor storage networks for more efficient utilization. FalconStor CDP and NSS solutions use the SMI-S standard to expose the storage systems they manage to SMI-S clients. The storage systems supported by FalconStor include Fibre Channel disk arrays and SCSI disk arrays. A typical SMI-S client can discover FalconStor devices through this interface. The interface uses CIM-XML, which is a WBEM protocol that exchanges Common Information Model (CIM) information as XML over HTTP. The SMI-S server is included in CDP and NSS versions 6.15 Release 2 and later.


SMI-S Terms and concepts


Storage Management Initiative Specification (SMI-S) - A storage standard developed and maintained by the Storage Networking Industry Association (SNIA). SMI-S enables broad interoperability among heterogeneous storage vendor systems, allowing different classes of hardware and software products supplied by multiple vendors to reliably and seamlessly interoperate for the purpose of monitoring and controlling resources. The FalconStor SMI-S interface overcomes the deficiencies associated with legacy management systems that deter customers from using more advanced storage management systems.

openPegasus - The FalconStor SMI-S Provider uses an existing open-source CIM Object Manager (CIMOM) called openPegasus for a portable and modular solution. It is an open-source implementation of the DMTF CIM and WBEM standards. openPegasus is packaged in tog-pegasus-[version].rpm with Red Hat Linux and is automatically installed on CDP and NSS appliances with version 6.15 R2 and later. If it has not been installed on your appliance, you can install it using the following command: rpm -ivh --nodeps tog-pegasus*.rpm

Command Central Storage (CCS) - The SMI-S Provider can be used with Veritas CommandCentral Storage (CCS), which offers a storage resource management solution by providing centralized visibility and control across physical and virtual heterogeneous storage environments. By enabling storage capacity management, centralized monitoring, and application-to-spindle mapping, CommandCentral Storage helps improve storage utilization, optimize resources, increase data availability, and reduce capital and operational costs.

Enable SMI-S (updated June 2012)


To enable SMI-S, right-click on the server in the FalconStor Management Console and select Properties. Highlight the SMI-S tab and select the Enable SMI-S checkbox.

By default, the Enable SMI-S checkbox is not selected, which means the SMI-S Provider is disabled. You will need to enable SMI-S before you can use a third-party storage resource manager to perform discovery and storage management using FalconStor NSS and CDP.
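Once SMI-S is enabled, any WBEM client can query the provider over CIM-XML. As a rough illustration (not FalconStor-specific code), the following Python sketch uses the open-source pywbem library to list storage volumes through the falconstor/Default interop namespace; the appliance address, the credentials, and the CIMOM HTTPS port 5989 (the conventional WBEM port, not documented here) are assumptions to replace with your own values.

# Illustrative sketch using the open-source pywbem WBEM client.
# The server URL, credentials, and HTTPS CIMOM port (5989) are assumptions;
# substitute the values for your own CDP/NSS appliance.
import pywbem

conn = pywbem.WBEMConnection(
    "https://cdpnss-server.example.com:5989",   # hypothetical appliance address
    ("root", "password"),                        # same account used for the server login
    default_namespace="falconstor/Default",      # interop namespace noted in this chapter
    no_verification=True,                        # skip certificate checks for a quick test only
)

# Enumerate the standard SMI-S storage volume class exposed by the provider.
for volume in conn.EnumerateInstances("CIM_StorageVolume"):
    name = volume.get("ElementName") or volume.get("DeviceID")
    size = (volume.get("BlockSize") or 0) * (volume.get("NumberOfBlocks") or 0)
    print(name, size)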


Use the SMI-S Provider


Launch the Command Central Storage console
1. Use a web browser to open https://localhost:8443 to open the CCS console.
2. Log in to the Command Central Storage console for the first time with the default user name admin and password password.
The top two panels are the main menu bar of CCS and the bottom right is the main control panel. The main menu bar and the storage section of the main control panel are important to SMI-S usage.

Add FalconStor Storage


To add FalconStor managed storage devices:
1. Navigate to Tools in the main menu bar and then select Configure a New Device in the main control panel.
2. Select Array from the drop-down menu for Device Category, select FalconStor NSS for Device Type, and click Next. The Device Configuration screen displays.
3. Enter the IP address of the server, along with the user name and password, which is the same as the server login account (i.e. root account or administration account). For the Interop Namespace field, enter falconstor/Default and accept the default for the other fields.
Once the server has been added successfully, a status screen similar to the screen shown below displays:


View FalconStor Devices


To view FalconStor storage devices:
1. Select Managing --> Summary from the main menu bar of the Command Central Storage console. Alternatively, you can select Managing --> Storage from the main menu bar.
2. In the main control panel, select Arrays. The Virtualization SAN Arrays Summary screen displays the FalconStor storage.
3. Select the corresponding device by clicking on the name. A summary of the storage device displays.

View Storage Volumes


To view storage volumes:
1. Select the Storage Volumes tab in the sub-menu on the top of the main control panel. A summary of storage volumes displays. Assigned virtual disks display in Unknown Storage Volumes [Masked to Unknown Host(s)] or (Un)Claimed Storage Volumes, while unassigned virtual disks display in Unallocated Storage Volumes [Unmasked].
2. Select an individual volume to view the storage pool it is in, and the physical LUN it relies on.

View LUNs
To view logical unit numbers (LUNs):
1. Select the LUNs tab in the sub-menu on the top of the main control panel. A summary of CDP/NSS virtual disks displays. Assigned virtual disks display as Unknown LUNs [Masked to Unknown Host(s)] or (Un)Claimed LUNs, while unassigned virtual disks display as Unallocated LUNs [Unmasked].
2. Select an individual LUN to view the storage pool it is in, and the physical LUN it relies upon.

View Disks
To view disks:
1. Select the Disks tab in the sub-menu to view LUN information. A summary of physical storage displays, along with the individual disks.


2. Select an individual disk to view which storage pool it is in and which storage volume it was created from.

View Masking Information


To view masking information:
1. Select Connectivity in the sub-menu bar to view a summary of all the FC adapters, ports, and storage views.
2. Select individual adapters and ports to view their details.
3. Select an individual view to see the port it is seen from and the storage volume it sees.


RAID Management for VS-Series Appliances


(Updated 12/1/11)
The FalconStor RAID Management Console allows you to discover, configure, and manage storage connected to VS-Series appliances. A redundant array of independent disks (RAID) consists of a set of physical disks configured according to a specific algorithm. The FalconStor RAID Management Console enables centralized management of RAID controllers, disk drives, RAID arrays, and mapped/unmapped Logical Units for the storage enclosure head and any expansion enclosure(s) connected to it. The console can be accessed from a VS-Series server after you connect to it in the FalconStor Management Console. The management responsibilities of the RAID Management Console and the FalconStor Management Console are shown below.

RAID management information is organized as follows:
Prepare to use the RAID Management Console - Prepare for RAID management.
Launch the RAID Management Console and discover storage - Launch the RAID Management Console.


Manage storage arrays in the RAID Management console:
Display a storage profile
View enclosures
Manage controller modules
Manage disk drives
Manage RAID arrays
Logical Unit Mapping
Monitor configured storage - Monitor storage from the FalconStor Management console.

Prepare for RAID management


You must complete the following before attempting any RAID management procedures:
1. Connect the FalconStor appliance and storage enclosures according to steps 1 through 4 in the FalconStor Virtual-Storage Appliances (VS/TVS) Hardware QuickStart Guide (QSG) shipped with your appliance.
2. Perform initial system configuration using the FalconStor Web Setup application, as described in the FalconStor CDP/NSS Software QuickStart Guide (also shipped with your appliance).
3. Connect to the VS server in the FalconStor Management Console, logging in as a user with Administrator status.


Preconfigured storage
Preconfigured storage enclosures are shipped with a default RAID 6 configuration that consumes all available resources. In the FalconStor Management console, default devices that have been mapped to the FalconStor host are visible under Physical Resources --> Physical Devices --> SCSI Devices.

Mapped LUs

Note: Other devices displayed in this location are not related to storage. PERC 6/i devices are internal devices on the CDP/NSS appliance; the Universal Xport device is a system device housing a driver that provides access to storage.

In the RAID Management console, these devices are known as Logical Units (LUs) (refer to Logical Unit Mapping). The FalconStor RAID Management console lets you reconfigure these default devices as needed. When mapped LUs are available in the FalconStor Management console, you can create SAN Resources. The last digit of the SCSI address (A:C:S:L) corresponds to the LUN number that you choose in the Mapping dialog. Refer to Logical Resources in the CDP/NSS Administration Guide and FalconStor Management Console online help for details on configuring these physical devices as virtual devices and assigning them to clients.
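For example, a device shown at SCSI address 2:0:1:3 in the FalconStor Management console corresponds to LUN 3 in the Mapping dialog. The short Python sketch below simply splits an A:C:S:L string to pull out that last field; the sample address is illustrative, not taken from a real configuration.

# Illustrative sketch: extract the LUN (last field) from an A:C:S:L address
# as displayed in the FalconStor Management console. The sample value is made up.
def parse_acsl(acsl):
    """Split an Adapter:Channel:SCSI ID:LUN string into its four integer fields."""
    adapter, channel, scsi_id, lun = (int(part) for part in acsl.split(":"))
    return {"adapter": adapter, "channel": channel, "scsi_id": scsi_id, "lun": lun}

example = "2:0:1:3"                # hypothetical device address
print(parse_acsl(example)["lun"])  # prints 3, the LUN chosen in the Mapping dialog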

Unconfigured storage
If your storage array has not been preconfigured, you must prepare storage using functions in the RAID Management console before you can create SAN resources in the FalconStor Management console:
Create RAID arrays (refer to Create a RAID array).
Create Logical Units (LUs) on each array (refer to Create a Logical Unit).

Map each LU to a Logical Unit Number (LUN) (refer to Define LUN mapping).


Launch the RAID Management Console


Right-click the server object and select RAID Management. The main screen, which describes the management categories available in the console, is displayed.

Discover storage
This procedure locates in-band or out-of-band storage.
1. Click the Discover button (upper-right edge of the display).
2. In the Discover Storage dialog, select the discovery method.

Select Manual (the default) to discover out-of-band storage. Enter a controller IP address and select Discover. The preconfigured controller IP addresses for controller modules on the storage enclosure head (Enclosure 0) are 192.168.0.101 (slot 0) and 192.168.0.102 (slot 1).
Note: Each controller module uses a different IP address to connect to the server. You can use either IP address for the purpose of discovering storage.

Select Automatic if you do not know the IP address. This option can detect only in-band storage and will require additional time to search the subnet.
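Before running a Manual discovery, it can help to confirm that at least one of the preconfigured controller addresses (192.168.0.101 or 192.168.0.102 by default) responds on the management network. The Python sketch below is only an optional pre-check, not part of the discovery procedure, and assumes a standard Linux ping command is available.

# Illustrative pre-check: ping the default controller management addresses
# before attempting Manual discovery. Assumes the Linux "ping" command.
import subprocess

CONTROLLER_IPS = ["192.168.0.101", "192.168.0.102"]   # defaults noted above

def responds_to_ping(ip):
    """Return True if the address answers a single ping within 2 seconds."""
    result = subprocess.call(
        ["ping", "-c", "1", "-W", "2", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result == 0

for ip in CONTROLLER_IPS:
    print(ip, "reachable" if responds_to_ping(ip) else "no response")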

A confirmation message is displayed when storage is discovered. The example below shows two storage items discovered during Automatic discovery. Each discovered storage array includes a storage enclosure head and any expansion enclosures that were preconfigured for your system.

After discovery, each storage array profile is listed in the Discover Storage dropdown. Select a profile to display components in the RAID Management console.

You can use the keyboard to navigate through the Discover Storage list. Page Up/Page Down jump between the first and last items in the list; Up and Down cursor arrows scroll through all items in the list.

Action menu

You can also manage storage profiles by clicking Action --> Manage Storage.


To discover storage, click Add to display the Discover Storage dialog. Continue as described above. To remove a storage profile, click its checkbox and then click Remove. After you do this, the profile you removed will still exist, but its storage will not be visible from the host server.

Future storage discovery


To discover an additional storage enclosure head or expansion enclosure in the future, select Discover Storage from the drop-down list, then click Discover.


Display a storage profile


After storage has been discovered, select a storage profile from the Discover Storage drop-down list. The console loads the profile using its (valid) IP address and displays the components of the array. In the navigation pane, the Storage object is selected by default; information at this level includes the storage name and IP address and summary information about all components in the array. From this object, you can configure all controller connection settings (refer to Configure controller connection settings).

Navigation pane

The navigation pane includes objects for all components in the storage array you selected in the Discover Storage drop-down list. Double-click an object to expand and display the objects below it; double-click again to collapse. When you select any object, related information is displayed in the content pane to the right. Some items include a right-click menu of management functions, while others are devoted to displaying status information.


Status bar

The Status Bar at the bottom of the screen identifies - from left to right - the host machine, the storage array name and its WWID, and the date/time of the last update to the storage configuration.

Menu bar

Action menu - Click Manage Storage to display a dialog that lets you display a storage profile and discover new storage (equivalent of Discover Storage).
Tools menu - Click Manage Event Log to view or clear the event log for the selected storage profile.
Click Exit to close the RAID Management console and return to the FalconStor Management console.

Tool bar

Click Exit to close the RAID Management console and return to the FalconStor Management console.
Click About to display product version and copyright information.

Rename storage
You can change the storage name that is displayed for the Storage object in the navigation pane. To do this: 1. Right-click the Storage object and click Rename.

2. Type a new display name. It can include up to 30 characters consisting of letters, numbers, and certain special characters: _ (underscore); - (hyphen); or # (pound sign). 3. Click OK when you are done.
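The same naming rule (up to 30 characters drawn from letters, numbers, underscore, hyphen, and pound sign) also applies when renaming RAID arrays and Logical Units later in this chapter. As an illustration only, the following Python sketch checks a candidate name against that rule before you type it into the console.

import re

# Naming rule described above: 1-30 characters drawn from letters, numbers,
# underscore (_), hyphen (-), or pound sign (#).
NAME_RULE = re.compile(r"^[A-Za-z0-9_#-]{1,30}$")

def is_valid_display_name(name):
    """Return True if the name satisfies the RAID Management console naming rule."""
    return bool(NAME_RULE.match(name))

print(is_valid_display_name("Array_01"))    # True
print(is_valid_display_name("bad name!"))   # False - space and '!' are not allowed
print(is_valid_display_name("x" * 31))      # False - longer than 30 characters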

Refresh the display


To refresh the current storage profile, right-click the Storage object and click Refresh. Note that this is not an alternative method for discovering storage.


Configure controller connection settings


After storage has been discovered, you can change the port settings for controller modules on the controller enclosure head as required by your network administrator. To do this:
1. Right-click the Storage object and select Configure Controller Connection. You can also do this from the Controller Modules object or from the object for an individual controller.

2. Select the controller from the drop-down list. The dialog displayed from the object for an individual controller provides settings for that controller only.
3. Set the IP address, subnet mask, and gateway as needed, then click Apply.
Caution: Improper network settings can prevent local or remote clients from accessing storage.


View enclosures
A storage array includes one storage enclosure head (numbered Enclosure 0) and, if connected, expansion enclosures (numbered Enclosure 1 to Enclosure x). Select the Enclosures object to display summary information for components in all enclosures in the selected storage profile.

Individual enclosures
Select a specific storage enclosure object to display quantity and status information for its various components, including batteries, power supply/cooling fan modules, power supplies, fans, and temperature sensors.


Storage enclosure head

Expansion enclosure


Manage controller modules


Each enclosure head (Enclosure 0) has two RAID controller modules. Select the Controller Modules object to display summary information and status for both controllers, as well as a controller image that provides at-a-glance controller status. The controller icon in the navigation pane also indicates status:

Controller is online.
Controller needs attention.
Controller activity is suspended.
Controller has failed.
Controller is in service mode.
Controller slot is empty.

You can configure connection settings for both controllers from this object (refer to Configure controller connection settings).

RAID controller firmware must be upgraded from time to time (refer to Upgrade RAID controller firmware).


Individual controller modules


Select a controller object to display detailed information and configure its connection settings. The selected controller is outlined in yellow and will also show controller status.

You can configure connection settings for both controllers from this object (refer to Configure controller connection settings).


Manage disk drives


The storage enclosure head has 12 or 24 drives; an expansion storage enclosure will have either 12 or 24 drives. The display also includes an image for each enclosure, showing at-a-glance drive status.

Interactive enclosure images

The enclosure image in the content pane provides information about any drive, regardless of the disk object you have selected in the navigation pane. Enclosure 0 always represents the storage enclosure head. Enclosures 1 through x represent expansion enclosures. (When an enclosure has 24 drives, drive images are oriented vertically.) Hover your mouse over a single drive image to display enclosure/slot information and determine whether the drive is assigned or unassigned. Hovering adds a yellow outline to the drive. Slot statuses include:

Unassigned - available to be assigned to an array.
Assigned to an array.
Set as a hot spare and in use to replace a failed disk.
Set as a hot spare, on standby.
Unassigned disk removed - empty slot.
Disk replaced - assigned.
Disk replaced - unassigned.

The following disk images indicate a disk that is not healthy:
Previously assigned to an array but was removed.
Previously assigned to an array but failed.
Not previously assigned to an array but failed.
Hot spare failed while in use.
Hot spare standby failed.


Select the Disk Drives object to display summary and status information for all drives in all enclosures in the selected profile, including layout, status, disk mode, total capacity, and usable capacity, as well as interactive enclosure images (refer to Interactive enclosure images).


Individual disk drives


In the navigation pane, the icon for an individual disk indicates drive mode and status:
Assigned, status optimal
Assigned, status failed
Assigned, being replaced (rebuild action)
Unassigned, status optimal
Unassigned, status failed
Unassigned, replacing failed drive (rebuild action)
Hot spare in use, status optimal
Hot spare in use, status failed
Hot spare standby, status optimal
Hot spare standby, status failed

Select an individual disk drive object to display additional details about the drive. The selected drive is outlined in green in the interactive enclosure image.

You can also configure the selected drive to be a global hot spare.


Configure a hot spare


Configuring a disk as a hot spare enables it to replace any failed disk automatically. This option is available for the selected disk only if the disk is unassigned and its status is optimal (normal). To create a global spare, right-click an unassigned disk and select Hot Spare - Set. The procedure will start automatically.

When the procedure is done, the disk icon is changed to standby mode in all interactive enclosure displays (refer to Interactive enclosure images).

Remove a hot spare


If a hot spare is in standby mode (and not in use), you can remove the hot spare designation. To do this: Right-click the disk and select Hot Spare - Remove.

When the procedure is done, the disk icon image changes to unassigned in all interactive enclosure displays.


Manage RAID arrays


A RAID array is a collection of disks chosen from all enclosures in the selected storage profile. Select the RAID Arrays object to display summary information about all arrays, including name, status, RAID level, total capacity, total free capacity, and physical disk type. When you select this object, the disks associated with all arrays are outlined in blue in the interactive enclosure image.

From this object, you can create a RAID array, then create Logical Units (LUs) on any array and map them to FalconStor hosts (refer to Create a RAID array and Create a Logical Unit).


Create a RAID array


You can create a RAID array using unassigned disks chosen from all enclosures in the selected storage profile. To do this:
1. Right-click the RAID Arrays object and select Create Array.
2. Type a name for the RAID and select the RAID level.

3. Select physical disks in the interactive enclosure image. Drive status must be Optimal, Unassigned (view hover text to determine status). For most effective use of resources, all disks in a RAID array should have the same capacity. If you select a disk with a different capacity than the others you have selected, a warning (Warning: disks differ in capacity) will be displayed. As you select disks, the Number of Disks in RAID and RAID Capacity values increase; selected disks show a check mark.
4. Select Create when you are done.
Several messages will be displayed while the RAID is created; a confirmation message will display when the process is complete. The storage profile is updated to include the new array.
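Usable capacity depends on the RAID level chosen in step 2 and, when disks differ in size, is generally limited by the smallest selected disk, which is why the console warns about mixed capacities. The Python sketch below shows the conventional usable-capacity arithmetic for the RAID levels this console supports; it is a rough planning aid, not output from the product.

# Rough planning arithmetic for usable capacity by RAID level.
# Conventional formulas: RAID 0 keeps all capacity, RAID 1 mirrors (half),
# RAID 5 loses one disk to parity, RAID 6 loses two. Arrays are generally
# limited by the smallest member disk, hence the min() below.
def usable_capacity_gb(raid_level, disk_sizes_gb):
    n = len(disk_sizes_gb)
    per_disk = min(disk_sizes_gb)     # mixed sizes waste space on larger disks
    if raid_level == 0:
        return n * per_disk
    if raid_level == 1:
        return (n // 2) * per_disk
    if raid_level == 5:
        return (n - 1) * per_disk
    if raid_level == 6:
        return (n - 2) * per_disk
    raise ValueError("unsupported RAID level: %r" % raid_level)

print(usable_capacity_gb(6, [1000] * 12))   # 10000 GB usable from twelve 1 TB disks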


Create a Logical Unit


You must define a Logical Unit (LU) on an array in order to map a device to a FalconStor host. To do this:
1. Right-click the RAID Arrays object or the object representing an individual array and select Create Logical Unit.
2. Type the label for the LU; this is the name that will appear in the RAID Management console.

3. If you began the procedure from the RAID Arrays object, select the RAID array on which you want to create the LU from the RAID drop-down list, which shows the current capacity of the selected array. If you began the procedure from an individual array, the current capacity for that array is already displayed.
4. Enter a capacity for the LU and select GB, TB, or MB from the drop-down list.
5. The Logical Unit Owner (the enclosure controller) is selected by default; do not change this selection.
6. You can assign (map) the LU to the FalconStor host at this time. The Map LUN option is selected by default. You can do this now, or uncheck the option and map the LU later (refer to Unmapped Logical Units).
7. Select a host from the drop-down list.
8. Choose a LUN designation from the drop-down list of available LUNs.
9. Select Create when you are done.


Several messages will be displayed while the LU is created; a confirmation message will display when the process is complete. The storage profile is updated to include the new LU and you will see it appear in the display.

Individual RAID arrays


In the navigation pane, the icon for an individual array provides at-a-glance array status:

RAID 0, status optimal
RAID 0, status degraded (one or more disks have failed)
RAID 0, status failed
RAID 1, status optimal
RAID 1, status degraded (one or more disks have failed)
RAID 1, status failed
RAID 5, status optimal
RAID 5, status degraded (one or more disks have failed)
RAID 5, status failed
RAID 6, status optimal
RAID 6, status degraded (one or more disks have failed)
RAID 6, status failed


Select a RAID array object to display summary details and status information about physical disks assigned to the array, as well as the mapped Logical Units (LUs) that have been created on the array. When you select an array, the associated disks are outlined in green in the interactive enclosure image.

The following functions are available from the selected array:
Create a Logical Unit
Rename the array
Delete the array
Check RAID array actions
Replace a physical disk


Rename the array


You can change the name that is displayed for an array in the navigation pane at any time. To do this: 1. Right-click the array object and click Rename.

2. Type a new display name. It can include up to 30 characters consisting of letters, numbers, and certain special characters: _ (underscore); - (hyphen); or # (pound sign). 3. Click OK when you are done.

Delete the array


To delete an array, expand the RAID Arrays object until you can see the individual array objects. When you delete an array, all data will be lost and cannot be retrieved.
1. Right-click the array object and select Delete Array.
2. Type yes in the dialog to confirm that you want to delete the array, then select OK.

When the array has been deleted, the storage profile is updated automatically.


Check RAID array actions


LU activities may take some time. Typical actions include:
Initialization - creating a Logical Unit
Rebuild - swapping in a hot spare to replace a failed disk
Copy-back - replacing a failed disk with an unassigned healthy disk, removing the hot spare from the configuration

To check current actions, right-click the object for an individual array and select Check Actions. A message reporting the progress of any pending action will be displayed.

To check actions on another array, select it from the drop-down list. Click OK to close the dialog.

Replace a physical disk


When a disk has failed in the array, the hot spare takes its place automatically. You need to follow up and replace the failed disk with an unassigned healthy disk, freeing up the hot spare. A failed disk is easily identified in the Disk Drive area of the console.


In the RAID Array area of the console, the array icon shows its status as degraded and the disk status is displayed as failed.


Right-click the array object in the navigation pane and select Replace Physical Disk. The Replace Physical Disk dialog shows the failed disk. In the array image in the dialog, select an unassigned, healthy disk to replace the failed disk. The disk you select will show a green check mark and the disk ACSL will be displayed in the dialog.

Click Replace Disk. A rebuild action will start. While this action is in progress, the icons for the replacement disk and the disk being replaced change to the replace state. When the action is done, the replacement disk status changes to assigned/optimal.


Logical Units
Double-click an Array object to display the objects for mapped Logical Units (LUs) on the array. Select an LU object to display status, capacity, WWPN, RAID information, ownership, cache, and other information.

The following functions are available from the selected LU:
Define LUN mapping
Remove LUN mapping
Rename LU
Delete Logical Unit

Define LUN mapping


If you did not enable LUN mapping when you created a Logical Unit, you can do this at any time. To do this:


1. Right-click the Logical Unit object in the console and select Define LUN mapping. (You can also do this from LUs listed under the Unmapped Logical Units object; refer to Unmapped Logical Units.)

2. Choose a LUN from the drop-down list of available LUNs and select OK.
Several messages will be displayed while the LUN is assigned and a confirmation message will display when the process is complete. The storage profile is updated. After you perform a rescan in the FalconStor Management console, you can prepare the new device for assignment to clients. In the console, the last digit of the SCSI address (A:C:S:L) corresponds to the LUN number you selected in the Mapping dialog.


Remove LUN mapping


Removing LUN mapping removes a physical device from the FalconStor console and prevents the server from accessing the device. To do this:
1. Right-click the LU object and select Remove LUN Mapping.
2. Type yes in the dialog to confirm that you want to remove LUN mapping, then select OK.

Several messages will be displayed while the mapping is removed and a confirmation message will display when the process is complete. The storage profile is updated. You can re-map the LU at a later time, then rescan in the FalconStor Management console to discover the device.

Rename LU
You can change the name that is displayed for an LU in the navigation pane at any time. To do this: 1. Right-click the LU object and click Rename.

2. Type a new display name. It can include up to 30 characters consisting of letters, numbers, and certain special characters: _ (underscore); - (hyphen); or # (pound sign). 3. Click OK when you are done.

Delete Logical Unit


To delete an LU, expand the object for an individual RAID array until you can see the individual LU objects. When you delete an LU, all data will be lost and cannot be retrieved.
1. Right-click the LU object and select Delete Logical Unit.
2. Type yes in the dialog to confirm that you want to delete the LU, then select OK.

Several messages will be displayed while the LU is deleted and a confirmation message will display when the process is complete. When the LU has been deleted, the storage profile is updated.


Logical Unit Mapping


Select this object to display current mapping information for all Logical Units created on all RAID arrays, including mapped and unmapped LUs. The display also includes summary information about the host machine, which represents the controllers on all servers connected to the storage array, such as host and interface type and port information.

You can expand this object to display unmapped and mapped LUs.

Unmapped Logical Units


Selecting Unmapped Logical Units displays LUs that have not been mapped to a host machine and are therefore not visible in the FalconStor Management Console.


From this object, you can define LUN mapping for any LU with Optimal status (refer to Define LUN mapping). Select an individual unmapped LU to view configuration details.

From this object you can rename the LU (refer to Rename LU) or define LUN mapping (refer to Define LUN mapping).


Mapped Logical Units


Display information for mapped LUs from the Host object. Host information includes the host OS, the type of interface on the host controller, and the WWPN and alias for each port.

This screen includes the mapped Logical Units that are visible in the FalconStor console, where the last digit of the SCSI address (A:C:S:L) corresponds to the number in the LUN column of this display - this is the LUN number you selected in the Mapping dialog.


Upgrade RAID controller firmware


When an upgrade to RAID controller firmware is available, FalconStor will send a notification to affected customers. Contact FalconStor Technical Support to complete the following steps to upgrade firmware:
1. Download firmware files as directed by Technical Support.
2. Select Tools --> Upgrade Firmware in the menu bar.

3. To complete Stage 1, browse to the download location and select the firmware file. If you also want to upgrade non-volatile static random access memory (NVSRAM), browse to the download location again and select the file. Click Next when you are done.
4. To complete Stage 2, transfer the selected files to a server location specified by Technical Support.
5. To complete Stage 3, download the firmware to controllers.
6. In Stage 4, activate the firmware.


Event log
To display an event log for the selected storage profile, select Tools --> Manage Event Log --> View Event Log in the menu bar.

All events are shown by default; three event types are recorded:
Informational events that normally occur.
Warnings related to unusual component conditions.
Critical errors such as device failure or loss of connectivity.

Filter the event log


Select an event type in the Events list to display only one event category. Click a column heading to sort event types, components, locations, or descriptions. Select an item in the Check Component list to display events only for the RAID array, RAID controller modules, physical disks, virtual disks, or miscellaneous events.

Click Quit to close the Event Log.

Clear the event log


To remove events from the log for the currently displayed storage profile, click Tools --> Manage Event Log --> Clear Event Log in the menu bar, then select OK in the confirmation dialog.

Monitor storage from the FalconStor Management console


While all storage configuration must be performed in the RAID Management console, you can monitor storage status information in the FalconStor Management console from the Enclosures tab, which is available in the right-hand pane when you select the server object. Storage component information includes status of expansion enclosures and their components; you can also display information about the host server, management controllers, and other devices.

Storage information
To display information about storage, make sure the Check Storage Components option is checked. Choose a storage profile from the drop-down list. Click Refresh to update the display with changes to storage resources that may have been made by another user in the RAID Management console. If you uncheck this option, information about storage is removed from the display immediately.


Server information
To include information about the host server and other devices, make sure the Host IPMI option is checked. You can display information for as many or as few categories as you like:
Chassis status
Management controller (MC) status
Sensor information
FRU device information
LAN Channel information

If you uncheck an option, related information is removed from the display immediately.


Index
A
Access control Groups 297 SAN Client 67 SAN Resources 105 Storage pools 74 Access rights Groups 297 IPStor Admins 43 IPStor Users 43 Read Only 96 Read/Write 96 Read/Write Non-Exclusive 96 SAN Client 67 SAN Resources 105 Accounts Manage 42 ACSL Change 67 Activity Log 37 Adapters Rescan 57 Administrator Management 42 AIX Client 66 Delete SAN Resource 106 Expand virtual device 104 SAN Resource re-assignment 96 Alias 60, 203 APC PDU 214, 218 Appliance Check physical resources 113 Log into 109 Remove storage device 115 Start 107 Statistics 114 Stop 108 telnet access 109 Uninstall 117 Appliance-based protection 21 Asymmetric Logical Unit Access (ALUA) 497 Authentication 191 Authorization 192 Auto Recovery 232 AWK 467

B
Backup dd command 401 To tape drive 401 ZeroImpact 398 Block devices 57, 489 Troubleshooting 490 BMC Patrol SNMP integration 434 Statistics 435 View traps 435

C
CA Unicenter TNG Launch FalconStor Management Console 431 SNMP integration 430 Statistics 431 View traps 431 Cache resource 240 Create 240 Disable 245 Enlarge 245 Suspend 245 Write 64 capacity-on-demand 70 CCM error codes 526 CCS Veritas Command Central Storage 614 CDP journal 314 Add tag 310 Mirror 310 Protect 310 Recover data 314 Status 309 Tag 310, 316 Visual slider 314 CDP/NSS Licensing 34 CDP/NSS Server Properties 36 Central Client Manager (CCM) 21 CHAP secret 46 CLI Troubleshooting 513 Client Add 65, 189 iSCSI 121

AIX 66 Delete SAN Resource 106 Expand virtual device 104 SAN Resource re-assignment 96 Assignment Solaris 100 Windows 100 Definition 17 HP-UX 66 Delete SAN Resource 106 iSCSI 119 Linux 66 Expand virtual device 104 Solaris 66, 100 Expand virtual device 104 Troubleshooting 502 Windows 100 Expand virtual device 104 Client Throughput Report 138 Command Line Interface 21, 407 Commands 409 Common arguments 408 Event Log 409 Failover 409 Installation and configuration 407 Usage 407 Community name Changing 436 Compression Replication 342 Configuration repository 33, 208 Mirror 208 Configuration wizard 30 Connectivity 46 Console 28 Administrator Management 42 Change password 45 Connect to server after failover 211 Connectivity 46 Custom menu 69 Definition 18 Discover Storage Servers 29, 33 Import a disk 59 Log 68 Log Options 68 Logical Resources 62 Options 68 Physical Resources 54 Replication 64 Rescan adapters 57

SAN Clients 65 Search 32 Server properties 36 Start 28 System maintenance 50 Troubleshooting 498 User interface 32 Continuous Data Protection (CDP) 301 Continuous replication 336, 346 Enable 339 Resource 347, 348 Create Primary TimeMark - 339 Cross mirror Check resources & swap 224 Configuration 206 Recover from disk failure 223 Requirements 200 Re-synchronize 224 Swap 196 Troubleshooting 510 Verify & repair 224

D
Data access 191 Data migration 79 Data protection 274 Data tab 152 dd command 401 Debugging 503 Delta Mode 339 Delta Replication Status Report 140, 349 Devices Failover 203 Scan LUNs greater than zero 495 Disaster recovery Import a disk 59 Replication 24, 335 Disk Foreign 59 IDE 57 Import 59 System 55 Disk expansion behavior 369 Disk Space Usage Report 141 Disk Usage History Report 142 DiskSafe 21, 42, 192 Linux 499 DynaPath 21, 98 DynaPath-FC


Fibre Channel Target Mode 177

E
Email Alerts Configuration 458 Exclude system log entries 467 Include system log entries 466 Modifying properties 469 Signature 460 System log check 466 System log ignore 467 Triggers 460, 470 Custom email destination 470 New script 471 Output 471 Return codes 471 Sample script 471 X-ray 464 EnableNOPOut 127 Encryption Replication 342 Event Log 32, 128 Command Line Interface 409 Export 130 Filter information 129 Print 130 Refresh 130 Sort information 129 Troubleshooting 500 Expand virtual device 102 Linux clients 104 Solaris clients 104 Troubleshooting 490 Windows 2000 Dynamic disks 104 Windows clients 104 Export data From reports 136

F
Failover 194, 195 And Mirroring 236, 273 Asymmetric 196 Auto Recovery 220, 231 Auto recovery 222 Check Consistency 231 Command Line Interface 409 Configuration 198 Connect to primary after failover 211 Consistency check 231

Convert to mutual failover 230 Cross mirror Check resources & swap 224 Configuration 206 Recover from disk failure 223 Re-synchronize 224 Swap 196 Verify & repair 224 Exclude physical devices 230 Fibre Channel Target failure 202 Fix failed server after failover 222 Force a takeover 232 Heartbeat monitor 204 Intervals 231 Mutual failover 195 Network connection failure 202 Network connectivity failure 195 Physical device change 229 Power control 216 APC PDU 214, 218 HP iLO 214, 217 IPMI 214, 217 RPC100 214, 217 SCSI Reserve/Release 217 Primary/Secondary Servers 195 Recovery 195, 220, 231 Remove configuration 234 Replication note 370 Requirements 198 Asymmetric mode 200 Clients 199 Cross mirror 200 General 198 Shared storage 199 Sample configuration 197 Self-monitor 204 Server changes 229 Server failure 204 Setup 205 Status 219 Storage device failure 203, 204 Storage device path failure 203 Subnet change 230 Suspend/resume 233 TimeViews 236 Troubleshooting 509 Verify physical devices match 231 FalconStor Management Console 18, 28 Fibre Channel


Fabric 162 Point-to-Point 162 Fibre Channel Configuration Report 145 Fibre Channel Target Mode 162, 166 2 Gig switches 166 Access new devices 177 Assign resources to clients 176 Client HBA failover settings DynaPath 168 HP-UX 168 Linux 168 DynaPath-FC 177 Enable 170 Fabric topology 168 Failover HBAs 173 Limitations 173 Multiple switches 173 Failover configuration 173 Hardware configuration 164, 168 Initiator mode 171 Installation and configuration 163 Multiple paths 176 Persistent binding Clients 168 Downstream 164 QLogic configuration 166 QLogic ports 171 Target mode 171 Target port binding 164 Troubleshooting clients 506 Zoning 165 FileSafe 22, 42, 192 FileSafe Server 22 filesystem utility 79 Filtered Server Throughput Report 156 Foreign disk 59 format utility 101, 104

H
Halt server 52 health monitoring 212 heartbeat 212 High availability 194 Host-based protection 22 Hostname Change 31, 51 HotZone 22, 246 Configure 247 Disable 253 Prefetch 246 Read Cache 246 Status 251 Suspend 253 HP iLO 214, 217 HP OpenView SNMP integration 428 HP-UX 16 HP-UX Client 66 Delete SAN Resource 106 HyperTrac 22

I
IBM Tivoli NetView SNMP integration 432 IDE drives 57 Import Disk 59 In-Band Protection 21 Installation SNMP BMC Patrol 434 CA Unicenter TNG 430 HP OpenView 428 IBM Tivoli NetView 432 IP address changing 489 IPBonding mode options 333 IPMI 52, 214, 217, 465 Filter 52 Monitor 52 IPStor Admins Access rights 43 IPStor Users Access rights 43 ipstorconsole.log 68 iSCSI Client 22

G
Global Cache 244 Global options 352 Groups 63, 295 Access control 297 Add resources 297 Create 295 Replication 296 GUID 22, 59, 63


Failover 127, 199 Troubleshooting 505 iSCSI Target 23 iSCSI Target Mode 119 Initiators 119 Targets 119 Windows Add iSCSI client 121 Disable 127 Enable 120 Stationary client 122 ismon Statistics 114

J
Jumbo frames 51, 502

K
Keycodes 34 kisdev# 401

L
Label devices 100 Licensing 30, 34 Link Aggregation 333 Linux Client 66 Expand virtual device 104 Troubleshooting 506 Local Replication 24, 335 Logical Resources 23, 62 Expand 102 Icons 63, 500 Status 63, 500 Logs 128 Activity log 37 Console 68 Event log refresh 68 ipstorconsole.log 68 LUN Scan LUNs greater than zero 495

MIB module 423 Microscan 23, 40, 343, 352 Microsoft iSCSI initiator 127 default retry period 127 Migrate Drives 79 Mirroring 254 And Failover 236, 273 CDP journal 310 Configuration 256 Configuration repository 208 Expand primary disk 267 Fix minor disk failure 266 Global options 272 Monitor 261 Performance 39, 272 Promote the mirrored copy 264 Properties 272 Rebuild 271 Recover from failure 266 Remove configuration 273 Replace disk in active configuration 266 Replace failed disk 266 Replication note 370 Requirements 256 Resume 271 Resynchronization 40, 262, 272 Setup 256 Snapshot resource 283 Status 264 Suspend 271 Swap 264 Synchronize 267 MPIO 491 MTU 51 Multipathing 60, 403 Aliasing 60 Load distribution 404 load distribution 404 Path management 405 Mutual CHAP 46, 47

M
MaxRequestHoldTime 127 MCS 491 Menu Customize Console 69 MIB 23 MIB file 423 loading 424, 490

N
Near-line mirroring 371 After configuration 380 Configuration 372 Fix minor disk failure 396 Global options 394 Monitor 375


Overview 371 Performance 394 Properties 395 Rebuild 390 Recover data 382 Recover from failure 396 Remove configuration 395 Replace disk in active mirror 397 Replace failed disk 396 Requirements 372 Resume 394 Re-synchronization 376 Rollback 383 Setup 372 Status 381 Suspend 394 Swap 390 Synchronize 390 NetView SNMP integration 432 Statistics 433 Network configuration 30, 50 Network connectivity 500 Failure 195 NIC Port Bonding 23, 331 change IP address 489 NNM SNMP integration 428 Statistics 429 NPIV 23, 166, 213 NSS What is? 14

Near-line mirroring 394 Replication 39, 352 Persistent binding 54, 496 Clients 168 Downstream 164 Troubleshooting 496 Persistent reservation 125 Physical device Prepare 55 Rename 56 Repair 60 Test throughput 60 Physical Resource Allocation Report 148 Physical resources 54, 75 Check 113 Icons 55 IDE drives 57 Prepare Disks 80 Troubleshooting 490 Physical Resources Allocation Report 147 Physical Resources Configuration Report 146 Ports 193 usage 608 Power Control options 213, 216 Prefetch 23, 246 Prepare disks 55, 80 pure-ftpd package 50

Q
QLogic Configuration 166 HBA 213 iSCSI HBA 116 Ports 171 Target mode settings 166 Queue Depth 506 Quiescent 308 Quota Group 44, 45 User 44, 45

O
OID 23, 26 openPegasus 614 Out of kernel resources error 514

P
Passwords Add/delete administrator password 42 Change administrator password 42, 45 Patch Apply 48 Rollback 48 Path failure 203 Performance 239 Mirror 39 Mirroring 272

R
RAID Management Array 632 Automatic discovery 623 Check actions 642 Console 622 Navigation tree 625 Controller modules 630


Controller settings 627 Discover Storage 622, 624 Automatic 623 Expansion enclosures 624 Manual 622 Disk drive Assigned 632 Available 632 Empty 632 Failed 632 Hot spare 632 Remove 635 Set 635 Removed 632 Standby 632 Disk drive images 632 Disk drives 632 Interactive images 632 Enclosures 628 Expansion enclosures 628 FalconStor Management console Discover storage 654 Enclosures tab 654 IPMI information 655 Firmware upgrade 652 Hardware QuickStart Guide 619 Host information 649, 651 Hot spare Remove 635 Set 635 In-band 622 Individual controller modules 631 Individual disk drives 634 Individual enclosure 628 Expansion enclosure 629 Storage enclosure head 629 Individual Raid arrays 639 Logical Unit Mapping 649 Logical Units 620, 636, 640, 645 Create Logical Unit 638 Define LUN mapping 645, 650 Delete Logical Unit 648 Remove LUN mapping 647 Rename Logical Unit 647 Unmapped Logical Units 649 LUs 620 Manual discovery 622 Mapped Logical Units 651 Monitor storage 654

Out-of-band 622 Preconfigured storage 620 RAID Arrays Check actions 642 Create RAID Array 637 Delete RAID Array 641 Replace physical disk 642 RAID arrays 636 Logical Units 645 SAN Resources 620 Storage enclosure head 628, 632 Storage object 625 Storage profile 625 Read Cache 23 Reboot server 52 Recover data with TimeView 314 RecoverTrac 23 Relocate a replica 367 remote boot 493 Remote Replication 24, 335 Repair Paths to a device 60 Replica resource Protect 347 Replication 24, 335, 352 Assign clients to replica disk 353 Change configuration options 355 Compression 342 Configuration 337 Console 64 Continuous 336 Continuous replication resource 347 Delta 336 Delta mode 339 Encryption 342 Expand primary disk 369 Failover note 370 First replication 346 Force 357 How it works 336 Local 335 Microscan 40, 343, 352 Mirroring note 370 Performance 39, 352 Parameters 352 Policies 341 Primary disk 24, 335 Promote 353 Recover files 355


Recreate original configuration 354 Remote 335 Remove configuration 368 Replica disk 24, 335 Requirements 337 Resume schedule 357 Reversal 354, 365 Scan 24, 354 Setup 337 Start manually 357 Status 349 Stop in progress 357 Suspend schedule 357 Switch to replica disk 353 Synchronize 348, 357 Test 352 Throttle 40 TimeMark note 370 TimeMark/TimeView 355 Troubleshooting 511 Reports 131 Client Throughput 139 Creating 132 Global replication 161 Delta Replication Status 349 Disk Space Usage 141 Export data 136 Filtered Server Throughput 156 Physical Resource Allocation 148 Physical Resources Allocation 147 Physical Resources Configuration 146 SAN Client Usage Distribution 153 SAN Client/Resources Allocation 154 SAN Resource Usage Distribution 156 SAN Resources Allocation 155 SCSI Channel Throughput 150 SCSI Device Throughput 152 Server Throughput 139 Types 138 Global replication 161 Viewing 136 repositories 22 Rescan 209 Adapters 57 Resource IO Activity Report 148 Retention 24 RPC100 214, 217

S
SafeCache 24, 239, 342 Cache resource 240 Configure 240 Disable 245 Enlarge 245 Properties 245 Status 245 Suspend 245 Troubleshooting 512 SAN Client 65 Access control 67 Add 65, 189 iSCSI 121 AIX 66 Assign SAN Resources 96 Definition 17 HP-UX 66 iSCSI 119 Linux 66 Solaris 66, 100 Windows 100 SAN Client / Resources Allocation Report 154 SAN Client Usage Distribution Report 153 SAN Resource tab 152 SAN Resource Usage Distribution Report 156 SAN Resources 62, 75, 76 Access control 105 Assign to Clients 96 Create service enabled device 92 Create virtual device 80 Creating 80 Delete 106 Physical resources 76 Prepare Disk 80 Virtual devices 76 Virtualization examples 76 SAN Resources Allocation Report 155 SCSI Aliasing 60, 203 Troubleshooting adapters/devices 494 SCSI Channel Throughput Report 150 SCSI Device Throughput Report 152 SCSI Devices tab 406 Security 191 Authentication 191 Authorization 192 Data access 191 Disable ports 193


Physical security of machines 193 Recommendations 192 Storage network topology 193 System management 191 Server Authentication 191 Authorization 191 Check physical resources 113 Definition 17 Discover 29, 33 Import a disk 59 Log into 109 Network configuration 50 Properties 36 Remove storage device 115 Scan LUNs greater than zero 495 Start 107 Statistics 114 Stop 108 telnet access 109 Uninstall 117 X-ray 507 Server Throughput Report 156 Service Enabled Devices 79 Creating 92 Troubleshooting 513 Service enabled devices Creating 80 SMI-S 25, 613, 614 enable 614 Snapshot 274 Agent 25 notification 280, 307 trigger 307 Resource Check status 282 Delete 283 Expand 283 Mirror 283 offline 488 Options 283 Properties 283 Protect 283 Reinitialize 283 Shrink Policy 283 Troubleshooting 488 Setup 274 Snapshot Copy 290 Status 294

Snapshot Resource expand 278 SNMP Advanced topics 436 BMC Patrol 434 CA Unicenter TNG 430 Changing the community name 436 HP OpenView 428 IBM Tivoli NetView 432 Implementing 425 Integration 423 Limit to subnetwork 436 Manager on different network 436 Traps 38, 424 Troubleshooting 513 Using a configuration for multiple Storage Servers 436 snmpd.conf 436 Software updates Add patch 48 Rollback patch 48 Solaris Client 66 Expand virtual device 104 Troubleshooting 507 Virtual devices 100 Statistics ismon 114 Stop Takeover option 221 Storage 25 Remove device 115 Storage Cluster Interlink 196, 198 Port 25, 198, 210 Storage device path failure 203 Storage Pool Configuration Report 159 Storage pools 70 Access control 74 Administrators 70 Allocation Block Size 73 Create 71 Manage 70 Properties 72 Security 74 Set access rights 74 Tag 74 Type 72 Storage quota 44 Storage Server Authentication 191 Authorization 191


Connect in Console 29 definition 17 Discover 29, 33 Import a disk 59 Network configuration 50 Scan LUNs greater than zero 495 Troubleshooting 507 uninstall 488 X-ray 507 Swapping 224 Sync Standby Devices 196, 510 Synchronize Out-of-Sync Mirrors 40, 394 Synchronize Replica TimeMark 339 System Disk 55 log 466 Management 191 tab 152 System maintenance 50 Halt 52 IPMI 52 Network configuration 50 Reboot 52 Restart network 51 Restart the server 51 Set hostname 51

T
Target mode settings QLogic 166 Target port binding 164 target server 335 Thin Provisioning 25, 77, 82, 256, 337 Throttle 40 speed 363 tab 362 Throttle window Add 362 Delete 362 Edit 362 Throughput Test 60 TimeMark 25 Replication note 370 retention 24, 290 Troubleshooting 512 TimeMark/CDP 301 Add comment 310 Change priority 310

Copy 312 Create manually 310 Delete 329 Disable 330 Failover 236 Free up storage 329 Maximum reached 306 Policies 325, 329 Priority 306, 311 Replication 330 Resume CDP 329 Roll forward 324 Rollback 324 Scheduling 304 Setup 302 Status 308 Suspend CDP 329 TimeView 301, 314 TimeView 26, 301, 314 Recover data 314 Remap 321 Tivoli SNMP integration 432 Trap 26 Traps 424 Trigger 26 Trigger Replication after TimeMark 370 Troubleshooting 488, 505 Block devices 490 CLI 513 Client Connectivity 502 Windows 503 Console launch 498 Cross mirror 510 Debugging 503 Event log 500 Failover 509 Cross mirror 510 FC storage 496 Fibre Channel Client 506 iSCSI Client 505 Jumbo frame support 502 Linux Client 502, 506 Network connectivity 500 Physical resources 490 Replication 511 SafeCache 512 SCSI adapters and devices 494 Linux Client 494

Service Enabled Devices 513 Snapshot resources 488 SNMP 513 Solaris Client 507 TimeMark 512 Virtual device expansion 490 Windows client 503

Z
ZeroImpact 26 backup 398 Zoning 165 Soft zoning 165

U
UEFI 493 USEQUORUMHEALTH 198 User Quota Usage Report 160

V
VAAI 26 Virtual devices 76 Creating 80 Expand 102 expansion FAQ 490, 491 Virtualization 76 Examples 76 VMware 168 VMware ESX server vmkping command 91 Volume set addressing 164, 174, 496 VSA 164, 174, 496 enable for client 496

W
watermark value 341 Windows 2000 Dynamic disks Expand virtual device 104 Windows Client Expand virtual device 104 Troubleshooting 503 Virtual devices 100 World Wide Port Names 175 Write caching 64 WWN Zoning 26 WWPN 98, 175 mapping 98

X
X-ray 507 CallHome 464 System Information file 465

Y
YaST 50
