FibreCAT SX Series Operating Manual
Certified documentation
according to DIN EN ISO 9001:2000
To ensure a consistently high quality standard and
user-friendliness, this documentation was created to
meet the regulations of a quality management system which
complies with the requirements of the standard
DIN EN ISO 9001:2000.
cognitas. Gesellschaft für Technik-Dokumentation mbH
www.cognitas.de
Contents

1 Introduction . . . . . . . 7
1.1 Product Overview . . . . . . . 7
1.2 Management Software . . . . . . . 8
1.2.1 FibreCAT SX Manager Web-Based Interface . . . . . . . 8
1.2.2 FibreCAT SX Manager Command-Line Interface . . . . . . . 8
1.5 Notational Conventions . . . . . . . 10
1.6 Technical Data . . . . . . . 11
2 Important Notes . . . . . . . 17
2.1 Notes on Safety . . . . . . . 17
2.3 CE Certificate . . . . . . . 19
2.4 RFI Suppression . . . . . . . 20
2.6 Notes on Transportation . . . . . . . 20
2.7 Environmental Protection . . . . . . . 21
2.8 Site Requirements . . . . . . . 22
2.8.1 Physical Requirements . . . . . . . 22
2.8.1.1 Dimension and Weight Specifications . . . . . . . 22
2.8.1.2 Weight and Placement Guidelines . . . . . . . 23
2.8.1.3 Ventilation Requirements . . . . . . . 23
2.8.2 Environmental Requirements . . . . . . . 24
2.8.3 Electrical Requirements . . . . . . . 24
2.8.3.1 Electrical Guidelines . . . . . . . 24
2.8.3.2 Site Wiring and Power Requirements . . . . . . . 25
2.8.3.3 Cabling Requirements . . . . . . . 26
2.8.4 Management Host Requirements . . . . . . . 26
3.1 Controller Enclosure . . . . . . . 27
3.1.1 Components . . . . . . . 27
3.1.2 Components and Indicators at the Front Side . . . . . . . 28
3.1.3 Ports and Switches at the Back Side . . . . . . . 29
3.1.4 Indicators at the Back Side . . . . . . . 31
3.2 Expansion Enclosure . . . . . . . 33
3.2.1 Components . . . . . . . 33
3.2.2 Components and Indicators at the Front Side . . . . . . . 33
3.2.3 Ports and Switches at the Back Side . . . . . . . 33
3.2.4 Indicators at the Back Side . . . . . . . 35
5 Installing Enclosures . . . . . . . 61
5.1 Safety Precautions . . . . . . . 61
6.3 Driver Settings . . . . . . . 77
6.8 Installing a License . . . . . . . 81
6.10 Volume Mappings . . . . . . . 82
6.13 Next Steps . . . . . . . 84
1 Introduction

This guide describes how to install, initially configure and operate the FibreCAT SX series storage systems, and applies to the following models:

This guide does not apply to the FibreCAT SX40 model, which is covered by separate documents.

Where there are no differences between the five controller enclosure models, they are referred to collectively as the FibreCAT SX controller enclosure.
Management Software

NOTE
When a FibreCAT SX controller enclosure is configured for the first time, the IP addresses of its RAID controller(s) must be set. This step can only be performed using the CLI through an RS-232 connection, as described in Setting the IP Address Using the CLI on page 73.
All subsequent use of the CLI can be performed with a terminal emulation through an Ethernet connection.
Information about using the CLI is in the FibreCAT SX Manager Command Line Interface
(CLI) manual.
Important Notes
This chapter contains instructions on the safe operation of your storage system as well
as information about environmental protection and site requirements.
Installing Enclosures

This chapter describes how to install FibreCAT SX enclosures in a PRIMECENTER rack or in a standard 19-inch EIA rack cabinet, how to cable a controller enclosure to expansion enclosures, and how to connect the power cords and test the connections.
Notational Conventions

- italic: commands, options, file names and path names are written in italic letters in continuous text
- fixed font
- <variable>
- semi-bold: highlights text
- quotation marks
- NOTE
- CAUTION
Technical Data
The following data applies to the FibreCAT SX60, SX80 / SX88, SX80 iSCSI and SX100 models:

- Type: 4 Gbit/s FC storage system; the SX80 iSCSI is a 1 Gbit/s iSCSI storage system
- Rack space: 2U per enclosure
- HDDs: max. 12 per enclosure
- Supported operating systems include VMware ESX 3.x (3.0, 3.0.1, 3.0.2) and SPARC Solaris 10
- RAID controllers: 2 per enclosure, hot plug and with full redundancy
- Cache: 512 MByte or 1 GByte
- Host ports per RAID controller: FC ports with SFP transceivers (Small Form-factor Pluggable) or RJ45 Ethernet ports (for iSCSI)
- Gross capacity with SATA II 7200 rpm HDDs (500 GByte, 750 GByte or 1 TByte per drive): SX60 24 TByte (24 x 1 TByte), SX80 / SX88 56 TByte (56 x 1 TByte), SX80 iSCSI 56 TByte (56 x 1 TByte), SX100 108 TByte (108 x 1 TByte)
- Gross capacity with SAS 15000 rpm HDDs (146 GByte, 300 GByte or 450 GByte per drive; available for SX80 / SX88 / SX100 only)
- Access times with SATA II and SAS HDDs
- Supported RAID levels: RAID 1 (mirrored HDDs), RAID 10, RAID 3, RAID 5, RAID 50, RAID 6
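The maximum gross-capacity figures can be cross-checked with a few lines of arithmetic; this is a reading aid, not part of the manual's specification (the enclosure count below is derived from the "max. 12 HDDs per enclosure" figure):

```python
# Sanity check of the gross-capacity figures: each quoted value is
# simply (number of drives) x (drive size in TByte).
def gross_capacity_tbyte(num_drives: int, drive_size_tbyte: float) -> float:
    return num_drives * drive_size_tbyte

assert gross_capacity_tbyte(24, 1) == 24     # SX60
assert gross_capacity_tbyte(56, 1) == 56     # SX80 / SX88 / SX80 iSCSI
assert gross_capacity_tbyte(108, 1) == 108   # SX100

# 108 drives at max. 12 HDDs per enclosure implies 9 enclosures
# (one controller enclosure plus expansion enclosures).
assert 108 // 12 == 9
```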
Management

- Diagnostics of non-data characteristics: signalling and monitoring via the SES (SCSI Enclosure Services) protocol and LEDs
- Management interfaces: RS-232 (mini-DB9), 10/100 Ethernet (RJ45)
- Supported protocol: SNMP
- Administration: FibreCAT SX Manager (FSM) with web-based interface (WBI) and command-line interface (CLI)

Table 5: Management
Options

The options table lists, per model (SX60, SX80 / SX88, SX80 iSCSI, SX100), the supported HDDs, the RAID controllers, and the maximum number of expansion enclosures per controller enclosure.

Table 6: Options
Electrical data (including apparent power and power factor):

- Continuous load: 535 W
- Peak load: 650 W
- Rated voltage: 115 V to 240 V
- Rated frequency: 50 Hz to 60 Hz
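As a rough worked example based on the rated values above (an assumption: the peak-load figure is treated as real power, and the power factor, whose value is not reproduced here, is ignored):

```python
# Approximate current draw at the rated voltage limits, assuming the
# peak load (650 W) is drawn as real power and ignoring power factor.
def current_amps(power_w: float, voltage_v: float) -> float:
    return power_w / voltage_v

assert round(current_amps(650, 115), 1) == 5.7  # peak load at 115 V
assert round(current_amps(650, 240), 1) == 2.7  # peak load at 240 V
```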
Further specifications cover heat dissipation (1926 kJ/h), vibrations, immunity, CE certification and environmental compliance.
2 Important Notes
2.1 Notes on Safety
In this section you will find information that you must observe when using the storage system.
This device complies with the relevant safety standards for IT equipment.
The following safety notes are also provided in the Safety manual. Also pay
attention to the notes in the operating manual of the connected system.
If you have any questions relating to setting up and operating your system in the
environment where you intend to use it, please contact your sales outlet or our customer
service team.
CAUTION!
If the device is brought in from a cold environment, condensation may form both
inside and on the outside of the machine.
Wait until the device has acclimatized to room temperature and is absolutely dry
before starting it up. Material damage may be caused to the device if this
requirement is not observed.
Check that the rated voltage specified on the type label is the same as the local
line voltage.
CAUTION!
- The device must only be connected to a properly grounded wall outlet (the device is fitted with a tested and approved power cable).
- Make sure that the power sockets on the device and the protective grounded outlets of the building's wiring system are freely accessible.
- Switching off the device does not cut off the supply of power. To do this you must remove the power plugs.
- Before opening the unit, switch off the device and then pull out the power plugs.
- Route the cables in such a way that they do not form a potential hazard (make sure no-one can trip over them) and that they cannot be damaged. When connecting up a device, refer to the relevant notes in this manual.
- The servers and the directly connected external storage subsystems should be connected to the same power supply distributor. Otherwise you run the risk of losing data if, for example, the central processing unit is still running but the storage subsystem has failed during a power failure.
- Make sure that no objects (such as bracelets or paper clips) fall into the device and that no liquids spill into it (risk of electric shock or short circuit).
- Note that proper operation of the system (in accordance with IEC 60950 / DIN EN 60950) is guaranteed only if slot covers are installed on all vacant slots and/or dummies on all vacant bays and the housing cover is fitted (cooling, fire protection, RFI suppression).
You must follow the instructions below when handling modules containing electrostatic-sensitive components:

- Discharge static electricity from your body (for example by touching a grounded metal object) before handling modules containing electrostatic-sensitive components.
- The equipment and tools you use must be free of static charge.
- Remove the power plug before installing or removing modules containing electrostatic-sensitive components.
- Use a grounding strap designed for the purpose to connect you to the system unit as you install the modules.

An exhaustive description of the handling of modules containing electrostatic-sensitive components can be found in the relevant European and international standards (DIN EN 61340-5-1, ANSI/ESD S20.20).
2.3 CE Certificate

The shipped version of this device complies with the requirements of the EEC directives 89/336/EEC "Electromagnetic compatibility" and 73/23/EEC "Low voltage directive". The device therefore qualifies for the CE certificate (CE = Communauté Européenne).
- For safety reasons, at least two people are required to install the rack-mounted model because of its weight and size.
- When connecting and disconnecting cables, observe the notes in the operating manual of your system and the comments in the "Important Notes" chapter of the technical manual supplied with the rack.
- Ensure that the anti-tilt bracket is correctly mounted when you set up the rack.
- For safety reasons, no more than one unit may be withdrawn from the rack at any one time during installation and maintenance work. If more than one unit is withdrawn at a time, there is a danger that the rack will tilt forward.
- The power supply to the rack must be installed by an authorized specialist (electrician).
Environmental Protection
For details on returning and reuse of devices and consumables within Europe, refer to the
Returning used devices manual, or contact your Fujitsu branch office/subsidiary or our
recycling centre in Paderborn:
Fujitsu Technology Solutions GmbH
Recycling Center
D-33106 Paderborn
Tel.
Fax
Site Requirements
Dimension and Weight Specifications (rackmount):

- Height: 2U (8.76 cm)
- Width: 44.6 cm / 48.0 cm
- Depth (chassis): 55.37 cm / 57.12 cm
- Weight with SAS drives: 33.1 kg / 30.8 kg
- Weight with SATA drives: 33.6 kg / 31.3 kg
2.8.1.2 Weight and Placement Guidelines
Ideally, use two people to lift an enclosure. However, one person can safely lift an
enclosure if its weight is reduced by removing the power and cooling modules and drive
modules.
Do not place enclosures in a vertical position. Always install and operate the enclosures
in a horizontal orientation.
When installing enclosures in a rack, make sure that any surfaces over which you might
move the rack can support the weight. To prevent accidents when moving equipment,
especially on sloped loading docks and up ramps to raised floors, ensure you have a
sufficient number of helpers. Remove obstacles such as cables and other objects from
the floor.
To prevent the rack from tipping and to minimize personnel injury in the event of a
seismic occurrence, securely anchor the rack to a wall or other rigid structure that is
attached to both the floor and to the ceiling of the room.
2.8.1.3 Ventilation Requirements
As you prepare for installation, follow these requirements:
Do not block or cover ventilation openings at the front and rear of an enclosure. Never
place an enclosure near a radiator or heating vent. Failure to follow these guidelines can
cause overheating and affect the reliability and warranty of the product.
Leave enough space in front and in back of an enclosure to allow access to enclosure
components for servicing. Removing a component requires a clearance of at least
137 cm in front of and behind the enclosure.
The environmental requirements specify ranges for temperature (5 °C to 40 °C, non-condensing), altitude, shock and vibration.
2.8.3.1 Electrical Guidelines
Each enclosure is shipped with two AC power cords that are appropriate for use in a typical
outlet in the destination country. Each power cord should connect one of the power and
cooling modules to an independent, external power source. To ensure power redundancy,
connect the two power cords to two separate circuits; for example, to one commercial circuit and one uninterruptible power source (UPS).

The safety status of the I/O connections complies with Separated Extra Low Voltage (SELV) requirements.
2.8.3.2 Site Wiring and Power Requirements
The enclosures work with single-phase power systems having an earth ground
connection. To reduce the risk of electric shock, do not plug an enclosure into any other
type of power system. Contact your facilities manager or a qualified electrician if you are
not sure what type of power is supplied to your building.
Enclosures are shipped with a grounding-type (three-wire) power cord. To reduce the
risk of electric shock, always plug the cord into a grounded power outlet.
Do not use household extension cords with the enclosures. Not all power cords have
the same current ratings. Household extension cords do not have overload protection
and are not meant for use with computer systems.
All AC mains and supply conductors to power distribution boxes for the rack-mounted
system must be enclosed in a metal conduit or raceway when specified by local,
national, or other applicable government codes and regulations.
Ensure that the voltage and frequency of your power source match the voltage and frequency inscribed on the equipment's electrical rating label.
To ensure redundancy, provide two separate power sources for the enclosures. These
power sources must be independent of each other, and each must be controlled by a
separate circuit breaker at the power distribution point.
The system requires voltages with minimum fluctuation. The customer-supplied facility voltage must not fluctuate by more than 5 percent. The customer facilities must also provide suitable surge protection.
Site wiring must include an earth ground connection to the AC power source. The
supply conductors and power distribution boxes (or equivalent metal enclosure) must
be grounded at both ends.
Power circuits and associated circuit breakers must provide sufficient power and
overload protection. To prevent possible damage to the AC power distribution boxes and
other components in the rack, use an external, independent power source that is
isolated from large switching loads (such as air conditioning motors, elevator motors,
and factory loads).
2.8.3.3 Cabling Requirements
As you prepare for installation, follow these requirements:
Keep power and interface cables clear of foot traffic. Route cables in locations that
protect the cables from damage.
Route interface cables away from motors and other sources of magnetic or radio
frequency interference.
NOTE
You must use Ethernet cable designated CAT-5 or higher to connect a controller
enclosure to an Ethernet network.
3.1.1 Components

The enclosure components are listed with their quantities (for example, 1 or 2 controller modules and 2 power and cooling modules per enclosure, plus the SFPs). Air management system drive blanks or I/O blanks must fill empty slots to maintain optimum airflow through the chassis. The SFPs are part of the controller modules and must not be removed.
Controller Enclosure

Front-side LEDs:

- Enclosure ID (green)
- OK to Remove (drive module, blue): On — the drive module has been removed from any active virtual disk, spun down, and prepared for removal.
- Power/Activity/Fault (drive module, green): On — no fault; Off — the drive module has experienced a fault, has failed, or is a member of a critical vdisk.
- Fault/Service Required: Off — not active; On — an enclosure-level fault has occurred and service action is required; the event has been acknowledged but the problem still needs attention.
Back-side LEDs of the controller module include FRU OK (green) and Temperature Fault (yellow).

Figure 2: FibreCAT SX60 / SX80 / SX88 Controller Enclosure Ports (FC) and Power Switch (callouts: FC ports, CLI port, expansion port)
The following figure shows the ports (location: controller module) and the power switch
(location: power and cooling module) at the back of the FibreCAT SX80 iSCSI controller
enclosure equipped with two iSCSI controllers. The second (lower) controller is optional.
You will use the ports and the switch during the installation procedure.
Figure 3: FibreCAT SX80 iSCSI Controller Enclosure Ports (iSCSI) and Power Switch (callouts: power switch, Ethernet ports, CLI port, expansion port)
The following figure shows the ports (location: controller module) and the power switch (location: power and cooling module) at the back of the FibreCAT SX100 controller enclosure equipped with two FC RAID controllers. The second (lower) controller is optional. You will use the ports and the switch during the installation procedure.
Figure 4: FibreCAT SX100 Controller Enclosure Ports (FC) and Power Switch (callouts: power switch, FC ports, CLI port, expansion ports)
Port/Switch — Description:

- Power switch
- FC/Ethernet ports: 4-Gbps FC or 1-Gbps iSCSI ports used to connect to data hosts. Each FC port contains an SFP(1) transceiver. Host ports 0 and 1 connect to host channels 0 and 1, respectively. The FibreCAT SX100 model has four host ports.
- CLI port: Micro-DB9 port used to connect the controller module to a local management host using RS-232 communication for out-of-band configuration and management.
- Ethernet management port
- Expansion port: 3-Gbps, 4-lane (12 Gbps total) table-routed egress port used to connect SAS expansion enclosures.

Table 17: Controller Enclosure Ports and Switches (Back)

(1) The SFPs are part of the controller modules and must not be removed (SFP = Small Form-factor Pluggable).
The back-side LEDs of the controller module are: Unit Locator, OK to Remove, Cache Status, Host Activity, FRU OK, Fault/Service Required, Ethernet Activity and Ethernet Link Status. The power and cooling module provides the LEDs AC Power Good (green) and DC Voltage/Fan Fault/Service Required (yellow).

Table 18: Controller Enclosure LEDs (Back, Power and Cooling Module)
The LEDs (location: controller module) signal their states by color and on/off status; for example, the Unit Locator LED (white) is off when not active, Fault/Service Required (yellow) is on when service is needed, and FRU OK (green) is on during normal operation.
Expansion Enclosure

3.2.1 Components

The expansion enclosure components are listed with their quantities (for example, 1 or 2 expansion modules and 2 power and cooling modules per enclosure). Air management system drive blanks or I/O blanks must fill empty slots to maintain optimum airflow through the chassis.
Port/Switch — Description:

- Power switch
- SAS In port: 3-Gbps, 4-lane (12 Gbps total) subtractive ingress port used to connect to a controller enclosure.
- SAS Out port: 3-Gbps, 4-lane (12 Gbps total) table-routed egress port used to connect to another expansion enclosure.
Expansion Enclosure
Unit Locator
FRU OK
OK to Remove
Fault/Service Required
AC Power Good
Green Off
DC Voltage/Fan Fault/
Service Required
Yellow Off
On
On
Table 22: Expansion Enclosure LEDs (Back, Power and Cooling Module)
LED (Location:
Expansion Module)
Green Off
On
Green Off
On
35
LEDs (location: expansion module): the Unit Locator LED (white) is off when not active, OK to Remove is not implemented, Fault/Service Required (yellow) is on when service is needed, and FRU OK (green) is on during normal operation.
NOTE
This section applies to Microsoft Windows hosts only.
Installing the SCSI Enclosure Services (SES) driver prevents Microsoft Windows hosts from
displaying the Found New Hardware Wizard when the storage system is discovered.
1. In a web browser, go to
http://support.ts.fujitsu.com/com/support/downloads.html and download the SCSI
enclosure device driver for FibreCAT SX to a network location that the data host can
access.
2. Extract the package contents to a temporary folder on the host.
3. In that folder, double-click Setup.exe to install the driver.
4. Click Finish.
The driver is installed.
The FibreCAT SX80 iSCSI Storage System was designed as a fully redundant dual controller storage array for iSCSI systems. The storage system alternately assigns virtual disks between the two controllers to support automatic load balancing. It is currently also available as a non-redundant single controller storage system as an entry-level option.

When the FibreCAT SX80 iSCSI Storage System has a single controller, every other virtual disk created will be owned by the non-existent second controller, regardless of the ownership specified when creating the virtual disk; any storage assigned to the missing controller is treated as "failed over" to the existing controller. This choice allows for a transparent upgrade to a dual controller system in the future. In order for the virtual disks assigned to the non-existent controller to be visible, the host needs to establish logins to both iSCSI targets. The IP addresses of the non-existent controller (target) need to be configured in the storage system. The connected hosts then need to establish iSCSI sessions (logins) to one or both targets to access the volumes on those virtual disks.
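The alternating-ownership rule can be sketched in a few lines (an illustration only; numbering vdisks by 1-based creation order is an assumption consistent with the six-vdisk example in this section):

```python
# Alternating vdisk ownership on a FibreCAT SX80 iSCSI system:
# vdisks are assigned alternately to controllers A and B, so with a
# single controller every second vdisk belongs to the absent B.
def owning_controller(creation_index: int) -> str:
    """creation_index is 1-based: the first vdisk created is 1."""
    return "A" if creation_index % 2 == 1 else "B"

owners = [owning_controller(i) for i in range(1, 7)]
assert owners == ["A", "B", "A", "B", "A", "B"]
# i.e. the 2nd, 4th and 6th vdisks belong to the absent controller B.
```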
For example, if you create 6 virtual disks, then the second, fourth and sixth virtual disks would be owned by the non-existent second controller. To present volumes created on these virtual disks to a host, you must assign IP addresses to the same ports on the non-existent controller as the ports that are used on the existing controller. The address assigned to a port of the non-existent controller must be in the same network as the same port of the existing controller. For example, if controller A has the following configuration:
Controller A:
- Port/Channel 0: IP 192.168.144.1, netmask 255.255.255.0
- Port/Channel 1: IP 10.0.0.1, netmask 255.0.0.0
You should then set the IP addresses of the ports on the controller that is not installed to reserve addresses in the same subnets, so the hosts can see the volumes through the A controller, as shown below:

Controller B (currently not installed):
- Port/Channel 0: IP 192.168.144.18, netmask 255.255.255.0
- Port/Channel 1: IP 10.0.0.22, netmask 255.0.0.0
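The same-subnet rule can be verified programmatically; a minimal sketch using Python's standard ipaddress module and the example addresses from the tables above:

```python
# Check that each planned controller-B address lies in the same
# subnet as the corresponding controller-A port.
import ipaddress

def same_subnet(ip_a: str, ip_b: str, netmask: str) -> bool:
    net_a = ipaddress.ip_network(f"{ip_a}/{netmask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{netmask}", strict=False)
    return net_a == net_b

# Port 0: 192.168.144.1 vs 192.168.144.18, netmask 255.255.255.0
assert same_subnet("192.168.144.1", "192.168.144.18", "255.255.255.0")
# Port 1: 10.0.0.1 vs 10.0.0.22, netmask 255.0.0.0
assert same_subnet("10.0.0.1", "10.0.0.22", "255.0.0.0")
```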
Finally, include these addresses in the iSCSI initiator configuration the same as you would
the IP addresses of the existing controller.
If you follow this process, all of the volumes created on the FibreCAT SX80 iSCSI Storage
System will be visible through the existing controller host ports. When you are ready to
upgrade to a dual controller system, simply install the second controller with Ethernet
cables pre-attached and all the work will be done automatically by the storage system to
move those volumes to the new controller.
Configuration Rules

Fixed speed for all ports.

NOTE
The following restriction applies to the FibreCAT SX60 / SX80 in direct attached configurations:
If your Host Interface Module (HIM) is Model 0 (or you have a mix of HIM Models 0 and 1 in a dual controller FibreCAT controller enclosure), only 2 Gbit FC speed is supported for the FibreCAT SX60 / SX80 in direct connect mode.
If both HIMs in your controller enclosure are Model 1 (or you have only a single controller FibreCAT SX and it is Model 1), up to 4 Gbit FC speed is supported for the FibreCAT SX60 / SX80 in direct host connect mode.

For the FibreCAT SX88 / SX100, up to 4 Gbit FC speed is always supported in direct host connect mode.

In switch attached mode, up to 4 Gbit FC speed is always supported for the FibreCAT SX60 / SX80 / SX88 / SX100 (no restriction with any HIM Model).
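The speed restrictions above can be summarized as a small decision function (a reading aid only, not an official support matrix; model names and HIM revisions are taken directly from the text):

```python
# Maximum FC speed per the restrictions described in this section.
def max_fc_speed_gbit(model: str, direct_attached: bool,
                      him_models: list) -> int:
    if not direct_attached:           # switch attached: no restriction
        return 4
    if model in ("SX88", "SX100"):    # always up to 4 Gbit direct
        return 4
    # SX60 / SX80 direct attached: any Model 0 HIM limits speed to 2 Gbit
    return 2 if 0 in him_models else 4

assert max_fc_speed_gbit("SX60", True, [0, 1]) == 2   # mixed HIM models
assert max_fc_speed_gbit("SX80", True, [1, 1]) == 4   # both Model 1
assert max_fc_speed_gbit("SX80", False, [0]) == 4     # switch attached
assert max_fc_speed_gbit("SX100", True, [0]) == 4     # SX100 direct
```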
If you have a direct attached configuration with a FibreCAT SX60 / SX80, you should find out the HIM Model (0 or 1) of your controller(s) via the FibreCAT SX Manager's Web-Based Interface:

1. Open the FibreCAT SX Manager's Web-Based Interface.
2. Log in as monitor or manage user.
3. In the MONITOR STATUS menu, click the link "advanced settings" (see the screenshots below). Here you can find out the HIM Model of your controller module(s):

Figure 9: Detecting the HIM Model with the FibreCAT SX Manager's WBI (example with two Model 0 HIMs)

Figure 10: Detecting the HIM Model with the FibreCAT SX Manager's WBI (example with two Model 1 HIMs)
CAUTION!
Fiber optic cables are fragile. Do not bend, twist, fold, pinch, or step on the fiber
optic cables. Doing so can degrade performance or cause data loss.
3. Connect the other end of each fiber optic cable to the HBAs as shown in the following
figures.
4.5.1 FibreCAT SX60 / SX80 / SX88 With One Dual Port Host

The cabling examples show a high-availability controller and path failover configuration. This configuration requires the host port interconnect circuitry between the controller modules to be set to Interconnected, as described in Configuring Controller Enclosure Host Ports (FC) on page 78. For path failover this configuration requires host-based multipathing software.

The controller enclosure is equipped with two FC RAID controllers, and the host has two FC HBAs.
Configuration Rules

- Port interconnects: interconnected by FibreCAT SX Manager
- Restrictions: not for VMware
- FC topology: Arbitrated Loop (FC-AL)
- FC speed: max. 4 Gbit/s (note Host Interface Speed for FibreCAT SX (FC) on page 39)
- Host port interconnect: Interconnected
- Path-failover software: FTS DDM V5; native MPIO (DSM); FTS Multipath V5; native DM-MP (RedHat, SuSE)
4.5.2 FibreCAT SX60 / SX80 / SX88 With Two Dual Port Hosts

The following figure shows the preferred high-availability controller and path redundant configuration. This configuration requires that the host port interconnects are set to Interconnected, as described in Configuring Controller Enclosure Host Ports (FC) on page 78. This configuration also requires host-based multipathing software. For failover behavior, see Configurations on page 79.

Figure 12: High-Availability, Dual-Controller, Direct Attached Connection to Dual Data Hosts for Windows and Linux (no VMware support; callouts: A + B LUNs)
Configuration Rules

- Restrictions: not for VMware
- FC topology: Arbitrated Loop (FC-AL)
- FC speed: max. 4 Gbit/s (note Host Interface Speed for FibreCAT SX (FC) on page 39)
- Host port interconnect: Interconnected
- Path-failover software: FTS DDM V5; native MPIO (DSM); FTS Multipath V5; native DM-MP (RedHat, SuSE)
4.5.2.1

Figure 13: Direct Attached FibreCAT SX (Controller Failover Scenario; callouts: A + B LUNs, FAILED)
4.5.2.2

In a path-failover scenario, host A sees its B LUNs via the host port interconnect line. The filter driver (multipathing software) directs the B-LUN I/Os across the other HBA.
4.5.3 FibreCAT SX60 / SX80 / SX88 With Two Dual Port Hosts for High Performance

The following figure shows a non-redundant path-failover configuration that can be used when high performance is more important than high availability. This configuration requires the host port interconnect circuitry to be set to Straight-through, which it is by default.

Figure 15: High-Performance, Dual-Controller, Direct Attached Connection to Dual Data Hosts (no Solaris support; callouts: A LUNs, B LUNs)
Configuration Rules cover restrictions, FC topology, FC speed (note Host Interface Speed for FibreCAT SX (FC) on page 39), host port interconnect and path-failover software.
This figure shows the optimized connectivity for a high-availability, dual-controller, direct
attached connection to four dual-port data hosts. To ensure failover fault tolerance, each
dual port host must be connected to each controller. Connecting one host port to Port 0 or
Port 1 and the other to Port 2 or Port 3 on the other controller provides the most efficient
internal path balancing in each controller.
Configuration Rules

- Usage: clustered servers, direct connect to storage, classroom applications, classroom file server
- Advantage: benefits of the fault tolerance of dual controllers and clustered servers
- Example business: educational facility; small tech business
- Topology: loop mode
- HBA settings: fixed speed 2/4 Gbit; link timeout 60 sec; node timeout 60 sec; queue depth (see release notes)
- Software: MPIO software
- Storage settings: host port interconnect Interconnected
In this configuration, all hosts have redundant connections to volumes that are associated with each of the controllers. If a controller fails, the hosts maintain access to all of the volumes through the host ports on the surviving controller.
4.5.4.1

Figure 17: Four Dual-Port Data Hosts with FibreCAT SX100 and Failed Controller
(Figure callouts: A & B volumes; A volumes, A0 IP; A volumes, A1 IP; B volumes, B0 IP; B volumes, B1 IP)
A dual-controller FibreCAT SX80 iSCSI storage system uses port 0 of each controller as
one failover pair and port 1 of each controller as a second failover pair. If one controller fails,
all mapped volumes remain visible to all hosts. Dual IP-address technology is used in the
failed over state, and is largely transparent to the host system. However, for complete fault
tolerance, host-based path failover software is recommended.
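The failover pairing described above can be written out explicitly (an illustration only, using the port and controller names from the text):

```python
# Dual-controller SX80 iSCSI failover pairs: port 0 of controller A
# pairs with port 0 of controller B, and likewise for port 1.
failover_pairs = {port: (("A", port), ("B", port)) for port in (0, 1)}

assert failover_pairs[0] == (("A", 0), ("B", 0))
assert failover_pairs[1] == (("A", 1), ("B", 1))
```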
Configuration Rules
Fixed speed for all ports
4.6.1.1

(Figure callouts: A & B volumes)
Single controller mode is supported in both direct-attached and switch-attached configurations. In single controller mode the system is set to a failed-over state to support a future upgrade to a dual-controller system. When configuring a single controller system you will need to assign a ghost host IP configuration to the system. This ensures that when an additional controller is inserted, no additional configuration is required.
NOTE
Ensure that when configuring a single controller system you assign a host port IP address to both the active controller and the absent controller B.
4.7.1 FibreCAT SX60 / SX80 / SX88 With One Switch and One Dual Port Host
1. Locate the host ports at the back of the controller enclosure.
2. Connect a fiber optic cable to each host port on controller A and controller B.
V CAUTION!
Fiber optic cables are fragile. Do not bend, twist, fold, pinch, or step on the fiber
optic cables. Doing so can degrade performance or cause data loss.
3. Connect the other end of each cable to a switch port. Refer to Figure 20 for configuration options.
4. Using fiber optic cables connect the switch to two FC HBA ports as shown in the
following figures.
The following figure shows a redundant connection through a single switch to a single data
host with two HBA ports. This configuration requires that host port interconnects are set to
Straight-through. It also requires host-based multipathing software.
Figure 20: Redundant Connection Through a Single Switch to a Single Data Host
NOTE
The FC switch must be configured according to port zoning rules (only one initiator
and one target per zone) and with fixed speed and topology settings.
4.7.2 FibreCAT SX60 / SX80 / SX88 With Two Switches and One Dual-Port Host
1. Locate the host ports at the back of the controller enclosure.
2. For switch A: Connect fiber optic cables to host port 0 on controller A and host port 1
on controller B. Then connect the other ends of the cables to the switch.
3. For switch B: Connect fiber optic cables to host port 1 on controller A and host port 0
on controller B. Then connect the other ends of the cables to the switch.
V CAUTION!
Fiber optic cables are fragile. Do not bend, twist, fold, pinch, or step on the fiber
optic cables. Doing so can degrade performance or cause data loss.
4. Using fiber optic cables connect each of the two switches to FC HBAs on the host as
shown in the following figure.
This configuration requires the host port interconnect circuitry between controller modules to be set to Straight-through. The cabling examples show a controller and path high-availability configuration. For path failover, this configuration requires host-based multipathing software. The controller enclosure is equipped with two FC RAID controllers and the host has two FC HBAs.
Figure 21: Redundant, High-Availability Connection Through Switches to Dual Data Hosts
NOTE
The FC switch must be configured according to port zoning rules (only one initiator
and one target per zone) and with fixed speed and topology settings.
Configuration Rules
Restrictions:
FC-Topology:
FC Speed:
Host Port Interconnect:
Max. Member in Switch Zone:
Path-Failover Software Linux:
Path-Failover Software Windows:
Path-Failover Software VMware:
Adjust the port topology on the switch: F-Port for server, L-Port for storage.
FibreCAT SX Configuration:
4.7.3 FibreCAT SX60 / SX80 / SX88 With Two Switches and Two Dual-Port Hosts
For high availability, two data hosts can be connected through two switches to a dual-controller storage system. The controller host port interconnects must be set to Straight-through, and host-based multipathing software is not required.
Figure 22: Switch Attached Configuration with Two Switches and Two Hosts
NOTE
The FC switch must be configured according to port zoning rules (only one initiator
and one target per zone) and with fixed speed and topology settings.
Configuration Rules
Restrictions:
FC-Topology:
FC Speed:
Host Port Interconnect:
Max. Member in Switch Zone:
Path-Failover Software Linux:
Path-Failover Software Windows:
Path-Failover Software VMware:
4.7.4 FibreCAT SX100 With One Switch and Two Dual-Port Hosts
Figure 23: Two Dual-Port Data Hosts with FibreCAT SX100 and a Switch for High Availability
NOTE
The FC switch must be configured according to port zoning rules (only one initiator
and one target per zone) and with fixed speed and topology settings.
Configuration Rules
FC-Topology:
FC Speed:
Host Port Interconnect:
Max. Member in Switch Zone:
Path-Failover Software Linux:
Path-Failover Software Windows:
Path-Failover Software VMware:
5 Installing Enclosures
This chapter describes the process of installing FibreCAT SX enclosures in a
PRIMECENTER rack or in a standard 19-inch EIA rack cabinet.
The installation procedures in this chapter require the following items:
#2 Phillips screwdriver
Standard screwdriver
Allen wrench (provided; used with 6-mm screws and #12-24 x 3/8-inch socket-head screws)
V CAUTION!
Be careful if using a power tool; it could strip or damage connections.
V CAUTION!
Electrostatic discharge can damage sensitive components. Be sure you are properly grounded before touching a static-sensitive component or assembly.
Ensure that the voltage and frequency of your power source match the voltage and
frequency inscribed on the equipment's electrical rating label.
Never push objects of any kind through openings in the equipment. Dangerous voltages
may be present. Conductive foreign objects could produce a short circuit that could
cause fire, electric shock, or damage to your equipment.
V CAUTION!
Two people are needed to lift and move the enclosure. Use care to avoid injury. An
enclosure with all drives installed can weigh 33.6 kg. Do not lift the front of the
enclosure; this can cause damage to the drives.
1. Unpack the enclosure.
2. Check the contents of the enclosure and box for the items listed in the following table.
Item                                            Quantity
Enclosure (controller enclosure                 as ordered
or expansion enclosure)
SFP transceivers                                2 per enclosure
Rackmount kit                                   1 per enclosure
Safety manual
The SFPs are part of the controller modules and must not be removed.
Legend
*: height units (U) counted starting from the bottom line of the device
**: "rear left" and "rear right": seen from the rear side of the rack
"front left" and "front right": seen from the front side of the rack
: M5 square cage nut
: M5 screw with centering washer (mounting spring)
Figure 25: Rack Post Mounting Positions of Two enclosures
2. Place the right rail (support angle down) into the rack from the front, putting the positioning tappet into the appropriate hole of the right rear rack post.
3. Compress the spring-mounted rail to the required length and screw the front rail end onto the right front rack post as shown below.
Figure 28: Mounting the Right Sliding Rail (Front Side of the Rack)
Figure 29: Mounting the Right Sliding Rail (Rear Side of the Rack)
In PRIMECENTER racks, put the positioning tappet of the left rail into the appropriate hole of the support bracket which you have mounted on the left rear rack post.
Screw the front rail end onto the left front rack post.
Figure 30: Mounting the Left Sliding Rail (Rear Side of the Rack)
6. Mount two square cage nuts to the front rack posts (one to the right and one to the left post), into the post holes between the rail screws. See the figure below for the position of a nut. The nuts must be inserted from inside the rack and lock in place in the square post holes.
Figure 31: Mounting Position of a Cage Nut Between the Rail Screws
7. From the front of the rack, place the enclosure on the support angles of the sliding rails and push it into the rack to the back stop.
8. Screw the enclosure to both cage nuts mounted on the front rack posts, as shown below for the right side of the enclosure.
Figure 33: Fault-Tolerant Cabling Connections Between Controller and Expansion Enclosures
(left example: FibreCAT SX60 / SX80 / SX88, right example: FibreCAT SX80 / SX88)
Figure 34: Non-Fault-Tolerant Cabling Connections Between Controller and Expansion Enclosures
(example: FibreCAT SX80 / SX88)
Figure 35: Cabling Connections Between a FibreCAT SX100 Controller and 1 / 8 Expansion Enclosure(s)
NOTE
Redundant cabling is not supported for FibreCAT SX100.
Do not power on the system until you complete the procedures in this chapter. The
power-on sequence is described below in Testing the Enclosure Connections on
page 72.
The FibreCAT SX Manager's command-line interface (CLI), embedded in each controller module, enables you to access the module using RS-232 communication and terminal emulation software. Use this interface to set the IP address for each controller. For detailed information about the CLI, see the FibreCAT SX Series Command Line Interface (CLI) manual.
To set the IP address for each controller, perform the following steps:
1. From your network administrator obtain an IP address, subnet mask, and gateway
address for each controller.
2. Use the provided micro-DB9 serial cable to connect controller A to a serial port on a
host computer.
FibreCAT SX Series Operating Manual
Start HyperTerminal:
Start -> All programs -> Accessories -> Communications -> HyperTerminal
Use the information in the following table for the HyperTerminal settings you are asked for:

Setting         Value
Baud rate       115,200
Data bits       8
Parity          None
Stop bits       1
Flow control    None
3. Start and configure a terminal emulator, using the following display settings and
communication settings:
Display setting    Value
Font               None
Columns            80 (standard in HyperTerminal)
If HyperTerminal is not part of your Windows installation, install it from your Windows data medium:
Start > Control Panel > Add or remove programs > Add/Remove Windows Components > Accessories and Utilities > Details
> Communications > Details > HyperTerminal (check box) > OK
Communication setting    Value
Baud rate                115,200
Data bits                8
Stop bits                1
Parity                   None
Flow control             None
Connector                COM1 (typically)
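Once the terminal session is established, the IP parameters obtained from your network administrator can be applied at the CLI prompt. The following transcript is an illustrative sketch only: the addresses are placeholders, and the exact command names and arguments depend on your firmware release; verify them in the FibreCAT SX Series Command Line Interface (CLI) manual.

```
# illustrative session; all addresses are placeholders
set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a
set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b
show network-parameters
```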
Because the WBI uses popup windows to indicate the progress of user-requested
tasks, disable any browser features or tools that block popup windows.
To optimize performance, set your browser to never check for newer versions of stored
pages: that is, to use cached pages.
To optimize display, use a color monitor and set its color quality to the highest level.
For Internet Explorer, to ensure you can navigate beyond the WBI login page, set the
local-intranet security option to medium or medium-low.
Driver Settings
Setting IP addresses for each iSCSI host port (called a target portal) located on the
storage system.
Logging on to iSCSI host ports on each controller module (called a target) from the data
host to initiate connectivity between the data host and the storage system.
1. Double-click the Microsoft iSCSI Software Initiator icon located on the desktop of the
host system.
2. In the Target Portals area of the Discovery tab, click Add.
3. Enter the IP address of an iSCSI host port on your storage system, leave the Port field
set at 3260, and click Add.
4. Repeat Step 2 and Step 3, adding IP addresses for the remaining iSCSI host ports on
the storage system.
IP addresses for storage system host ports (targets) are identified on the data host.
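The portal discovery performed in the steps above can also be scripted with the iscsicli utility that ships with the Microsoft iSCSI Software Initiator. This is an illustrative sketch: the IP address is a placeholder, and available options vary with the initiator version.

```
rem illustrative commands; 192.168.10.20 stands in for an iSCSI host-port IP
iscsicli AddTargetPortal 192.168.10.20 3260
rem verify that the targets (.a and .b) were discovered
iscsicli ListTargets
```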
Configuring a System for the First Time Configuring the Microsoft iSCSI Software Initiator (iSCSI)
5. On the Targets tab, verify that two targets have been configured (.a and .b).
If two targets are not configured, one or more of the following issues may need to be
resolved:
Controller enclosure host port addresses may not be set correctly on the data host.
Cables between the controller enclosure and/or switches and/or data hosts may not
be connected correctly.
Correct the issue, return to the Targets tab and click Refresh.
6. If two targets are configured, select the first target (controller module) and click Log On.
7. On the Log On to Target dialog, set the following options:
a) For connectivity settings to persist across system reboots, check Automatically
Restore this Connection When the System Boots.
b) For fault-tolerant configurations, select Enable Multi-path.
c) Click Advanced to set connectivity settings as follows:
At the Local Adapter field, select Microsoft iSCSI Initiator from the drop-down menu.
At the Source IP field, select the IP address for the local data Ethernet port that
is on the same subnet as the first target portal (iSCSI host port) to which you
want the host to connect.
At the Target Portal field, select the IP address for the iSCSI host port on the
target (controller module) to which you are connecting.
Repeat the log on procedure (Step i through Step iii) to initiate connectivity for
the second target portal on the selected target.
8. To allow LUN access through all available ports during failover, change default multipathing settings as follows:
a) On the Targets tab, select the target and click Details.
b) On the Devices tab of the Target Properties dialog, select the first device and click
Advanced.
c) On the MPIO tab of the Device Details dialog, select Round Robin from the Load
Balance Policy drop-down menu and click OK.
d) Repeat Step b and Step c for all devices listed.
9. Repeat tasks in Step 6, Step 7, and Step 8 for the second target.
10. On the Persistent Targets tab, verify that two entries appear for each controller (.a and
.b) for a total of four connections.
Configuring more than one session per controller port will use additional host interface
resources and may cause failover to function improperly.
If two persistent targets are not configured for each controller host port, complete the
following steps to remove and reconfigure targets:
a) Select each entry and click Remove.
b) Log off for each connection by selecting Targets > Details > Sessions > Log Off.
c) Verify that IP addresses were set correctly. If not, correct IP address settings.
d) Log on again for each target using the instructions in this section, starting at Step 2.
The data host can now communicate with the controllers through iSCSI Ethernet host
ports.
CAUTION!
Use caution when editing the Windows registry. Editing the wrong entry or setting an incorrect value can introduce errors that cause the system to malfunction. Create a registry backup before following the instructions in this section.
Install Native MPIO Functionality Under Windows 2008 for FibreCAT SX
RAID 5, in which parity is distributed across all disk drives in the virtual disk
Volume Mappings
One volume per virtual disk, where the volume is not visible to data hosts
Volume mapping changes take effect immediately. Make changes that limit access
to volumes when the volumes are not in use. Be sure to unmount a mapped volume
from a host system before changing the mapping's LUN.
For each virtual disk, the virtual disk panel shows a status icon; the name, RAID level,
size, number of disk drives, and number of volumes; and utility status, if any.
2. Select a virtual disk.
The selected virtual disk's volume names, sizes, default LUNs, and types are displayed.
3. Select a volume.
The Current Host-Volume Relationships panel shows which data-host ports have
access to the selected volume. For the selected volume you might see the following
mappings:
All Hosts - Shows the settings used by all data-host ports to access the volume. This
entry is displayed only if no specific ports are mapped. If a specific port is mapped,
All Hosts changes to All Other Hosts.
WWN value (FC) or IQN value (iSCSI) - Shows the settings used by a data-host port
to access the volume.
All Other Hosts - Shows the access settings used by all data-host ports except by
specifically mapped ports. This entry is displayed only if specific ports are mapped.
If no specific port is mapped, All Other Hosts changes to All Hosts.
For each entry, the port identifier, the assigned LUN, and each controller host port's access privilege are shown. The access privilege for a controller host port can be read-write, read-only, or none (no access). A mapping cannot include both read-write and read-only access.
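The precedence between a port-specific mapping and the All (Other) Hosts entry described above can be sketched as follows; the function and dictionary names are illustrative, not part of the product's interface.

```python
# Sketch of mapping resolution: a mapping keyed by a data-host port's
# WWN (FC) or IQN (iSCSI) overrides the "All (Other) Hosts" default.
DEFAULT = "all-other-hosts"

def effective_access(mappings, port_id):
    """Return 'read-write', 'read-only', or 'none' for the given port.

    mappings maps a port identifier (or DEFAULT) to an access privilege.
    """
    if port_id in mappings:
        return mappings[port_id]
    return mappings.get(DEFAULT, "none")

# One port mapped read-write; every other port falls back to read-only.
maps = {"21:00:00:e0:8b:05:05:04": "read-write", DEFAULT: "read-only"}
```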
4. To add or change a mapping:
In the Assign Host Access Privileges panel, select a host port identifier or All Other
Hosts.
4. Press the power switches at the back of each expansion enclosure to the On position. While enclosures power up, their LEDs turn on and off intermittently. After the LEDs stop blinking, if no LEDs on the front and back of the enclosure are yellow, the power-on sequence is complete and no faults have been detected.
5. Press the power switches at the back of the controller enclosure to the On position.
If the enclosures' power-on sequence succeeded as described in Step 4, the system is ready to use.
block
A single sector on a disk. The smallest unit of data stored (written) to or
retrieved (read) from a disk.
broadcast write
Technology that provides simultaneous caching of write data to both RAID controllers' cache memory with positive direct memory access acknowledgement (certified DMA).
cache
A high speed memory or storage device used to reduce the effective time
required to read data from or write data to a lower speed memory or device.
Read cache holds data in anticipation that it will be requested by a client. Write
cache holds data written by a client until it can be safely stored on more
permanent storage media such as disk or tape. (SNIA)
See also write-back cache, write-through cache.
capacitor pack
The controller module component that provides backup power to transfer
unwritten data from cache to Compact Flash memory in the event of a power
failure. Storing the data in Compact Flash provides unlimited backup time. The
unwritten data can be committed to the disk drives when power is restored.
CAPI
FibreCAT SX Configuration API.
channel
A physical path used for the transfer of data and control information between
storage devices and a RAID controller or a host; or, a SCSI bus in a controller
module.
chassis
An enclosure's metal housing.
chunk size
The number of contiguous blocks in a stripe on a disk drive in a virtual disk. The
number can be adjusted to improve performance. Generally, larger chunks are
more effective for sequential reads. See block.
CLI
The command-line interface that system administrators can use to configure,
monitor, and manage FibreCAT SX storage systems. The CLI is accessible from
any management host that can access a controller module through an out-of-band Ethernet or RS-232 connection.
controller
The control logic in a storage subsystem that performs command transformation and routing, aggregation (RAID, mirroring, striping, or other), high-level
error recovery, and performance optimization for multiple storage devices.
(SNIA)
A controller is also referred to as a RAID controller.
controller enclosure
An enclosure that contains disk drives and one or two controller modules.
See controller module.
controller module
A FRU that contains: a storage controller processor; a management controller
processor; out-of-band management interfaces; a LAN subsystem; cache
protected by a capacitor pack and Compact Flash memory; host, expansion,
and management ports; and midplane connectivity. If a controller enclosure
contains redundant controller modules, the upper one is designated A and the
lower one is designated B.
copy on write (COW)
A technique for maintaining a point in time copy of a collection of data by
copying only data that is modified after the instant of replicate initiation. The
original source data is used to satisfy read requests for both the source data
itself and for the unmodified portion of the point in time copy. (SNIA)
See also snap pool.
CPLD
Complex programmable logic device. A generic term for an integrated circuit
that can be programmed in a laboratory to perform complex functions.
CPU
Central processing unit. The CPU is where most calculations take place, and
the type of CPU in a controller module affects its performance capability. In
FibreCAT SX storage systems CPU is also referred to as the Storage Controller
processor or the RAID controller processor.
DAS
See direct attached storage (DAS).
data host
A host that reads/writes data to the storage system. A data host can be
connected directly to the system (direct attached storage, or DAS) or can be
connected to an external switch that supports multiple data hosts (storage area
network, or SAN).
data mirroring
Data written to one disk drive is simultaneously written to another disk drive. If
one disk fails, the other disk can be used to run the virtual disk and reconstruct
the failed disk. The primary advantage of disk mirroring is 100 percent data
redundancy: since the disk is mirrored, it does not matter if one of the disks fails;
both disks contain the same data at all times and either can act as the operational disk. The disadvantage of disk mirroring is that it is expensive because
each disk in the virtual disk is duplicated. RAID 1 and 10 use mirroring.
data striping
The storing of sequential blocks of incoming data on all the different disk drives
in a virtual disk. This method of writing data increases virtual disk throughput
because multiple disks are working simultaneously, retrieving and storing. RAID
0, 10, 3, 5 and 50 use striping.
DHCP
Dynamic Host Configuration Protocol
direct attached storage (DAS)
A dedicated storage device that connects directly to one or more servers.
(SNIA)
disk mirroring
See data mirroring.
DMA
Direct Memory Access
drive module
A FRU consisting of a disk drive and drive sled.
dynamic spare
An available disk drive that is used to replace a failed drive in a virtual disk, if
the Dynamic Spares feature is enabled and no vdisk spares or global spares are
designated.
EC
See Expander Controller (EC).
ECC
Error correcting code.
EIA
Electronic Industries Alliance, the standards body behind the 19-inch rack specification.
EMP
See enclosure management processor (EMP).
enclosure
A physical storage device that contains disk drives. If the enclosure contains
integrated RAID controllers it is known as a controller enclosure; otherwise it is
an expansion enclosure.
enclosure management processor (EMP)
A device in the enclosure from which the system can inquire about the
enclosure's environmental conditions such as temperature, power supply and
fan status, and the presence or absence of disk drives.
Expander Controller (EC)
The processor (located in the SAS expander in each controller module and
expansion module) that is primarily responsible for enclosure management and
SES.
expansion enclosure
An enclosure that contains disk drives and one or two expansion modules.
Expansion enclosures can be attached to a controller enclosure to provide
additional storage capacity. See expansion module.
expansion module
A FRU that contains: host, expansion, and management ports; an Enclosure
Management Processor; and midplane connectivity. If a system contains
redundant expansion modules, the upper one is designated A and the lower one
is designated B.
expansion enclosure
An enclosure that contains disk drives and one or two expansion modules. See
expansion module.
fabric
A Fibre Channel switch or two or more Fibre Channel switches interconnected
in such a way that data can be physically transmitted between any two N_Ports
on any of the switches. (SNIA)
fabric port (F_Port)
A port on a Fibre Channel fabric switch to which an N_Port attaches. See also fabric-loop port (FL_Port).
fabric switch
A Fabric switch functions as a routing engine that actively directs data transfer
from source to destination and arbitrates every connection. Bandwidth per node
via a Fabric switch remains constant when more nodes are added, and a node
on a switch port uses a data path of up to 100 MByte/sec to send or receive
data.
fabric-loop port (FL_Port)
An F_Port that can support an attached arbitrated loop. An FL_Port on a loop has the AL_PA hex '00', giving the fabric highest priority access to the loop. An FL_Port is the gateway to the fabric for NL_Ports on a loop.
failback
See recovery.
failover
In an active-active configuration, failover is the act of temporarily transferring
ownership of controller resources from a failed controller to a surviving
controller. The resources include virtual disks, cache data, host ID information,
and LUNs and WWNs. See also recovery.
fault tolerance
The capacity to cope with internal hardware problems without interrupting the
system's data availability, often by using backup systems brought online when
a failure is detected. Many systems provide fault tolerance by using RAID architecture to give protection against loss of data when a single disk drive fails.
Using RAID 1, 3, 5, 10, or 50 techniques, the RAID controller can reconstruct
data from a failed disk drive and write it to a spare or replacement disk drive.
fault-tolerant virtual disk
A virtual disk that provides protection of data in the event of a single disk drive
failure by employing RAID 1, 10, 3, 5, or 50.
FC
See Fibre Channel (FC).
FC-AL
See Fibre Channel-Arbitrated Loop (FC-AL).
Fibre Channel (FC)
A set of standards for a serial I/O bus capable of transferring data between two ports at up to 100 MByte/sec, with standards proposals to go to higher speeds. Fibre Channel supports point-to-point*, arbitrated loop, and switched topologies.
IP
Internet Protocol
IQN
iSCSI Qualified Name.
Format: iqn.yyyy-mm.{reversed domain name}
(e.g. iqn.2001-04.com.acme:storage.tape.sys1.xyz)
IQN addresses are the most common format. They are qualified by a date
(yyyy-mm) because domain names can expire or be acquired by another entity.
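The naming shape described above can be checked with a short regular expression. The following is a loose illustration of the convention, not a complete RFC 3720 validator.

```python
import re

# iqn.yyyy-mm.{reversed domain name}[:optional qualifier]
IQN_RE = re.compile(r"^iqn\.\d{4}-(0[1-9]|1[0-2])\.[a-z0-9]+(\.[a-z0-9-]+)+(:.+)?$")

def is_iqn(name):
    """Return True if name loosely matches the IQN shape."""
    return bool(IQN_RE.match(name))
```

For example, the glossary's sample name iqn.2001-04.com.acme:storage.tape.sys1.xyz matches, while a bare hostname does not.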
iSCSI
Internet SCSI
JBOD
Just a Bunch of Disks. An expansion enclosure that is directly attached to a
host.
KByte (KB)
Kilobyte. Equivalent to 1000 bytes for data storage and statistics, or 1024 bytes
for memory.
LAN
See local area network (LAN).
leftover drive
A disk drive that contains metadata but is no longer part of a virtual disk.
local area network (LAN)
Local Area Network. A communications infrastructure designed to use
dedicated wiring over a limited distance (typically a diameter of less than five
kilometers) to connect to a large number of intercommunicating nodes. (SNIA)
logical unit number (LUN)
The SCSI identifier of a logical unit within a target. (SNIA)
For example, a LUN identifies the mapping between a volume (logical unit) and
a host port (target).
loop address
Indicates the unique ID of a node in FC loop topology. A loop address is
sometimes referred to as a Loop ID.
loop port (L_Port)
A Loop port is capable of performing arbitrated loop functions and protocols.
NL_Ports and FL_Ports are examples of loop-capable ports. (SNIA)
loop topology
See Fibre Channel-Arbitrated Loop (FC-AL).
LUN
See logical unit number (LUN).
management controller (MC)
The processor (located in a controller module) that is primarily responsible for
human-computer interface and computer-computer interface functions, and
interacts with the storage controller.
management host
A workstation with a direct or local connection to the system and that is used to
manage the system.
management information base (MIB)
A database of managed objects accessed by network management protocols.
An SNMP MIB is a set of parameters that an SNMP management station can
query or set in the SNMP agent of a network device (for example, a router).
master volume
A volume that is enabled for snapshots. A master volume must be owned by the
same controller as the associated snap pool.
MByte
Megabyte (MB).
MC
See management controller (MC).
metadata
Data in the first sectors of a disk drive that the system uses to identify virtual
disk members.
MIB
See management information base (MIB).
Network Time Protocol (NTP)
A protocol that enables the storage system's time and date to be obtained from
a network-attached server, keeping multiple hosts and storage devices
synchronized.
point-to-point
Point-to-point* is an alternative to FC-AL topology and is required in some fabric
switch configurations. The controller enclosure supports point-to-point connections only to fabric ports (F_Ports). Loop topology is appropriate for most fabric
switches, as it provides more flexibility when considering fault-tolerant designs.
*: FC point-to-point Topology was not tested by Fujitsu and it is not recommended to use point-to-point topology. FibreCAT SX100 point-to-point
configurations are not supported.
port bypass circuit (PBC)
See host port interconnect.
port interconnect
See host port interconnect.
port WWN
See world wide port name (WWPN).
power and cooling module
A FRU that includes an AC power supply and two cooling fans. An enclosure
has two power and cooling modules for failure tolerance and can operate with
only one module.
priority
Priority enables controllers to serve other I/O requests while running jobs
(utilities) such as rebuilding virtual disks. Priority ranges from low, which uses
the controller's minimum resources, to high, which uses the controller's
maximum resources.
RAID
Redundant Array of Independent Disks, a family of techniques for managing
multiple disks to deliver desirable cost, data availability, and performance
characteristics to host environments. (SNIA)
RAID controller
See controller.
RAS
Reliability, availability, and serviceability. These headings refer to a variety of
features and initiatives all designed to maximize equipment uptime and mean
time between failures, minimize downtime and the length of time necessary to
repair failures, and eliminate or decrease single points of failure in favor of
redundancy.
rebuild
The regeneration and writing onto one or more replacement disks of all of the
user data and check data from a failed disk in a virtual disk with RAID level 1,
10, 3, 5, and 50. A rebuild can occur while applications are accessing data on
the system's virtual disks.
recovery
In an active-active configuration, recovery (also known as failback) is the act of
returning ownership of controller resources from a surviving controller to a
previously failed (but now active) controller. The resources include virtual disks,
cache data, host ID information, and LUNs and WWNs.
remote scripting CLI client
A command-line interface (CLI) that enables you to manage the system from a
remote management host. The client communicates with the management
software through a secure out-of-band interface, HTTPS, and provides the
same control and monitoring capability as the browser interface. The client must
be installed on a host that has network access to the system.
rollback
The process of resetting a volume's data to become identical to a snapshot
taken of that volume.
SAN
See Storage Area Network (SAN).
SAS
Serial Attached SCSI
SATA
Serial Advanced Technology Attachment
SC
See storage controller (SC).
SCSI
Small Computer System Interface. A collection of ANSI standards and
proposed standards which define I/O buses primarily intended for connecting
storage subsystems or devices to hosts through host bus adapters. (SNIA)
SNMP
Simple Network Management Protocol. An IETF protocol for monitoring and
managing systems and devices in a network. The data being monitored and
managed is defined by a MIB. The functions supported by the protocol are the
request and retrieval of data, the setting or writing of data, and traps that signal
the occurrence of events. (SNIA)
spare
See dynamic spare, global spare, vdisk spare.
standard volume
A volume that is not enabled for snapshots.
standby
See spare.
state
The current operational status of a disk drive, a virtual disk, or a controller. A
controller module stores the states of drives, virtual disks, and the controller in
its nonvolatile memory. This information is retained across power interruptions.
Storage Area Network (SAN)
A storage system consisting of storage elements, storage devices, computer
systems, and/or appliances, plus all control software, communicating over a
network. (SNIA)
storage controller (SC)
The processor (located in a controller module) that is primarily responsible for
RAID controller functions. The storage controller is also referred to as the RAID
controller.
storage system
One or more enclosures, referred to in a logical (as opposed to physical) sense.
strip size
See chunk size.
stripe size
The number of data disks in a virtual disk multiplied by the chunk size.
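This relationship can be sketched in a few lines of Python (an illustration only, not part of the FibreCAT software; the function and parameter names are hypothetical):

```python
def stripe_size_kb(data_disks: int, chunk_size_kb: int) -> int:
    """Stripe size = number of data disks multiplied by the chunk size.

    For RAID 5, one disk's worth of capacity per stripe holds parity,
    so a 5-disk RAID 5 virtual disk has 4 data disks per stripe.
    """
    return data_disks * chunk_size_kb

# A 5-disk RAID 5 vdisk with a 64-KByte chunk size:
# 4 data disks x 64 KByte = 256 KByte stripe size.
print(stripe_size_kb(4, 64))  # 256
```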
sub-vdisk, subvirtual disk
One of multiple RAID 1 virtual disks across which data is striped to form a RAID
10 virtual disk; or one of multiple RAID 5 virtual disks across which data is
striped to form a RAID 50 virtual disk.
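The way data is striped across RAID 1 sub-vdisks can be sketched as follows (a hypothetical illustration of RAID 10 address mapping, not the controller's actual algorithm):

```python
def raid10_location(block: int, sub_vdisks: int, chunk_blocks: int):
    """Map a logical block to (sub-vdisk index, block within that sub-vdisk).

    Data is striped chunk by chunk across the RAID 1 sub-vdisks; each
    sub-vdisk then stores its chunks on both of its mirrored drives.
    """
    chunk = block // chunk_blocks           # which chunk the block falls in
    offset = block % chunk_blocks           # position inside that chunk
    sub = chunk % sub_vdisks                # which RAID 1 pair gets the chunk
    local_block = (chunk // sub_vdisks) * chunk_blocks + offset
    return sub, local_block

# With 4 sub-vdisks and 128-block chunks, logical block 300 falls in
# chunk 2, so it lands on sub-vdisk 2 at local block 44.
print(raid10_location(300, 4, 128))  # (2, 44)
```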
FibreCAT SX Series Operating Manual
system
See array.
TByte (TB)
Terabyte. Equivalent to 1000 Gbyte for data storage and statistics, or 1024
Gbyte for memory.
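The two interpretations can be expressed as a small conversion helper (illustrative Python, not part of the FibreCAT tooling):

```python
TB_STORAGE_GB = 1000  # GByte per TByte for capacity and statistics figures
TB_MEMORY_GB = 1024   # GByte per TByte for memory figures

def tbyte_to_gbyte(tb: int, memory: bool = False) -> int:
    """Convert TByte to GByte using the decimal (storage) or
    binary (memory) interpretation."""
    return tb * (TB_MEMORY_GB if memory else TB_STORAGE_GB)

print(tbyte_to_gbyte(2))               # 2000
print(tbyte_to_gbyte(2, memory=True))  # 2048
```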
TCP/IP
Transmission Control Protocol/Internet Protocol
topology
The logical layout of the components of a computer system or network and their
interconnections. Topology deals with questions of what components are
directly connected to other components from the standpoint of being able to
communicate. It does not deal with questions of physical location of components or interconnecting cables. (SNIA)
trap
A type of SNMP message used to signal that an event has occurred. (SNIA)
UT
Universal Time. A modern time system related to the conventional Greenwich
Mean Time (GMT) used for time zones.
UPS
Uninterruptible Power Supply
vdisk
Abbreviation for virtual disk.
vdisk spare
A disk drive that is marked as a spare to support automatic data rebuilding after
a disk drive associated with a virtual disk fails. For a vdisk spare to take the
place of another disk drive, it must be at least equal in size to the failed disk drive
and all of the virtual disks dependent on the failed disk drive must be
redundant (RAID 1, 10, 3, 5, or 50).
VDS
Virtual Disk Service. An API that enables virtual disks and volumes to be
managed by third-party applications.
verify
A process that checks the integrity of the redundant data on fault-tolerant virtual
disks. For RAID 3, 5, and 50, the verify process recalculates the parity of data
stripes in each of the virtual disk's RAID stripe sets and compares it with the
stored parity. If a discrepancy is found, an error is reported and the new correct
parity is substituted for the stored parity. For RAID 1 and 10, the verify process
checks for mirror mismatches. If an inconsistency is encountered, data is copied
from the master disk drive to the slave disk drive. If a bad block is encountered
when the parity is regenerated, the data is copied from the other disk drive,
master or slave, to the reporting disk drive reallocating the bad block.
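The parity comparison at the core of verify can be sketched for a single RAID 5 stripe (a minimal illustration assuming XOR parity; function and variable names are hypothetical):

```python
from functools import reduce
from operator import xor

def verify_stripe(data_chunks: list[bytes], stored_parity: bytes) -> bool:
    """Recalculate XOR parity for one stripe and compare it with the
    stored parity, as the verify process does for RAID 3/5/50."""
    recalculated = bytes(
        reduce(xor, column) for column in zip(*data_chunks)
    )
    return recalculated == stored_parity

data = [b"\x0f\x0f", b"\xf0\x00", b"\x00\xff"]
good_parity = bytes(a ^ b ^ c for a, b, c in zip(*data))
print(verify_stripe(data, good_parity))    # True: parity is consistent
print(verify_stripe(data, b"\x00\x00"))    # False: a discrepancy is reported
```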
virtual disk
For FibreCAT SX storage systems, a set of disk drives that share a RAID level
and disk type, and across which host data is spread for redundancy or performance.
volume
A logical subdivision of a virtual disk. Multiple LUNs can be assigned to the
same volume, one for each host port given access to the volume.
See also standard volume.
volume mapping
The process by which volume permissions (read only, read/write, or none) and
LUNs are assigned to a host port.
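A mapping table of this kind can be sketched as a simple lookup (hypothetical Python; the volume and port names are illustrative only, not a FibreCAT interface):

```python
# Per host port, a volume is exposed with a LUN and an access permission.
volume_mappings = {
    "Volume1": {
        "A0": {"lun": 0, "access": "read-write"},
        "B0": {"lun": 0, "access": "read-write"},
        "A1": {"lun": 1, "access": "read-only"},
    }
}

def access_for(volume: str, host_port: str) -> str:
    """Return the permission a host port has for a volume ('none' if unmapped)."""
    entry = volume_mappings.get(volume, {}).get(host_port)
    return entry["access"] if entry else "none"

print(access_for("Volume1", "A1"))  # read-only
print(access_for("Volume1", "B1"))  # none
```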
WBI
See web-based interface (WBI).
web-based interface (WBI)
The web-based interface that system administrators can use to configure,
monitor, and manage controller enclosures and attached expansion enclosures.
The WBI is accessible from any management host that can access an array
through an out-of-band Ethernet connection.
world wide name (WWN)
A unique 64-bit number assigned by a recognized naming authority (often via
block assignment to a manufacturer) that identifies a node process or node port.
(SNIA)
FibreCAT SX storage systems derive WWNs from the serial numbers of
controller modules and expansion modules.
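A 64-bit WWN is conventionally written as colon-separated hex bytes; this can be sketched as follows (illustrative formatting only, not the FibreCAT derivation scheme, and the sample value is made up):

```python
def format_wwn(value: int) -> str:
    """Render a 64-bit world wide name in colon-separated hex notation."""
    raw = value.to_bytes(8, "big")
    return ":".join(f"{b:02x}" for b in raw)

print(format_wwn(0x2000001B329A6B7C))  # 20:00:00:1b:32:9a:6b:7c
```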
world wide node name (WWNN)
A globally unique 64-bit identifier assigned to each Fibre Channel node
process. (SNIA)
Figures
Figure 1: Components and Indicators on the Front of a Controller Enclosure. . . . . . . . 28
Figure 2: FibreCAT SX60 / SX80 / SX88 Controller Enclosure Ports (FC)
and Power Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Figure 3: FibreCAT SX80 iSCSI Controller Enclosure Ports (FC) and Power Switch . . . 30
Figure 4: FibreCAT SX100 Controller Enclosure Ports (FC) and Power Switch . . . . . . . 30
Figure 5: FibreCAT SX60 / SX80 / SX88 Controller Enclosure (FC) LEDs . . . . . . . . . . . 31
Figure 6: Expansion Enclosure Ports and Power Switch . . . . . . . . . . . . . . . . . . . . . . . . . 34
Figure 7: Expansion Enclosure LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Figure 8: Single-Controller, Direct Attach Connection to a Single Data Host (iSCSI) . . . 38
Figure 9: Detecting the HIM Model With FibreCAT SX Manager's WBI
(Example With Two HIM Models 0) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Figure 10: Detecting the HIM Revision With FibreCAT SX Manager's WBI
(Example With Two HIM Models 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Figure 11: Minimal Connection to a Single Data Host . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Figure 12: High-Availability, Dual-Controller, Direct Attached Connection
to Dual Data Hosts for Windows and Linux (no VMware support). . . . . . . . . . . . . . . . . . 43
Figure 13: Direct Attached FibreCAT SX (Controller Failover Scenario) . . . . . . . . . . . . . 44
Figure 14: Direct Attached FibreCAT SX (Path Failover Scenario) . . . . . . . . . . . . . . . . . 45
Figure 15: High-Performance, Dual-Controller, Direct Attached Connection
to Dual Data Hosts (no Solaris Support) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Figure 16: Four Dual-Port Data Hosts with FibreCAT SX100 . . . . . . . . . . . . . . . . . . . . . 47
Figure 17: Four Dual-Port Data Hosts with FibreCAT SX100 and Failed Controller . . . . 49
Figure 18: iSCSI Storage Presentation During Normal, Active-Active Operation . . . . . . 50
Figure 19: iSCSI Storage Presentation During Failover . . . . . . . . . . . . . . . . . . . . . . . . . 51
Figure 20: Redundant Connection Through a Single Switch to a Single Data Host . . . . 53
Figure 21: Redundant, High-Availability Connection Through Switches
to Dual Data Hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Figure 22: Switch Attached Configuration with Two Switches and Two Hosts. . . . . . . . . 56
Figure 23: Two Dual-Port Data Hosts with FibreCAT SX100 and a Switch
for High Availability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Figure 24: Sliding Rail Kit (A3C40075969) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Figure 25: Rack Post Mounting Positions of Two Enclosures. . . . . . . . . . . . . . . . . . . . . . 65
Figure 26: Support Bracket for Use in PRIMECENTER Racks . . . . . . . . . . . . . . . . . . . . 65
Figure 27: Mounting the Support Bracket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Figure 28: Mounting the Right Sliding Rail (Front Side of the Rack) . . . . . . . . . . . . . . . . 66
Figure 29: Mounting the Right Sliding Rail (Rear Side of the Rack) . . . . . . . . . . . . . . . . 67
Figure 30: Mounting the Left Sliding Rail (Rear Side of the Rack) . . . . . . . . . . . . . . . . . 67
Figure 31: Mounting Position of a Cage Nut Between the Rail Screws . . . . . . . . . . . . . . 68
Figure 32: Screwing on the Enclosure (Right Side Example) . . . . . . . . . . . . . . . . . . . . . 68
Figure 33: Fault-Tolerant Cabling Connections Between Controller and Expansion
Enclosures (left example: FibreCAT SX60 / SX80 / SX88,
right example: FibreCAT SX80 / SX88) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Figure 34: Non-Fault-Tolerant Cabling Connections Between Controller and Expansion
Enclosures (example: FibreCAT SX80 / SX88) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Figure 35: Cabling Connections Between a FibreCAT SX100 Controller
and 1 / 8 Expansion Enclosure(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Tables
Table 1: Typographic Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Table 2: General Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Table 3: Hard Disk Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Table 4: Supported RAID Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Table 5: Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Table 6: Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Table 7: Electrical Values per Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Table 8: Ambient Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Table 9: Heat Dissipation per Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Table 10: Dimensions per Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Table 11: Compliance with Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Table 12: Installation and Configuration Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Table 13: Dimension and Weight Specification Examples . . . . . . . . . . . . . . . . . . . . . . . . 22
Table 14: Environmental Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Table 15: Controller Enclosure Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Table 16: Controller Enclosure LEDs (Front) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Table 17: Controller Enclosure Ports and Switches (Back) . . . . . . . . . . . . . . . . . . . . . . . 30
Table 18: Controller Enclosure LEDs (Back, Power and Cooling Module). . . . . . . . . . . . 31
Table 19: Controller Enclosure LEDs (Back, Controller Module) . . . . . . . . . . . . . . . . . . . 32
Table 20: Expansion Enclosure Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Table 21: Expansion Enclosure Ports and Switches (Back) . . . . . . . . . . . . . . . . . . . . . . 34
Table 22: Expansion Enclosure LEDs (Back, Power and Cooling Module) . . . . . . . . . . . 35
Safety manual
Basic safety information including the handling of racks and rack-mount enclosures.
Supplied with the hardware as a printed manual.
MatrixEP
http://ts.fujitsu.com/matrixep
Index
A
adhesive labels on plastic casing parts 21
altitude range, operating 24
array
first-time configuration 73
test configuration 83
C
cable routing guidelines 26
clearance requirements
service 23
ventilation 23
CLI
see command-line interface
command-line interface
description 8
set controller IP address 73
configuration charts 37
console requirement 26
controller enclosure
configure host ports 78
connect to data hosts 41, 52
connect to remote management hosts 37
D
date
using WBI to set 77
date and time, set controller 77
dimensions, enclosure 22
disposal of equipment 21
Driver Settings 77
E
electrical guidelines 24
enclosure
cabling configurations 69
power off 85
power up 85
test connections 72
environmental protection 21
environmental requirements 24
expansion enclosure
connect to controller enclosure 69
F
frequency requirement, input 25
H
HIM
restriction 39
Host Interface Module
restriction 39
humidity range, operating 24
I
Initiator (iSCSI) 78
installation safety precautions 61
IP address
default 8, 76, 77
set controller via CLI 73
iSCSI initiator timeout value 80
iSCSI Software Initiator 78
M
management host requirements 26
Model of HIM 39
mpclaim 81
MPIO 81
N
Native MPIO 81
notation conventions 10
P
packing 21
physical requirements 22
placement guidelines 23
plastic casing parts, adhesive labels 21
power cord guidelines 24
power requirements, site 25
power, connect AC 72
R
radio suppression 20
recycling, equipment 21
restriction
direct connect mode 39
FibreCAT SX60 / SX80 39
returning, equipment 21
S
safety instructions 17
safety precautions 61
servermanagercmd 81
shock range, operating 24
site planning
console requirement 26
electrical requirements 24
environmental requirements 24
management host requirements 26
physical requirements 22
software, system requirements 76
storage system
environmental requirements 24
power off 85
power on 85
structure of the manual 9
system requirements
WBI 76
T
temperature range, operating 24
time
using WBI to set 77
timeout value for iSCSI initiator 80
V
ventilation requirements 23
vibration range, operating 24
virtual disks and volumes
create 81
view status 83
voltage requirement, input 25
W
WBI
see web-based interface
logging in 77
setting date and time 77
system requirements 76
web browser configuration 76
web-based interface
configure 76
create virtual disks 81
description 8, 76
log out 84
set date and time 77
test array configuration 83
weight guidelines 23
weight, enclosure 22
Windows 2008 81
wiring requirements, site 25