
Enterprise Virtual Arrays
The EVA family
A technical overview

Choong Ming Tze


Technical Consultant
ming-tze.choong@hp.com
May 2007

© 2007 Hewlett-Packard Development Company, L.P.


The information contained herein is subject to change without notice
Agenda
• The needs of today's businesses
• The EVA Family
• The value of Virtualization
• Tiered Storage within an EVA
• EVA Management
• EVA SW Features
• EVA Solutions
• EVA Services
Today’s customer challenges
• CIO's top issues
− Business environment is volatile and unpredictable
− Intense competitive pressure
− IT budgets constrained
• Key storage themes
− Consolidation
− Simplification
− Guaranteed service levels
HP StorageWorks EVA Portfolio
Powerfully Simple – Comprehensive Solutions

CIO – Better manage TCO (Value)
• Total solution: powerful, flexible, scalable
− HP StorageWorks EVA
− EML E-Series Tape Libraries
− Enterprise File Services Clustered Gateway

End user – Real-time information (Agility)
• Fast access to information
− Fast Recovery Solution for Windows 2003
− File System Extender

Administrator – Remove operational tasks (Simplicity)
• Powerfully simple
− Command View EVA
− Command View Tape Library
− Business Copy EVA
− Continuous Access EVA
− Cluster Extension EVA, Metrocluster, ContinentalCluster
HP StorageWorks product portfolio
The B-Series SAN switch family

• SAN Director 4/256 – 32-384 4Gb ports, FICON support
• 4/48 Port Blade – for the 4/256
• HP MPR Blade – for the 4/256
• HP 400 MP-Router – 16 FC + 2 IP ports
• SAN Switch 4/64 – 32-64 ports
• SAN Switch 4/32 – 16-32 ports
• SAN Switch 4/8 & 4/16 – 8 and 16 ports
• Brocade 4Gb SAN Switch for HP c-Class BladeSystem
• Brocade 4Gb SAN Switch for HP p-Class BladeSystem
• Common Fabric OS 5.x; Fabric Manager with enhanced capabilities
The C-series SAN switch family
Small & Medium-Sized Business; Enterprise & Service Provider

MDS 9000 Family systems
• MDS 9020*
• MDS 9120 and 9140
• MDS 9216 and 9216i
• MDS 9506
• MDS 9509
• MDS 9513

MDS 9000 modules
• Supervisor 1 and 2
• 14-Port, 16-Port, 32-Port 1 & 2 Gb FC
• 12-Port, 24-Port, 48-Port 1, 2 & 4 Gb FC
• 4-Port 10Gb FC
• IP Storage Services – iSCSI and FCIP
• SSM (Virtualization; Intelligent fabric Applications)

Management: Cisco Fabric Manager
OS: Cisco MDS 9000 Family SAN-OS; *FabricWare
StorageWorks market coverage
(Product families, from workgroup to data center, positioned along axes of availability and scalability: MSA20, 30, 50, 60, 70 – MSA500 – MSA1000, MSA1500, MSA1510i – EVA4000, EVA6000, EVA8000 – XP 10000, XP 12000.)

• Workgroups: simple, affordable, fault-tolerant Smart Array technology; high-performance internal/external storage with Smart Array technologies; price/capacity
• Branch office: clustering & shared storage; DtS conversion; price/availability
• Departmental: flexible and scalable entry-level Fibre Channel storage; scalable modularity; minimal infrastructure; ease of administration; price/scalability
• Enterprise: plugged into the data center fabric to maximize scalability and availability; high connectivity; high scalability; high efficiency; highest disaster tolerance; heterogeneous solutions; universal connectivity and heterogeneity
Transition Slide
The HP StorageWorks Enterprise Virtual Array (EVA)
The EVA family
Leading in array virtualization and ease of use
• A revolutionary redesign of the proven EVA3000 and
EVA5000 Storage Arrays
• Three family members for a broad range of prices,
storage capacities and performance
• 4Gbps FC Controller
• iSCSI Connectivity Option
• Concurrent support of various FC and FATA disks in the same disk enclosures
− 72, 146, 300GB FC
− 250, 400, 500GB FATA*
• Virtual RAID arrays: Vraid0, Vraid1, Vraid5
• Industry-standard multi-path failover support
− MPIO
− Pvlink
− DMP etc.
• Native HBA support (Sun, IBM, HP)
• Local and remote copy support
• Broad range of solutions and integrations available

*Note: Legacy 36GB FC and 250, 400GB FATA disks are still fully supported.
The EVA family specifications
Specification           EVA4000                EVA6000                EVA8000
Controller              HSV200                 HSV200                 HSV210
Cache size              4GB                    4GB                    8GB
RAID levels             VRAID0, VRAID1, VRAID5 (all models)
Supported OS            Windows 2000/2003, HP-UX, Linux, IBM AIX, OpenVMS, Tru64, Sun Solaris, VMware, NetWare (all models)
Supported drives        FC: 72, 146GB 15krpm; 146, 300GB 10krpm – FATA: 250, 400, 500GB (all models)
Host ports              4                      4                      8
Device ports            4                      4                      8
Mirror ports            4
Backend loop switches   0                      2                      4
# of drives             8 – 56                 16 – 112               8 – 240
# of enclosures         1 – 4                  4 – 8                  2 – 18
Max capacity            28TB                   56TB                   120TB
The EVA4000 architecture
• Heterogeneous servers
• Management server (Windows)
• Dual fabrics (Fabric 1, Fabric 2)
• 4Gbps front-end
• 2 HSV200 controllers
• 1 to 4 disk enclosures
• 8 to 56 FC disks
The EVA6000 architecture
• Management server (Windows)
• Dual fabrics (Fabric 1, Fabric 2)
• 4Gbps front-end
• 2 HSV200 controllers
• 2 FC loop switches
• 4 to 8 disk enclosures
• 16 to 112 FC disks
The EVA8000 architecture
• Heterogeneous servers
• Management server (Windows)
• Dual fabrics (Fabric 1, Fabric 2)
• 4Gbps front-end
• 2 HSV210 controllers
• 4 FC loop switches
• 2 to 18 disk enclosures (12 in the first rack, 6 in the utility cabinet)
• 8 to 240 FC disks
EVA Performance (based on 2GB controllers)

Controller limits: 100% cache hits
Workload                EVA5000    EVA8000
512 B reads (IOPS)      141,000    215,000
256KB reads (MB/s)      700        1,600

Maximum data transfer rates for 128 KB sequential workloads (MB/s)
Workload        EVA4000    EVA6000    EVA5000    EVA8000
Reads           340        770        530        1,430
Vraid1 writes   160        355        165        530
Vraid5 writes   260        515        153        525

Throughput (IOPS) under random workloads (4 KB transfers @ <30ms)
Workload                EVA4000    EVA6000    EVA5000    EVA8000
Reads                   14,500     27,600     50,000     55,900
Vraid1 writes           8,000      15,200     20,600     22,300
Vraid5 writes           4,400      8,000      12,200     13,000
Vraid1 OLTP (60r/40w)   11,300     21,200     30,400     32,600
Vraid5 OLTP (60r/40w)   7,000      13,900     22,100     23,300
Transition Slide
The Benefits of EVA Virtualization
Traditional Disk Array Approach
RAID levels in separate small disk groups, dispersed LUNs – beware of hot-spots

• The RAID controller organizes physical disks into small, fixed disk groups, each with its own RAID level (e.g. RAID0, RAID1, RAID5) and dedicated spare disk(s).
• LUNs (e.g. LUN 0 through LUN 7) are carved out of individual disk groups and presented to hosts.
• Because capacity and spindles are confined to each small disk group, LUNs end up dispersed across groups and I/O hot-spots must be watched for.
HP Virtual Array Approach
Disk groups, segments, block mapping tables & sparing

• The virtual array controller pools disks into disk group(s).
• A block mapping table maps virtual-disk segments onto the disks of the group.
• Spare capacity is distributed across the group instead of being tied to dedicated spare disks.
HP Virtual Array Approach
Disk groups
An EVA can have

• 1 to 16 disk groups
• 8 to 240 disks per disk group
HP Virtual Array Approach
LUN/vdisk allocation

• Presented LUNs: 1 and 2
• The virtual array controller stripes LUN 1 (RAID1) and LUN 2 (RAID5) across the disks of the group.
HP Virtual Array Approach
LUNs/vdisks and their allocation
An EVA can have

• 1 to 1024 virtual disks/LUNs
• LUN sizes from 1GB to 2TB, in steps of 1GB
• any combination of VRAID 0, 1, 5
HP Virtual Array Approach
Capacity upgrade, disk group growth and load leveling

• When disks are added to a disk group, the existing LUNs (LUN 1, 2, 3) are automatically re-leveled across all members of the grown group, including the new disks.
HP Virtual Array Approach
All RAID levels within a disk group, optimal striping, no hot-spots

• Presented LUNs: 1, 2 and 3
• LUN 1 (RAID0), LUN 2 (RAID1) and LUN 3 (RAID5) coexist in the same disk group.
• The block mapping table stripes every LUN, together with the distributed spare capacity, across all disks in the group – a sketch of this mapping follows below.
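To make the block-mapping idea concrete, here is a minimal Python sketch (a toy model for illustration only, not HP's actual metadata format or allocation policy): a mapping table hands out segments round-robin across every disk in a disk group, so any virtual disk, whatever its Vraid level, uses all spindles.

# Toy model of EVA-style virtualization: a block mapping table spreads the
# segments of every virtual disk (LUN) across all disks in a disk group.
class DiskGroup:
    def __init__(self, num_disks, segments_per_disk):
        self.num_disks = num_disks
        # Free segments listed in round-robin order across the disks.
        self.free = [(d, s) for s in range(segments_per_disk)
                            for d in range(num_disks)]

    def allocate(self, num_segments):
        """Hand out segments round-robin across all disks (striping)."""
        if num_segments > len(self.free):
            raise ValueError("disk group out of capacity")
        chunk, self.free = self.free[:num_segments], self.free[num_segments:]
        return chunk

class VirtualDisk:
    def __init__(self, name, size_segments, group):
        self.name = name
        self.mapping = group.allocate(size_segments)   # block mapping table

    def locate(self, logical_segment):
        """Translate a logical segment number to (physical disk, segment)."""
        return self.mapping[logical_segment]

group = DiskGroup(num_disks=8, segments_per_disk=1000)
lun1 = VirtualDisk("LUN1", 16, group)
lun2 = VirtualDisk("LUN2", 16, group)
print(lun1.locate(0), lun1.locate(1))   # consecutive blocks land on different disks
print(lun2.locate(0))                   # LUN2 also uses all spindles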
HP Virtual Array Approach
Online volume growth

• A presented LUN (e.g. LUN 1 or LUN 2) can be grown online; the added capacity is taken from the disk group and striped across its disks without interrupting host access.
The value of the EVA virtualization
Lower management and training costs
• Easy-to-use, intuitive web interface
• Unifies storage into a common pool
• Effortlessly create virtual RAID volumes (LUNs)

Improved application availability
• Enterprise-class availability
• Dynamic pool and Vdisk (LUN) expansion
• No storage reconfiguration downtime

Buy less – service more customers
• Significantly increase utilization and reduce stranded capacity

Improved performance
• Vraid striping across all disks in a disk group
• Eliminate I/O hot spots
• Automatic load leveling
Transition Slide
EVA iSCSI Connectivity Option
EVA iSCSI connectivity option
• An integrated EVA solution
− Mounted in the EVA cabinet
− Provides LUNs to iSCSI hosts
− Managed and controlled by Command View
− Flexible Connectivity
• Fabric and direct attach on EVA 4/6/8000
• Fabric attach on EVA 3/5000
• Single or dual iSCSI option
− A324A single router configuration
− A325A upgrade to dual router configuration
• High Performance Solution
− 35K target IOPS
• OS Support
− Microsoft Windows 2003, SP1
− Red Hat Enterprise Linux:
• Red Hat™ Enterprise Linux 4 update 3 (kernel 2.6.9-34)
• Red Hat Enterprise Linux 3 update 5
− SUSE® Linux Enterprise Server:
• SUSE Linux Enterprise Server 9, SP3 (2.6.5 kernel)
• SUSE Linux Enterprise Server 8, SP4
EVA iSCSI connectivity option
• An integrated EVA solution
− Mounted in the EVA cabinet
− Provides LUNs to iSCSI hosts
− Managed and controlled by Command View
− Flexible connectivity
• Fabric and direct attach on EVA 4/6/8000
• Fabric attach on EVA 3/5000 only
• Single or dual iSCSI option
− A324A single router configuration
− A325A upgrade to dual router configuration
• High-performance solution
− 35K target IOPS
EVA iSCSI connectivity option
MPX100 – FC/iSCSI bridge
• OEM Qlogic iSR-6140
• 533MHz PowerPC CPU
• 128MB DDR2 memory
• Internal buses are 133MHz, 64bit PCI-X
• Single power supply
• Physical footprint is 1U high, half-rack width
− Allows a dual redundant mpx100 pair in a 1U rack slot
• The mpx100 is the FRU level
• Ports: dual 2 Gb/s FC (QLogic ISP2322 chip); dual GbE RJ45 iSCSI/TOE (QLogic ISP4022 chip); serial console port (defaults to 115200N81); 100Mbit/s RJ45 management port
EVA iSCSI connectivity option
The iSCSI host driver – iSCSI initiator
• Resides on the host and provides host-to-storage connectivity over an IP network
• Uses the host's existing TCP/IP stack, network drivers and network interface card(s) (NIC) to provide the same functions as native SCSI drivers and Fibre Channel Host Bus Adapter (HBA) cards
• Functions as a transport for SCSI commands and responses between the host and the mpx100 on an IP network. (The mpx100 then translates the SCSI commands and responses and communicates directly with the target Fibre Channel storage devices.)

Host stack (iSCSI initiator): applications → file system → block device → SCSI generic → iSCSI host driver → TCP/IP stack → NIC driver → NIC adapter → IP network.
mpx100 (iSCSI target): SCSI/TCP server with GigE NIC and FC HBA, bridging to direct- or SAN-attached EVA storage.
EVA iSCSI connectivity option
Fabric attached (all EVA models supported)

Single iSCSI router configuration (Windows and Linux):
• FC servers (any supported OS) and iSCSI servers (Windows or Linux) share the EVA.
• iSCSI servers reach one mpx100 over the iSCSI IP network; the mpx100 attaches to both Fibre Channel fabrics.
• The Command View EVA management server and the mpx100 are reached over the management IP network.

Dual iSCSI router configuration (Windows only):
• As above, but with two mpx100 routers for redundant iSCSI paths to the EVA controllers.
EVA iSCSI connectivity option
Direct attached (EVA 4/6/8000 and Windows only supported)

Single iSCSI router configuration:
• FC servers (any supported OS) connect through the Fibre Channel fabrics; iSCSI servers (Windows) connect over the iSCSI IP network to one mpx100, which attaches directly to the EVA controller host ports.

Dual iSCSI router configuration:
• As above, but with two mpx100 routers directly attached to the EVA controllers for redundant iSCSI paths.

The mpx100 routers and the Command View EVA management server are managed over the management IP network; an iSCSI-only deployment (no FC servers or fabrics) is also possible in either configuration.
Configuration support overview

Columns: EVA model | iSCSI initiator | EVA firmware | single mpx100 | dual mpx100 | fabric attached | direct attached mpx100 2)

• EVA 4/6/8000, Windows, ≥ XCS 5.1x0 / ≥ XCS 6.000: single √, dual √, fabric attached √, direct attached √
• EVA 4/6/8000, Windows, ≥ XCS 5.031: single √, dual √, fabric attached √, direct attached √ (all other EVA ports will run in loop mode as well and therefore only support directly connected Windows servers)
• EVA 4/6/8000, Linux, ≥ XCS 5.1x0 / ≥ XCS 6.000: single √ 1), fabric attached √, direct attached √
• EVA 4/6/8000, Linux, ≥ XCS 5.031: single √ 1), fabric attached √, direct attached √ (all other EVA ports will run in loop mode as well and therefore only support directly connected Windows servers)
• EVA 3/5000, Windows, ≥ VCS 4.004: single √, dual √, fabric attached √
• EVA 3/5000, Windows, ≥ VCS 3.028: single √, dual √ (fabric attached only)
• EVA 3/5000, Linux, ≥ VCS 4.004: single √ 1) (fabric attached only)
• EVA 3/5000, Linux, ≥ VCS 3.028: single √ 1) (fabric attached only)

1) You can run Linux in a dual mpx100 environment as long as you configure only one of the two mpx100s as a target. Linux will run single path while other Windows hosts can run multipath across both mpx100s.
2) Continuous Access EVA is not supported in direct attached environments.
Tiered storage with EVA

• Online – active data, mirroring, instant recovery and stale data: FC drives (72, 146, 300GB 15krpm; 72, 146, 300GB 10krpm)
• Near-online – 3 to 6 months active, faster recovery, infrequently accessed data: FATA drives (250, 400, 500GB)
• Nearline – 1 year active, file recovery, off-site recovery: tape/automation
• Offline – 5 years active, off-site storage, disaster recovery: tape and optical
• Archive – 30-year records or longer, offsite storage, retrievable: optical
EVA interface – Group properties
Building storage classes within an EVA

• File servers, DB servers, a backup server and an archive server can each be directed at a different class of disk group behind the same HSV controller pair:
− Fast FC disks: 73, 146GB 15krpm
− Large FC disks: 146, 300GB 10krpm
− FATA disks: 250/400/500GB, near-online
EVA selective storage presentation
• What does it do?
− Provides storage access control, assuring that a host cannot access data belonging to a different host.
• How does it work?
− Selectively grants access of HBA WWNs to LUNs (LUN masking); each HBA WWN (e.g. WWN 1–4) only sees the LUNs presented to it. A minimal sketch follows below.
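A minimal Python sketch of the access-control idea, with purely hypothetical WWNs and LUN numbers (this is illustrative only, not Command View's data model):

# Sketch of selective storage presentation (LUN masking): access is granted
# per HBA WWN, so a host only sees the virtual disks presented to it.
presentation = {
    "50:01:43:80:01:23:45:01": {1, 2},   # host A's HBA may see LUN 1 and 2
    "50:01:43:80:01:23:45:02": {1, 2},   # host A's second HBA (same view)
    "50:01:43:80:09:87:65:01": {3},      # host B's HBA may only see LUN 3
}

def may_access(initiator_wwn, lun):
    """Return True if the array should expose this LUN to the given WWN."""
    return lun in presentation.get(initiator_wwn, set())

assert may_access("50:01:43:80:01:23:45:01", 2)
assert not may_access("50:01:43:80:09:87:65:01", 2)   # host B cannot see host A's data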
Multipathing for EVA3000/5000
VCS 2.x and 3.x
• They use an active/passive LUN presentation model: a LUN is only actively presented on one HSV controller (diagram: HBAs 1–4 with active paths to the owning controller and passive paths to the other controller).
• The multipathing implementation in the OS, or Secure Path, has to make sure that
− the OS only uses the active controller for a particular LUN
− LUN ownership and the paths used are switched over in case of an error
− load balancing is done where possible
New EVA multipathing
XCS ≥5.x and VCS 4.x
• Uses an active/active LUN presentation model: a LUN is actively presented on both HSV controllers.
• Support for industry-standard multipathing solutions such as
− MPIO for Windows, AIX and NetWare
− MPxIO/STM and DMP for Solaris
− Pvlink and DMP for HP-UX
• A LUN is still owned by one controller. If an I/O arrives at the non-owning controller, it is passed over to the owning controller via the cache mirror ports.
• The multipathing implementation in the OS only has to make sure that
− a single LUN is not presented multiple times
− load balancing is done where possible
Multipathing and boot support
Columns: Operating system | EVA 3000/5000 with VCS 2.x and 3.x | EVA 4/6/8000 and EVA 3000/5000 with VCS 4.x | ALUA/ALB | Concurrent SAN attachment ¹) | Boot
native pvlinks
HP-UX Secure Path v3.0F
Same server
Same HBA
Secure Path v3.0F
Veritas DMP

HP MPIO - AA DSM (full-feature)
Windows
MPIO DSM
Secure Path v4.0C SP2
Same server
Same HBA
Veritas MPIO DSM
Direct server attachment supported
 
Qlogic FO driver; Emulex Lightpulse
Linux
Qlogic FO driver – basic
Secure Path v3.0C SP2
Same server
Same HBA
Md driver planned; DMP support by
Symantec

Tru64 native
Same server
Same HBA
native 
OVMS native
Same server
Same HBA
native 
Solaris
Secure Path v3.0D SP1 Same server
Different HBA
MPxIO/STM (also with non-SUN HBAs) ²)
Veritas DMP  
AIX
Secure Path v2.0D SP3
Antemeta Solution
Same server
Different HBA
MPIO – PCM 
Netware Secure Path v3.0C SP2.1
Same server
Same HBA
Native 
VMware ESX VM MPIO
Same server
Same HBA
VM MPIO 

1) For details see the SAN Design Reference Guide: heterogeneous server rules, on www.hp.com/go/sandesign
2) See http://www.sun.com/io_technologies/qlogic_corp_.html
ALUA
Asymmetric Logical Unit Access, defined by INCITS T10 / Adaptive Load Balance

• A LUN can be accessed through multiple target ports.
• Target port groups can be defined to manage target ports with the same attributes.
• The ALUA inquiry string reports one of the following states/attributes per target port group:
− Active/optimized
− Active/non-optimized
− Standby
− Unavailable
(Diagram: HBAs 1 and 2 reach the LUN through target ports 1..n, organized into target port groups; paths through the owning controller are active/optimized, the others active/non-optimized.)
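How a host-side multipathing driver can consume these states can be shown with a small Python sketch (illustrative only; real DSMs such as the HP MPIO DSM implement this inside the driver stack, and the path names here are invented):

# Prefer active/optimized paths (owning controller) and fall back to
# active/non-optimized paths only when no optimized path is left.
from random import choice

paths = [
    {"name": "hba1->ctrl1-fp1", "alua_state": "active/optimized",     "up": True},
    {"name": "hba2->ctrl1-fp2", "alua_state": "active/optimized",     "up": True},
    {"name": "hba1->ctrl2-fp1", "alua_state": "active/non-optimized", "up": True},
    {"name": "hba2->ctrl2-fp2", "alua_state": "active/non-optimized", "up": True},
]

def pick_path(paths):
    for preferred in ("active/optimized", "active/non-optimized"):
        candidates = [p for p in paths if p["up"] and p["alua_state"] == preferred]
        if candidates:
            return choice(candidates)      # trivial load balancing across equals
    raise RuntimeError("no usable path")

print(pick_path(paths)["name"])            # an optimized path to the owning controller
for p in paths[:2]:
    p["up"] = False                        # owning controller's ports fail
print(pick_path(paths)["name"])            # falls back to a non-optimized path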
ALB and Windows MPIO
Adaptive Load Balance
• HP implementation of ALUA in the Windows MPIO DSM (initial release 2.01.00)
• Supported with EVA 3/5000 on VCS 4.x and with EVA 4/6/8000
• Enabled by the DSM CLI command "hpdsm set device=x alb=y" or the DSM manager GUI

HP MPIO Full-Featured DSM for EVA Disk Arrays (Windows 2000/2003)
• Maximum number of HBAs per host: 8
• Maximum number of paths per LUN for EVA: 32
• Failback: yes
• Load balancing: yes
• User interface: yes
• Support for Microsoft Cluster: yes
• Coexistence with HP MPIO basic failover for EVA arrays on the same server: yes
• Coexistence with HP MPIO Full-Featured DSM for EVA 3/5000 VCS 4.x and XP disk arrays on the same server: yes

For more information see: http://h18006.www1.hp.com/products/sanworks/multipathoptions/index.html
Transition Slide
HP StorageWorks EVA Software Solutions
EVA software
XCS6.0 / Command View EVA 6.0
− Windows based authentication (same as RSM)
• Impacting GUI, SSSU and API
• Single sign-on
− Support of new firmware features
• Mirrorclone
• Snapshot Restore
• Enhanced async CA
• Non-migrate disk firmware update
• Progress indicators
− Usability enhancements
• Single Page creation of snapshots, snapclones, mirrorclones,
diskgroups, storage initialization
• Delete SnapClone while normalizing
• CA links status
XCS6.0 / CV EVA 6.0 Gotchas
As of 22.11.06

• AppRM (replacement for FRS)


− Not supported for CA volumes
− Only SnapClones supported
• MetroCluster EVA
− Support expected December 06
• Data Protector ZDB and Instant Recovery
− No container support yet
− No MirrorClone support yet
• Storage Essentials
− Only supported with SE 5.1 SP1, expected December 06
Command View Security
• Totally new security model New with XCS 6.0
− No longer relying on System Management Homepage/WBEM
− Instead CV EVA now uses Windows based authentication
• Use your Windows account to log into CV EVA
− Consistently used across all interfaces (CV GUI, SSSU, API)
− Providing two levels of access
• Admin access: full access to all functionality
• User access: read-only access
− Introducing user-id based auditing
• If turned on, all actions are logged by user
• Written to a file (locally or on a share) and/or to the Windows Application Event Log

• Side effect:
− Command View EVA no longer relies on the System Management Homepage
− Therefore the port has been changed to: https://localhost:2372/command_view_eva
Non-migrate disk drive firmware update
New with XCS 6.0

• Pre-XCS 6 possibilities
− Massive disk drive code load, updating all drives at once
• Single image applied like an EVA firmware code load
• The EVA is offline for several minutes
− Single ungrouped disk drive code load
• Every drive has to be ungrouped, updated and re-grouped
• Massive time and effort

• New with XCS 6 (the above are still possible)
− Ability to code load disk drives while they are grouped
− Prerequisite:
• No Vraid0 Vdisk may exist in that disk group
− Process (sketched below):
• The EVA takes the disk out of operation, code loads it and then reintroduces it
• Any write to that disk is buffered and applied once the disk drive is back
• Reads are generated from RAID redundancy information
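A conceptual Python sketch of that process, assuming a mirrored or parity-protected disk group (a toy model of the behaviour described above, not the controller's real algorithm):

# While one grouped disk is being flashed, writes destined for it are held
# back and replayed afterwards, and reads are reconstructed from redundancy.
class MemberDisk:
    def __init__(self):
        self.online = True
        self.blocks = {}          # block number -> data
        self.pending = []         # writes buffered while offline for code load

    def write(self, block, data):
        if self.online:
            self.blocks[block] = data
        else:
            self.pending.append((block, data))

    def read(self, block, rebuild_from_raid):
        if self.online:
            return self.blocks.get(block)
        return rebuild_from_raid(block)   # e.g. mirror copy or parity rebuild

    def start_code_load(self):
        self.online = False

    def finish_code_load(self):
        self.online = True
        for block, data in self.pending:  # apply buffered writes in order
            self.blocks[block] = data
        self.pending.clear()

disk = MemberDisk()
disk.write(7, b"old")
disk.start_code_load()
disk.write(7, b"new")                                     # buffered, not lost
print(disk.read(7, rebuild_from_raid=lambda b: b"old"))   # served from redundancy
disk.finish_code_load()
print(disk.read(7, rebuild_from_raid=None))               # b'new' after replay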
HP Command View EVA
Powerfully simple management

• Provides a powerfully simple management experience for all EVA arrays
• Automates and aggregates management tasks
• HP ISEE solutions offer proactive remote monitoring services for maximum uptime
• Intuitive, easy-to-use GUI
• Enables you to quickly expand a LUN online, configure LUNs or RAID groups, or add physical disks with just a few mouse clicks
• Uses standards-based SMI-S

The HP Command View EVA suite sits between management applications and the arrays: configuration, discovery, events & monitoring, security, performance monitoring, LUN masking, replication, CLUI/basic scripting and monitoring agents, exposed through SMI-S and APIs to the HP Enterprise Virtual Arrays.
CV EVA deployment options
Choice and flexibility to maximize your investment; broad Microsoft Windows host OS coverage; host-based or direct host attached device management.

• HP Storage Management Appliance (discontinued): existing OV SOM installs – SMA SW v1.2 with OV SOM v1.2 (includes OV SNM) or CV EVA; manages up to 16 EVAs over the SAN (CV EVA ≥5.0 required for EVA4000/6000/8000).
• General-purpose server: runs the customer application plus CV EVA (or an existing OV SOM v1.2 install, which includes OV SNM), attached via Fibre Channel or Gigabit Ethernet (iSCSI); manages up to 16 EVAs.
• HP ProLiant Storage Management Server / dedicated NAS server: runs CV EVA on the NAS OS; manages up to 16 EVAs over the SAN (CV EVA ≥5.0 required for EVA4000/6000/8000).
HP command view EVAperf
EVA performance analysis
• Performance analysis tool for whole EVA
product line
• Shipped with Command View EVA
• Integrates with Windows PerfMon
• Create your own scripts via a command
prompt
• Monitor in real-time and view historical EVA
performance metrics to more quickly identify
performance bottlenecks
• Easily monitor and display EVA performance metrics:
− Host connection data
− Port status
− Host port statistics
− Storage cell data
− Physical disk data
− Virtual disk data
− CA statistics

See the EVAPerf whitepaper on:
http://h18006.www1.hp.com/storage/arraywhitepapers.html
EVA Replication Software
Enhancements with XCS 6.0xx
• Replication Solution Manager 2.1
− Tru64 Host Agent
− Single sign-on
• Business Copy 4.0
− MirrorClone feature with Delta Resync and Instant
Restore
− Instant Restore from a Snapshot
• Continuous Access 3.0
− Enhanced asynchronous performance and distance
support by using buffer-to-disk (journaling)
Replication Solutions Manager 2.1
Familiar browser based navigation

Selectable views

Select hosts or storage volumes


Auto discovery of storage systems
and volumes

Oracle Application Integration

Status monitoring
Context-sensitive actions and wizards
Local and remote management
Interactive topology manager
Business Copy EVA
Point-in-time copy capability for the EVA (local copy)
4 options available:
• Space-efficient vSnapshot
• Pre-allocated vSnapshot
• vSnapclone
• MirrorClone
Controlled from Command View, RSM or SSSU.

Ideally suited to create point-in-time copies to:
• Keep applications online while backing up data
• Test applications against real data before deploying
• Restore a volume after a corruption
• Mine data to improve business processes or customer marketing
Space-efficient snapshots
Virtually capacity-free

Timeline: at t0 the command "create snapshot A" creates snapshot A', whose contents are identical to volume A. As volume A receives updates at t1 and t3, the original blocks are preserved for A' (copy on write), so A' keeps presenting A's contents as of t0 while the two volumes diverge. Only the diverged blocks consume capacity.
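The copy-on-write bookkeeping behind this timeline can be shown with a short Python sketch (purely illustrative; the EVA does this per Vraid segment inside the controller):

# Toy copy-on-write snapshot: at t0 the snapshot shares every block with
# volume A; only blocks that A later overwrites are copied aside, so the
# snapshot keeps presenting A's contents as of t0.
class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []

    def create_snapshot(self):
        snap = Snapshot(self)
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        for snap in self.snapshots:
            snap.preserve(index, self.blocks[index])  # copy before write
        self.blocks[index] = data

class Snapshot:
    def __init__(self, source):
        self.source = source
        self.preserved = {}        # only diverged blocks consume space

    def preserve(self, index, old_data):
        self.preserved.setdefault(index, old_data)

    def read(self, index):
        return self.preserved.get(index, self.source.blocks[index])

vol_a = Volume(["a0", "a1", "a2"])
snap = vol_a.create_snapshot()        # $ create snapshot "A"
vol_a.write(1, "a1-updated")          # t1: A receives updates
print(vol_a.blocks[1], snap.read(1))  # a1-updated a1 -> snapshot still shows t0
print(len(snap.preserved))            # 1 block of real capacity used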
Pre-allocated snapshots
Space reservation

Same copy-on-write timeline as the space-efficient snapshot, but the snapshot's capacity is reserved in the disk group when "create snapshot A" is issued.
New: Pre-allocated 3-phase snapshots
Space reservation

Three phases: (1) at t-x an empty container is created; (2) at t0 "create snapshot A" turns the container into the snapshot, which then tracks volume A via copy on write as A receives updates at t1 and t3; (3) at t5 the snapshot can be converted back into an empty container for reuse.
SnapClone of virtual disks
Full copy

Timeline: at t0 "create snapclone A" starts the cloning process; volume B immediately presents A's contents as of t0 while the clone is built in the background (copy on write protects not-yet-copied blocks as A receives updates at t1). When cloning finishes, the relation is suspended and B is an independent full copy of A as of t0.
3-phase SnapClone
Full copy

Same as the SnapClone timeline, with an empty container created in advance (at t-x) and used as the clone target at t0; after cloning finishes, the clone can later be converted back into an empty container.
Business Copy 4.0 MirrorClones
New with XCS 6.0
• A MirrorClone is a pre-normalized clone
− Full clone of the source
• Requires 100% of the capacity (if same RAID level)
− Synchronous mirror between the source Vdisk and the MirrorClone
• Once synchronized, data is always identical (unless fractured)
− The MirrorClone can be in a different disk group / have a different RAID level
• Tiered storage approach
• Can be used to protect against physical failures
• A point-in-time copy is established at the moment a fracture is made
− Differences are tracked via a bitmap
− Delta resync/restore is accomplished by only resynchronizing/restoring data that is marked different
• Primary advantages
− Data is available at the instant of the split
− A delta resync takes less time than a full copy
MirrorClone Tasks
• Initial creation
− Will establish MirrorClone relationship and start initial copy
• Fracture (only permitted when fully synchronized)
− Will establish a point-in-time copy by stopping replication of writes to MirrorClone
− Deltas are tracked in a bitmap (for both source VDisks and MirrorClone)
− Allows MirrorClone to be presented
• Resync (only permitted when fractured)
− Will resync the deltas from the source VDisk to the MirrorClone leading to a
synchronized MirrorClone
• Restore (only permitted when fractured)
− Will restore the source VDisk back to the point-in-time the MirrorClone was
fractured
− Instant access to restored data
• Detach (only permitted when fractured)
− Breaks the MirrorClone relation and converts the MirrorClone into a standalone Vdisk
− If they exist, snapshots taken from the MirrorClone stay intact and attached to the former MirrorClone
(A sketch of the fracture/delta-resync bookkeeping follows below.)
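A Python sketch of that bookkeeping (a toy model with a per-block bitmap; the block granularity and data structures are assumptions for illustration, not the EVA's internals):

# Fracture / delta resync: after a fracture only changed blocks are flagged,
# and a resync copies just those blocks back to the MirrorClone.
NUM_BLOCKS = 8

source      = ["s"] * NUM_BLOCKS
mirrorclone = list(source)          # fully synchronized pre-normalized clone
delta       = [False] * NUM_BLOCKS  # per-block "is different" bitmap

fractured = True                    # point-in-time copy established

def write_source(block, data):
    source[block] = data
    if fractured:
        delta[block] = True         # track the delta instead of replicating
    else:
        mirrorclone[block] = data   # synchronous mirror while not fractured

write_source(2, "s2-new")
write_source(5, "s5-new")

def resync():
    """Copy only the blocks marked different, then resume mirroring."""
    global fractured
    for block, dirty in enumerate(delta):
        if dirty:
            mirrorclone[block] = source[block]
            delta[block] = False
    fractured = False

resync()
print(mirrorclone == source)        # True, after copying just 2 of 8 blocks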
MirrorClone creation
(MirrorClone source = production Vdisk; MirrorClone target)

Initial situation
• The Vdisk is presented to the host and the volume is mounted; the host reads and writes to it.

The user...
• Creates an empty container with the same size as the source Vdisk (RAID level and disk group can be different)
• Creates the MirrorClone using the container as target

The EVA...
• Establishes the MirrorClone relationship
• Starts the initial synchronization of the MirrorClone behind a copy fence; the volume stays fully accessible to the host

Synchronized MirrorClone
• Once the MirrorClone is synchronized, data on both volumes is kept identical
• Writes are applied to both volumes
• Reads are satisfied by the source only
MirrorClone fracture and resync
(MirrorClone source = production Vdisk; MirrorClone target)

The user fractures the MirrorClone; the EVA...
• Stops applying writes to the MirrorClone target
• Instead, changes are marked in a delta bitmap

The user...
• Can present the fractured MirrorClone for various purposes (read and write); changes to both source and target are recorded in the delta bitmap
• Initiates resynchronization of the volumes in either direction; the EVA copies changed blocks only, until source and target are synchronized

Synchronized MirrorClone
• Once the MirrorClone is synchronized, data on both volumes is kept identical
• Writes are applied to both volumes
• Reads are satisfied by the source only
Combining Snapshots and MirrorClones
• MirrorClone and snapshot can be combined by taking the snapshot from the MirrorClone target
• Advantages:
− A way to get around the snapshot copy-before-write performance impact
− "Cross disk group" snapshots, by putting the MirrorClone into a different disk group
• The snapshots will allocate space in the MirrorClone's disk group
• Disadvantages:
− No snapshot restore in the first release
• A workaround is to detach the MirrorClone, then restore and present it as the original LUN
• Direct restore is planned for end 2006
Continuous access EVA
Remote copy capability for the EVA
Continuous Access EVA delivers array-based remote data replication –
protecting your data and ensuring continued operation from a disaster.
• Ideally suited to:
• Keep byte for byte copies
of data at a remote site
for instant recovery
• Replicate data from
multiple sites to one for
consolidated backup
• Shift operations to a
recovery site for primary
site upgrades and
maintenance
• Ensure compliance to
government legislation
and business objectives
Continuous Access EVA
Remote copying
• What does it do?
− Replicates LUNs between EVAs
− Provides disaster recovery
− Simplifies workload management
− Allows point-in-time database backup
− Provides restore without latency
• How does it work?
− Creates up to 256 copy sets for the specified logical units in the array, replicating each source volume to a destination volume over Fibre Channel and FC extensions
− Synchronous and asynchronous support, up to 20,000 km (200 ms round-trip time); a rough feel for why long distances require asynchronous mode is sketched below
− Works with all EVAs
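As a back-of-the-envelope worked example, the following Python snippet assumes one link round trip per synchronous write and ignores local service time (a deliberately simplified model, not a sizing tool):

# Effect of link round-trip time on synchronous remote replication: each
# write must be acknowledged by the remote array before it completes, so the
# per-stream write rate is roughly bounded by 1 / RTT.
def max_sync_writes_per_second(rtt_ms, outstanding_ios=1):
    return outstanding_ios * 1000.0 / rtt_ms

for distance_km, rtt_ms in [(100, 1), (1000, 10), (20000, 200)]:
    rate = max_sync_writes_per_second(rtt_ms)
    print(f"{distance_km:>6} km, RTT {rtt_ms:>3} ms: "
          f"~{rate:,.0f} sync writes/s per outstanding IO")
# At 200 ms RTT a single-threaded writer manages only ~5 writes/s, which is
# why long-distance replication uses asynchronous mode with a write log.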
DR groups and managed sets
DR Group
• Consistent Group of replicated copy sets (Vdisks)
− Up to 256 DR Groups or DR Group members/array
− Up to 32 replicated copy sets / DR Group
− IO ordering across members is guaranteed
− Share a single write history log
− Vdisks within a DR Group behave like a single entity
− Management commands like suspend or failover are handled
atomically
− All source members online on same HSV controller
• Therefore a DR Group is the primary level of CA
management
− Write Mode ([Synchronous] / Asynchronous)
− Failsafe Mode (Enabled or [Disabled])
− Suspend Mode ([Resume] / Suspend)
− Failover command
Managed Sets
• Another level of CA management
− A collection of DR groups for the purpose of common management
− No consistency guarantee across DR groups, as there is between members of a DR group
− If you perform a management command on a managed set, the command is run for each contained DR group, one by one
(Hierarchy: Vdisk → DR group → Managed Set)
Continuous Access EVA 3.0
Enhanced Async, implemented with XCS 6.0

• Replaces the previous CA async mode
• Tunnel resources are 124 x 8KB buffers = ~1MB of data on the fly
• Enhanced Async uses a write history log
− You set its size and location when a copy set is created
− You can force a full copy
− The log is a circular buffer (sketched below)
• The log overflows when the tail meets the head
• An overflow of the log forces a full copy of the DR group
• Draining the log requires a transition to synchronous CA
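A toy Python model of such a write history log (simplified: a fixed entry count, no journaling to disk, a single DR group – not the EVA's actual implementation):

# Writes are appended to a fixed-size circular log and drained to the remote
# site in order; if the tail catches the head the log overflows and a full
# copy of the DR group is required.
from collections import deque

class WriteHistoryLog:
    def __init__(self, capacity_entries):
        self.capacity = capacity_entries
        self.entries = deque()
        self.overflowed = False       # overflow forces a full copy

    def record(self, write):
        if self.overflowed:
            return
        if len(self.entries) >= self.capacity:
            self.overflowed = True    # tail met head: full copy needed
            self.entries.clear()
        else:
            self.entries.append(write)

    def drain(self, max_entries):
        """Replicate the oldest writes to the destination in order."""
        sent = []
        while self.entries and len(sent) < max_entries:
            sent.append(self.entries.popleft())
        return sent

log = WriteHistoryLog(capacity_entries=4)
for i in range(3):
    log.record(("vdisk1", i))
print(log.drain(2))                   # writes leave the log in arrival order
for i in range(10):
    log.record(("vdisk1", 100 + i))   # a burst larger than the log
print(log.overflowed)                 # True -> DR group needs a full copy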
Continuous Access 3.0 Enhanced Async
(Chart of MB/sec over the day, 8am to midnight: synchronous CA must be sized for the 100th percentile of the write workload, classic async CA for the 95th percentile, and Enhanced Async CA only for the 50th percentile.)
Multiple relationships

• Fan-in of multiple relationships
− The ability of one EVA to act as the destination for different LUNs from more than one source EVA
• Fan-out of multiple relationships
− The ability for different LUNs on one EVA to replicate to different destination EVAs
• Bidirectional
− One array with copy sets acting as both source and destination across the same EVA pair
(The diagram illustrates these roles with a mix of EVA3000, EVA4000, EVA5000, EVA6000 and EVA8000 arrays.)
2 fabric configuration
Shared SAN for host and CA traffic.
Server1 Server2
Managemen Managemen
t Server t Server

All EVA ports are used for


host IO and some also for
A B CA IO A B

EVA1 EVA2
EVA CA SAN configurations
Physically separated 6-fabric configuration
• CA traffic only goes through the dedicated CA SAN.
• No cross-site host IO is possible -> CLX EVA
• 4 ports per EVA are used for host IO, 4 ports per EVA for CA IO.
CA configurations: dedicated CA fabrics
Physically separated & zoned 4-fabric configuration
• CA traffic only goes through the CA SAN if the EVA ports are properly zoned off in the host SAN.
• Cross-site host IO is possible -> stretched cluster
• 4 ports per EVA are used for hosts, 4 ports per EVA for CA.
CA configuration: dedicated CA zone
Zoned 2-fabric configuration
• CA traffic only goes through the CA zone if the EVA ports are properly zoned off in the host zones.
• Cross-site host IO is possible -> stretched cluster
• 4 ports per EVA are used for hosts, 4 ports per EVA for CA.
Transition Slide
EVA Solutions
Zero downtime backup
Recovering in minutes not hours
• Description
− Data Protector provides no-impact backup by performing the backup on a copy of the production data, with the option to copy or move it to tape.
− NEW with Data Protector 6.0: incremental ZDB for files
• Usage
− Data that requires:
• Non-disruptive protection
• Application-aware
• Zero-impact backup
− SAN protection
• Benefits
− Fully automates the protection process
− All options can be easily configured using simple selections
− The Data Protector GUI permits complete control of the mirror specification
− Administrators can choose the schedule of the backup
(Diagram: HP-UX, Solaris and NT/W2k clients on the client network, a Data Protector server on the SAN, and a P-Vol replicated to an S-Vol for backup.)
Oracle database integration
• What does it do?
− Maps the Oracle DB to
Vdisks, DR Groups etc.
− Replicates all Vdisks of
specified Oracle
Databases
− Allows creating local or
remote replicas
− Easy control via RSM GUI
− Can quiesce and resume
Oracle
− Provides a topology map
– Supported with BC 2.3 / RSM 2.0
– Chargeable RSM option: Application Integration LTU T4390A
Instant recovery for XP and EVA
Recovering in minutes not hours
• Description Client network
− Allows Instant Recovery by retrieving the data HP-UX Solaris NT
directly from the replicas on disk.
W2k
− This technology moves Zero Downtime Backup
a step further, allowing to keep multiple replicas Data
on disk available and rotating Protector
• Usage Server
− Critical data that has to be recovered within SAN
minutes, instead of hours
• Benefits
− Fully automated protection process, including creation and rotation of replicas (e.g. BC1, BC2, BC3 taken at t0, t-1, t-2)
− Disk operations permit non-disruptive, application-aware protection as frequently as once an hour
− Administrators can choose disk protection, tape protection, or scheduled combinations to meet their protection requirements
• Prerequisites
− Data Protector, or
− AppRM (1H07)
HP VLS 300 EVA Gateway
• Seamless integration
– Emulates popular tape drives and libraries
– Same easy-to-use GUI as the VLS6000
– Allows deployment of existing EVA systems for backup use
• Easily scale capacity and performance (SAN-attached gateway nodes 1..n in front of EVA 1..n)
• Utilizes existing infrastructure
– Switches
– Arrays
Application Recovery Manager
• Application support
− First release: Exchange 2003 ,SQL 2000/2005,
NTFS File system
− Future: take over all ZDB/IR integrations DP has
(other Apps, other OSs)
• Array support
− EVA arrays support including copy back restore
− Disk array independence through
VSS/VDS (unmap/swap restore only)
• Features
− Round-robin replicas
− Built-in scheduler
− User Management
− Sophisticated logging and monitoring
• Distributed architecture
− Central management using ‘DP like’ GUI & CLI
− Clustered Cell Server
− Remote client deployment
Application Recovery Manager
(AppRM)
• A new solution has been created that encapsulates and delivers
Data Protector’s VSS functionality
− Announced in May 06, released in Nov 06
− Replacing “Fast Recovery Solutions for Exchange”

• AppRM is
− Disk-based (VSS) replication and restore only, no tape backup possible,
but can be used as pre-exec to 3rd party backup application
• AppRM is based on Data Protector 6.0 code
− DP will offer same feature set as AppRM, whereas AppRM offers only a
subset of DP functionality (ZDB and IR)
• Target customers:
− NON – DP accounts, with the desire of a VSS instant recovery solution,
but no need for a “full” backup software product
− Potential up-sell opportunity to migrate existing backup product to Data
Protector
Application Recovery Manager
• AppRM follows Data Protector licensing scheme
− Capacity based
− More expensive than FRS, especially for only a few larger
systems
− but also more functionality
− TB licenses are based on the source capacity independent
of the number of copies

• T4395A HP StorageWorks AppRM Cell Manager Win LTU


• T4396A HP StorageWorks AppRM Online Backup Win LTU
• T4399A HP StorageWorks AppRM Inst. Recovery EVA 1 TB
• T4400A HP StorageWorks AppRM Inst. Recovery EVA 10 TB

• First version AppRM 6.0


− Aligned with DP version numbering
MetroCluster EVA for HP-UX
• What does it do?
– Provides manual or automated site-failover
for Server and Storage resources
ServiceGuard
for HP-UX • Supported environments:
– HP-UX 11i V1 & 11i V2
Metro Cluster EVA – Serviceguard ≥11.15
• Requirements:
HP Continuous
– EVA Disk Arrays
Access EVA
– Metrocluster
– Continuous Access EVA
– Max 200ms network round-trip delay
– Command View EVA & SMI-S
(DataCenter 1 to DataCenter 2: up to several 100 km)
Cluster extension EVA for Windows
• What does it do?
– Provides manual or automated site-failover for
MSCS on Windows
Server and Storage resources
• Supported environments:
Cluster Extension XP – Microsoft Windows 2003 Enterprise Edition (32-
bit & 64-bit)
– Microsoft Windows 2003 Data Center
HP Continuous Server(64-bit)
Access EVA
– NAS4000 & 9000
– HP Proliant Storage Server
– Microsoft Cluster Service 5.2
– Up to 500km (DataCenter 1 to DataCenter 2)
• Requirements:
– EVA Disk Arrays
– Cluster Extension EVA
– Continuous Access EVA
– Max 20ms network round-trip delay
– Command View EVA & SMI-S
Cluster extension EVA for Linux
• What does it do?
– Provides manual or automated site-failover
Serviceguard/Li
for Server and Storage resources
• Supported environments:
Cluster Extension EVA – Serviceguard for Linux as the cluster service
– SG 11.16.02 with RH EL 4
– SG 11.16.01 for SuSe SLES 9
HP Continuous
Access EVA

• Requirements:
– EVA Disk Arrays
– Cluster Extension EVA
– Continuous Access EVA
(DataCenter 1 to DataCenter 2: up to 500km)
– Max 20ms network round-trip delay
– Command View EVA & SMI-S
Windows 2003 stretched cluster with CA
(Diagram: App A and App B run on cluster nodes at two sites, with the quorum and DR groups A and B replicated by Continuous Access EVA. On a site failure the DR groups are failed over with CA, the servers fail over, and the applications restart after a rescan.)
Cluster Extension EVA (CLX) – manual move of App A
(Diagram: a quorum or witness server arbitrates between the two sites; when App A is moved to the other site's node, CLX fails over DR group A via Continuous Access EVA while App B and DR group B remain where they are.)
Cluster Extension EVA – storage failure
(Diagram: the storage at one site fails; Continuous Access EVA fails DR groups A and B over to the surviving array, and App A and App B continue to run.)
Majority Node Set Quorum
– File Share Witness
• What is it?
− A patch for Windows 2003 SP1 clusters provided by Microsoft (KB921181)

• What does it do?


− Allows the use of a simple file share to provide a vote for an MNS quorum-based 2-
node cluster
• In addition to introducing the file share witness concept, this patch also introduces a
configurable cluster heartbeat (for details see MS Knowledge Brief)

• What are the benefits?
− The "arbitrator" node is no longer a full cluster member.
• A simple file share can be used to provide this vote.
• No single-subnet requirement for the network connection to the "arbitrator".
− One arbitrator can serve multiple clusters. However, you have to set up a separate share for each cluster.
− The "arbitrator" exposing the share can be
• a standalone server
• a different OS architecture (e.g. a 32-bit Windows server providing a vote for an IA64 cluster)
Majority Node Set Quorum – File Share Witness
(Diagram: both cluster nodes obtain a vote from \\arbitrator\share; App A and App B can fail over between the nodes.)

# cluster nodes    # node failures tolerated
2                  0 (1 with MNS file share witness)
3                  1
4                  1
5                  2
6                  2
7                  3
8                  3
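The table follows directly from majority voting; a small Python check, modelling the file share witness as one extra vote available to the surviving nodes:

# A Majority Node Set cluster keeps running as long as a strict majority of
# votes is present; the file share witness adds one extra vote.
def tolerated_node_failures(nodes, fileshare_witness=False):
    witness = 1 if fileshare_witness else 0
    votes = nodes + witness
    needed = votes // 2 + 1                 # strict majority of votes
    # surviving node votes plus the witness vote must still form a majority
    return nodes + witness - needed

for n in range(2, 9):
    print(f"{n} nodes: {tolerated_node_failures(n)} node failures tolerated "
          f"({tolerated_node_failures(n, fileshare_witness=True)} with file share witness)")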
HP SAN certification and support
HP SAN architecture rules: http://www.hp.com/go/sandesign
• HP StorageWorks SAN Design Guide


– Architecture guidance
– Massive configuration support
– Implementation best practices
– Incorporation of new technologies
– Now includes IP storage implementations such as iSCSI, NAS/SAN Fusion and FCIP
• Provides the benefit of HP
engineering when building a
scalable, highly available enterprise
storage network
• Documents HP Services SAN
integration, planning and support
services
The EVA global service portfolio
HP StorageWorks EVA
base product warranty

Foundation Service Solution


• 2 years parts, 2 years of labor and 2 years
of hardware onsite 24x7, 4-hour response
for EVA controller pair and drive shelves
(enclosures) as defined by product SKU and
the Hard Disk Drives purchased with the
array
• 2 years, 24x7, 2-hour response phone-in
and updates for Virtual Controller Software
(VCS)
• Array Installation and Startup (includes
drive shelves and hard disks purchased
with the EVA)
HP care pack services for storage
• HP H/W Support
− 4-hour 24 x 7
− Years 3, 4, and 5
• HP S/W Support
− 24 x 7 technical support
− Software product updates
• Premium Hardware &
Software Services
− Support Plus 24
− Proactive 24
− HP Critical Service
Why should you choose the new
EVA?
Inherited from the current EVA
• Easiest management and setup
• Virtualization allows better use of resources and automatic
striping to prevent hot spots
• Dynamic LUN expansion
• Full set of local and remote copy options
• Known solid HP support
Added with the new EVA
• Easier implementation and coexistence due to support of
industry standard multipathing and native HBAs
• Higher performance
• Higher capacities – The EVA8000 supports architecturally >
200TB
Transition Slide
HP StorageWorks™ – the Right Choice