
IBM Storwize V7000

Technical Overview
Bill Wiegand
Consulting I/T Specialist
North America Storage Specialty Team, IBM

2014 IBM Corporation

Agenda

Virtualization Basics
Next Generation Storwize V7000 Hardware
What's new in V7.3 Software
What's new in V7.4 Software


Storwize V7000 Built on Virtualization

[Diagram: three configurations of servers attached to Storwize V7000 virtualized storage pools, backed by internal drives, an external array, or both during migration]
Internal
Increases disk utilization
and reduces hot spots
Improves administrative
productivity by up to 2X

External
External arrays inherit all
Storwize V7000 functions
Extends life of existing
storage arrays
Optional feature

Migrate
Migrate data from external
LUNs to Storwize V7000
Thin or fully provisioned
target volumes
Included at no cost for 45
days from migration start

See https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003798 for the list of supported external storage arrays



Logical Building Blocks


The Storwize V7000 uses basic storage units called managed disks and
collects them into one or more storage pools
These storage pools then provide the physical capacity to create volumes for
use by hosts

Volumes

Storage Pool

Managed Disks (MDisks)


Managed Disks
The basic unit of storage in the Storwize V7000 is the managed disk/MDisk
A managed disk must be protected by RAID to prevent loss of the entire
storage pool
In Storwize V7000 we can have two different types:
Internal Array MDisk
The internal RAID implementation inside the system takes drives and builds a RAID array
with protection against drive failures

External SAN attached MDisk


An external storage system provides the RAID function and presents a Logical Unit (LU) to
the Storwize V7000


Storage Pools
A storage pool is a collection of managed disks
The primary property of a storage pool is the extent size, which defaults to 1GByte when using the GUI
This extent size is the smallest unit of allocation from the pool

When you add managed disks to a pool they should have similar performance
characteristics:

Same RAID level


Roughly the same number of drives per array
Same drive type (SAS, NL SAS, SSD except if using Easy Tier)
Similar performance characteristics for external storage system MDisks

This is because data from each volume will be spread across all MDisks in
the pool, so the volume will perform approximately at the speed of the slowest
MDisk in the pool
The exception to this rule: when using Easy Tier you can have 2-3 different tiers of
storage in the same pool, but the MDisks within each tier should still have the
same performance characteristics; this is less of a concern with V7.3 code due to
the Automatic Storage Pool Balancing feature
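The extent mechanics described above can be sketched in Python. This is a toy model: the class and method names, and the simple round-robin policy, are illustrative and not the Storwize CLI or internals.

```python
class StoragePool:
    """Toy model of an extent-based storage pool (illustrative, not Storwize internals)."""

    def __init__(self, extent_size_gib=1):
        self.extent_size_gib = extent_size_gib   # smallest unit of allocation from the pool
        self.mdisks = {}     # MDisk name -> free extent count
        self.volumes = {}    # volume name -> ordered list of backing MDisks

    def add_mdisk(self, name, capacity_gib):
        # An MDisk contributes capacity to the pool in whole extents.
        self.mdisks[name] = capacity_gib // self.extent_size_gib

    def create_volume(self, name, size_gib):
        needed = -(-size_gib // self.extent_size_gib)   # ceiling division
        if needed > sum(self.mdisks.values()):
            raise RuntimeError("pool out of space")
        allocation, names, i = [], sorted(self.mdisks), 0
        while len(allocation) < needed:
            # Round-robin striping: a volume spreads its extents across
            # all MDisks in the pool.
            mdisk = names[i % len(names)]
            i += 1
            if self.mdisks[mdisk] > 0:
                self.mdisks[mdisk] -= 1
                allocation.append(mdisk)
        self.volumes[name] = allocation
        return allocation

pool = StoragePool(extent_size_gib=1)
for m in ("mdisk0", "mdisk1", "mdisk2"):
    pool.add_mdisk(m, capacity_gib=100)
layout = pool.create_volume("vol_a", size_gib=6)
print(layout)   # extents alternate mdisk0, mdisk1, mdisk2, mdisk0, ...
```

Because the volume touches every MDisk in the pool, its performance tracks the slowest MDisk, which is why the pool should contain MDisks of similar performance.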

Volumes
[Diagram: Volume A (extents A1-A9), Volume B (B1-B8), and thin-provisioned or compressed Volume C (C1-C8), with their extents striped across the managed disks of a storage pool]

When the managed disks are added into storage pools, they are split into chunks of storage known as extents
The size of these extents is a property of the storage pool (default is 1GByte)
Whenever you create a new volume, you must pick a single storage pool to provide the physical capacity
By default the created volume will stripe all of its data across all the managed disks in the storage pool, as shown in the diagram
Thin-provisioned and compressed volumes consume extents only as actual data is written to disk
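The "consume extents only as data is written" behavior can be shown with a minimal Python sketch; the names here are hypothetical, not actual product code.

```python
class ThinVolume:
    """Toy model: a thin or compressed volume allocates extents only on first write."""

    def __init__(self, virtual_size_gib, extent_size_gib=1):
        self.extent_size_gib = extent_size_gib
        self.virtual_size_gib = virtual_size_gib   # capacity the host sees
        self.allocated = set()                     # virtual extent indexes backed by real extents

    def write(self, offset_gib):
        ext = offset_gib // self.extent_size_gib
        self.allocated.add(ext)    # first write to an extent grabs real capacity

    def real_capacity_gib(self):
        return len(self.allocated) * self.extent_size_gib

vol = ThinVolume(virtual_size_gib=100)
vol.write(0)
vol.write(50)
vol.write(0)                       # rewriting an extent allocates nothing new
print(vol.real_capacity_gib())     # 2 of the 100 virtual GiB consume real capacity
```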

Virtualization - The Big Picture

Designed to be a redundant, modular, and scalable solution
The system consists of one to four I/O Groups managed as a single entity
A control enclosure with two node canisters (NC) makes up an I/O Group and owns given volumes

[Diagram: hosts on a Storage Area Network accessing volumes, each set of volumes owned by one of up to four control enclosures (two node canisters each); managed disks are a cluster resource]

Virtualization - The Big Picture

Volumes
Single multipath driver (SDD, MPIO, DMP)
Up to 2,048 volumes per I/O group and 8,192 per cluster
Maximum volume size of 256TiB
Image Mode = Native Mode; Managed Mode = Virtualized Mode
iSCSI, FCP, and FCoE host mapping

Hosts
Up to 512 hosts per I/O group and 2,048 per cluster
Up to 512 FC ports per I/O group and 2,048 per cluster
Up to 512 iSCSI sessions per I/O group

Storage Pools
Up to 128 storage pools
Up to 128 MDisks per pool
Up to 4,096 MDisks

Internal Storage
1 to 16 drives (HDD) form an array
1 array = 1 managed disk (MDisk)

External Storage
Multiple storage subsystems
1 LUN = 1 managed disk (MDisk)
Maximum external MDisk size of 2TiB, or greater*

[Diagram: hosts on LAN and SAN mapped to volumes; Storwize V7000 storage pools built from internal SSD, SAS, and NL-SAS storage and from external storage subsystems]

Agenda

Virtualization Basics
Next Generation Storwize V7000 Hardware
What's new in V7.3 Software
What's new in V7.4 Software


Storwize V7000 Hardware Refresh: 2076-524 Control Enclosure

Control enclosure is 2U, same physical size as previous model


Only comes in 24 drive SFF configuration for the control enclosure
and both SFF and LFF configurations for expansion enclosures
Back layout is different to make room for the more powerful canisters

[Rear view: Canister 1 and Canister 2 between PSU 1 and PSU 2]

Storwize V7000 Hardware Refresh: Rear View


1GbE ports

SAS expansion ports

Host Interface slots

Compression
accelerator slot

PSU

PSU
Technician port

12

2014 IBM Corporation

Dual controller/node canisters

Storwize V7000 Hardware Refresh: Exploded View

Canisters

PSU
Fan Cage
Enclosure Chassis
Midplane
Drive Cage
Drives

Storwize V7000 Hardware Refresh: Block Diagram of Node Canister


[Block diagram of node canister: Ivy Bridge E5-2628L-V2 1.9GHz CPU with 4x 16GB DIMMs; PCIe Gen3 (1GB full duplex, 8 lanes); standard Coleto Creek compression engine plus optional second Compression Acceleration card on the mezzanine connector; TPM; 128GB SSD boot drive; quad 1GbE plus USB; HBA slots for 8/16Gb FC or 10GbE; SAS expander with 4 phys to the control enclosure drives on SAS chain 0 and 12Gb/phy links (4 phys each) to expansion enclosure drives on SAS chains 1 and 2; high-speed cross-card communications between canisters]

Storwize V7000 Hardware Refresh: Expansion Card Options


There are three expansion slots numbered 1-3 left to right when viewed from the rear
Ports on a particular card are numbered top to bottom starting with 1
Supported expansion cards
Compression pass-through comes standard with system to enable on-board compression engine

Slot | Supported cards
1 | Compression pass-through, Compression Acceleration card
2 | None, 4 port 8Gb FC, 2 port 16Gb FC, 4 port 10GbE**
3 | None, 4 port 8Gb FC, 2 port 16Gb FC, 4 port 10GbE**

** Only one 10GbE card supported per node canister

Storwize V7000 Hardware Refresh: 8Gb FC Card


Same adapter as used in current Storwize V7000 Models
PMC-Sierra Tachyon QE8
SW SFPs included
LW SFPs optional

Up to two can be installed in each node canister for total of 16 FC ports in control enclosure


Storwize V7000 Hardware Refresh: 16Gb FC Card


Requires Storwize Family Software V7.4
New dual-port 16Gbps Fibre Channel adapter supported on Storwize V7000 Unified, Storwize V7000 Gen2, and SVC DH8
Connect to legacy 8Gbps servers or storage through switches
Overall system throughput largely unchanged
Up to double the single-stream, single-port throughput (to 1.5GB/s); can benefit analytics workloads
Both ports are full-bandwidth 16Gb
SW (standard) and LW (optional) SFPs available
MUST be plugged into a 16Gb FC switch
Auto-negotiating to 8Gb requires RPQ

Storwize V7000 Hardware Refresh: 10GbE Card


The new 4 port 10GbE adapter supports both FCoE and iSCSI
Can be used for IP replication too
Support for one 10GbE adapter in each node canister of the 2076-524
Support for IBM 10Gb optical SFP+ only
FCoE frame routing (FCF) is performed by a CEE switch or passed through to an FC switch
No direct attach of hosts or storage to these ports
Software allows using the FCoE/iSCSI protocols simultaneously, as well as IP replication, on the same port
Best practice is to separate these protocols onto different ports on the card

Storwize V7000 Hardware Refresh: Compression Accelerator Card


New Storwize V7000 model has one on-board compression accelerator standard and
supports volume compression without any additional adapter installed
This configuration will have a pass-through adapter in slot 1 to allow the on-board compression
hardware to be utilized

One additional Compression Accelerator card (see picture) can optionally be installed in
canister slot 1, replacing the pass-through adapter, for a total of two Compression
Accelerator cards per node canister


Storwize V7000 Hardware Refresh: Internal Battery


The battery is located within the node canister
Provides independent protection for each node canister

A 5-second AC power loss ride-through is provided
After this period, if power is not restored, a graceful shutdown is initiated
If power is restored during the ride-through period, the node reverts to main power and the battery returns to its armed state
If power is restored during the graceful shutdown, the system reverts to main power and the node canisters shut down and automatically reboot

A one-second full-power test is performed at boot before the node canister comes online
A periodic battery test (one battery at a time) is performed within the node canister, only if
both nodes are online and redundant, to check whether the battery is functioning properly


Storwize V7000 Hardware Refresh: Internal Battery (2)


Power Failure
If power to a node canister fails, the node canister uses battery power to write cache and state data
to its boot drive
When the power is restored to the node canister, the system restarts without operator intervention
How quickly it restarts depends on whether there is a history of previous power failures
The system restarts only when the battery has sufficient charge to power the node canister for the
duration of saving the cache and state data again
If the node canister has experienced multiple power failures, and the battery does not have sufficient
charge to save the critical data, the system starts in service state and does not permit I/O operations
to be restarted until the battery has sufficient charge

Reconditioning
Reconditioning ensures that the system can accurately determine the charge in the battery. As a
battery ages, it loses capacity. When a battery no longer has capacity to protect against two power
loss events it reports the battery end of life event and should be replaced.
A reconditioning cycle is automatically scheduled to occur approximately once every three months,
but reconditioning is rescheduled or cancelled if the system loses redundancy. In addition, a two day
delay is imposed between the recondition cycles of the two batteries in one enclosure.


Storwize V7000 Hardware Refresh: 2076-24/12F Exp Enclosure


The expansion enclosure front view:

The expansion enclosure back view:


Storwize V7000 Hardware Refresh: Expansion Enclosure


Available in 2.5-inch and 3.5-inch drive models (2076 Models 24F and 12F respectively)
Attach to the new control enclosure using 12Gbps SAS
Mix drive classes within an enclosure, including different drive SAS interface speeds
Mix new enclosure models in a system, even on the same SAS chain
All drives are dual-ported and hot-swappable

Storwize V7000 Hardware Refresh: Expansion Enclosure Cabling


Storwize V7000 Hardware Refresh: SAS Chain Layout


Each control enclosure supports two
expansion chains and each can
connect up to 10 enclosures
Unlike previous Storwize V7000 the
control enclosure drives are not on
either of these two SAS chains
There is a double-width high-speed link
to the control enclosure and SSDs
should be installed in control enclosure
There is as much SAS bandwidth
dedicated to these 24 slots as there is to
other two chains combined
The control enclosure internal drives are
shown as being on port 0 where this matters

SSDs can also go in other enclosures if more than 24 are required for capacity reasons
HDDs can go in the control enclosure if desired
A mix of SSDs and HDDs is fine too

[Diagram: node canister SAS adapter with port 0 (chain 0) to the control enclosure's internal drives over internal SAS links, and ports 1 and 2 (chains 1 and 2) each supporting up to 10 expansion enclosures]

Clustered Storwize V7000

A Storwize V7000 one-I/O-Group system can be expanded with additional control enclosures and clustered into a 2-4 I/O Group system
There is no interconnection of SAS chains between control enclosures; control enclosures communicate via FC through a minimum of two FC ports per node canister
An I/O Group is a control enclosure and its associated SAS-attached expansion enclosures
Up to 21 enclosures per I/O Group
A clustered system can consist of 2-4 I/O Groups
Scale capacity/throughput 4x
Almost 4PB raw capacity and up to 1056 drives in 2-4 42U racks
Non-disruptive upgrades from the smallest to the largest configurations
Purchase hardware only when you need it
No extra feature to order and no extra charge for a clustered system
Configure one system using the USB stick and then add the second using the GUI
Virtualize storage arrays behind the Storwize V7000 for even greater capacity and throughput

[Diagram: a single control enclosure with expansion enclosures expands and clusters into a 2-4 I/O Group system]

Storwize V7000 Hardware Refresh: Enclosure Configuration


Examples
Valid maximum configurations:
Two control enclosures, 40 24F expansions in 4 chains of 10 -> 1008 SFF drives
Four control enclosures, 40 24F expansions in 4 chains of 10 -> 1056 SFF drives
Four control enclosures, 40 24F expansions in 8 chains of 5 -> 1056 SFF drives
Four empty control enclosures, 80 12F expansions in 8 chains of 10 -> 960 LFF drives
Four full control enclosures, 80 12F expansions in 8 chains of 10 -> 960 LFF, 96 SFF =
1056 Total


Clustered System Example: 2 I/O Groups and Max of 40 SFF Expansion Enclosures

[Diagram: I/O Group 0 and I/O Group 1, each a control enclosure (SAS chain 0) with expansion enclosures on SAS chains 1 and 2]

Clustered System Example: 4 I/O Groups and Max of 40 SFF Expansion Enclosures

[Diagram: I/O Groups 0-3, each a control enclosure (SAS chain 0) with expansion enclosures on SAS chains 1 and 2]

Clustered System Example: 4 I/O Groups and Max of 80 LFF Expansion Enclosures

[Diagram: I/O Groups 0-3, each a control enclosure (SAS chain 0) with LFF expansion enclosures on its SAS chains]

Clustered Storwize V7000

Default behavior is a storage pool per I/O Group per drive class, with volumes owned by the same I/O Group
Optionally you can create a storage pool with MDisks from multiple I/O Groups; volumes will by default be balanced across I/O Groups and node canisters
Expansion enclosures are connected through one control enclosure and can be part of only one I/O group
Storage pools can contain MDisks from more than one I/O group
Inter-control enclosure communication happens over the SAN
All MDisks are accessed via the owning I/O group
A volume is serviced by only one I/O group

[Diagram (all cabling shown is logical): I/O Groups 0 and 1, each a control enclosure with two node canisters and SAS-attached expansion enclosures; Storage Pools A and C built from MDisks within one I/O Group, Storage Pool B spanning MDisks from both]

New Storwize V7000: Migration and Investment Protection


Can mix new and existing Storwize V7000 systems in a cluster
Provides complete protection for existing Storwize V7000 investments
All existing and new storage can be accessed by any host

Migration from existing system with no downtime at all


For systems that support non-disruptive volume move (NDVM)
No competitive system can make similar claim

New Storwize V7000 can virtualize existing Storwize V7000


Provides conventional Storwize family migration using standard Image mode
virtualization capability

[Diagram: an existing V7000 Gen1 or other Storwize/SVC system can be virtualized by, clustered with, or replicated to a new V7000 Gen2 system]

Hardware Compatibility within the Storwize family


Expansion Enclosures
The V7000 Gen2 expansion enclosures can only be used with a V7000 Gen2 control enclosure
The V7000 Gen1 expansion enclosures can only be used with a V7000 Gen1 control enclosure
The V3x00/V5000/SVC-DH8 and Flex System V7000 expansion enclosures cannot be used with a
V7000 Gen2 control enclosure, and drives cannot be swapped between models either
Note that Flex System V7000 will not support V7.3

Control Enclosures
V7000 Gen2 control enclosures can cluster with V7000 Gen1 control enclosures
Allows for non-disruptive migration from Gen1 to Gen2 or long-term system growth
No clustering between V7000 Gen2 and V3x00/V5000 and Flex System V7000

Remote Copy
No remote-copy restrictions as we can replicate amongst any of the SVC/Storwize models as well as
FlashSystem V840/V90000 systems

Virtualization
Fibre-channel and FCoE external storage virtualization with appropriate HBAs
No iSCSI external storage virtualization
No SAS host support or SAS storage support with 2076-524


Technician Port (1)


Technician port is marked with a T (Ethernet port 4)
Technician port is used for the Initialization of the system
As soon as the system is installed and the user connects to the Technician Port they will be directed to
the initialization Welcome panel
This port will run a dedicated DHCP server in order to facilitate service/maintenance and initial set up
Service IP will NOT be associated with the Technician Port, but will continue to be assigned to
Ethernet port 1 (lowest Ethernet port for management)

If DHCP is configured on the laptop (nearly all are), it will automatically configure itself and bring up the initialization panel
If not, you will need to manually set the IP of the laptop's Ethernet adapter to an address in the range 192.168.0.2 to 192.168.0.20
NOTE: The technician port is disabled if connected to a LAN

Technician Port (2)

1) Example panel shown if the enclosure has a stored cluster ID while attempting to create a cluster

2) Waiting panel, shown while the system initialization completes

Technician Port (3)


Storwize V7000 Gen1 vs Gen2


Attribute (per control enclosure) | Storwize V7000 | Storwize V7000 Gen2
CPU | 8 cores | 16 cores
Memory/cache | 16GB | 64GB to 128GB
Host I/O | 4x 1GbE; 8x 8Gb FC; 4x 10GbE (3xx model) | 6x 1GbE; 8x to 16x 8Gb FC; 4x to 8x 16Gb FC; 8x 10GbE
Compression resources | 6 cores (RtC not recommended with Gen1 systems) | 8 cores; on-board CA engine; optional second CA engine
Drive expansion | Up to 9 expansions (240 drives per controller) | Up to 20 expansions (504 drives per controller)
Drive fabric | 6Gb SAS | 12Gb SAS

IBM Confidential

Storwize Family Performance Comparison

Metric | V3700 | V3700T | V5000 | V7000 | V7000 Gen2
Cache Read MB/s | 3,300 | 5,950 | 5,700 | 5,500 | 11,000
Cache Write MB/s | 1,100 | 2,250 | 2,450 | 3,300 | 7,000
Cache Read IOPs | 200,000 | 300,000 | 500,000 | 700,000 | 900,000
Cache Write IOPs | 48,000 | 100,000 | 200,000 | 200,000 | 300,000
Disk Read MB/s | 2,500 | 3,850 | 5,500 | 3,800 | 10,000
Disk Write MB/s | 815 | 1,450 | 2,400 | 2,270 | 4,000
Disk Read IOPs | 45,000 | 65,000 | 85,500 | 130,000 | 240,000
Disk Write IOPs | 12,300 | 16,200 | 18,200 | 29,000 | 50,000
70/30 Mixed IOPs | 23,000 | 37,500 | 40,500 | 52,000 | 100,000

Storwize Family Performance Comparison with Compression


Metric | 2076-124 Worst Case | 2076-124 Best Case | 2076-524 (Gen2) Worst Case | 2076-524 (Gen2) Best Case
Read Miss IOPS | 2K | 44K | 46K | 149K
Write Miss IOPS | 2K | 17K | 31K | 78K
70/30 Miss IOPS | 2K | 33K | 36K | 115K
Read Miss MB/s | 800 (2076-124) | | 2,100 (2076-524) |
Write Miss MB/s | 420 (2076-124) | | 1,800 (2076-524) |

All numbers shown are actual measurements at 65% compression
Worst Case = 100% random workload that defeats the RtC algorithm; not application-realistic
Best Case = pseudo-random workload ideal for the RtC algorithm, similar to database-style workloads; actual RtC performance is application-dependent
All optional hardware upgrades fitted: additional memory and the Compression Accelerator feature

New Simplified Storwize V7000/5000 Licensing Structure


License Per Enclosure

Option 1: FLEXIBLE OPTIONS (Controller / Expansion / External)
Base plus individually selectable Advanced Functions: Easy Tier, FlashCopy, Remote Mirror, Compression*

Option 2: FULL BUNDLE (Controller / Expansion / External)
Base plus the Full Bundle: Easy Tier, FlashCopy, Remote Mirror, Compression*

* Storwize V7000 only

Summary
One license/PID each for Controller, Expansion, and External Data Virtualization
Advanced Functions are identically priced for Controller, Expansion, and External
Advanced function feature codes, when selected, are required for all enclosures
No capping and no complexity
Cheaper to buy the bundle than individual feature codes (no upgrade from individual codes to the full bundle)

Storwize Family Comparison: V3500 | V3700 | V5000 | V7000 | V7000 Gen2

Machine type | 2071 (Machine Code) | 2072 (Machine Code) | 2077/2078 (Software) | 2076 (Software) | 2076 (Software)
RAM (per node canister) | 4GB | 4GB or 8GB | 8GB | 8GB | 32GB or 64GB
Expansion enclosures (per control enclosure) | None | Up to 4 | Up to 6 | Up to 9 | Up to 20
SSD support | No | Yes | Yes | Yes | Yes
Licensed function enforcement | None | Keys | Honor | Honor | Honor

[Remainder of the table is garbled in extraction: further rows compare standard and optional host interfaces (6Gb SAS and 1Gb iSCSI standard on the V3x00/V5000, with 8Gb FC and 10Gb iSCSI/FCoE options up the range), licensed function trials, FlashCopy (64 targets on the entry models, licensed above), Turbo Performance, Remote Copy, Easy Tier, system clustering (2 control enclosures on V5000, 4 on V7000 and V7000 Gen2), general external virtualization and data migration licensing, compression (V7000 models only, with standard compression hardware plus an optional extra engine on Gen2), and NAS (via V7000 Unified)]

Storwize V7000 Gen2 Functionality


Data Migration Included
Move data between storage pools and/or storage tiers non-disruptively
Migrate to new disk hardware when old hardware comes off lease

Thin Provisioning Included


Allows storage optimization by consuming real capacity only when data is written

Volume Mirroring Included


Mirror data between storage pools for high availability
Add mirror to convert to thin-provisioning to reclaim capacity
Add mirror to migrate to another storage pool or performance tier

Storage Pool Balancing Included


Automatically balances workload across MDisks in a storage pool


Storwize V7000 Gen2 Functionality


Easy Tier Optional
Allows for management of hotspots automatically by migrating extents from
spinning drives to solid state drives as needed for higher performance
FlashCopy Optional
Allows creation of point-in-time copies of volumes for backup and/or test
purposes
FlashCopy Manager optionally available to help manage
Metro and/or Global Mirror Optional
Allows you to replicate data synchronously or asynchronously between up to
four Storwize V7000 systems and/or SVC clusters
Compression Optional
Allows for in-band compression of data to minimize use of physical capacity
External Virtualization Optional
Allows you to bring existing external fibre channel disk systems under
virtualization software providing access to all the functions of the above
features


Statements of Direction
The cache upgrade feature on the new Storwize V7000 provides performance benefit only
when Real-time Compression is used. IBM intends to enhance IBM Storwize Family
Software for Storwize V7000 to extend support of this feature to also benefit uncompressed
workloads.
The second CPU with 32 GB memory feature on SVC Storage Engine Model DH8 provides
performance benefit only when Real-time Compression is used. IBM intends to enhance
IBM Storwize Family Software for SVC to extend support of this feature to also benefit
uncompressed workloads.


Agenda

Virtualization Basics
Next Generation Storwize V7000 Hardware
What's new in V7.3 Software
What's new in V7.4 Software


Easy Tier and Storage Analytics Engine Together


Three Tier Easy Tier
Hybrid Storage Pool of any
combination of Flash/SSD, Enterprise
Disk, Nearline Disk
Easy Tier automatically moves the
most active extents to Flash/SSD to
improve performance

Storage Analytics Engine


Recommends migrating whole volumes
between storage pools
Included with VSC

Tier 0 - Flash
Highest performance Flash/SSD, or
combination of Flash/SSD and HDD
leveraging EasyTier technology

Tier 1 - Enterprise
High performance HDD, possibly from older
storage systems and lower priority workloads

Tier 2 - Nearline
Cost effective, high capacity HDD for
workloads with lower performance
requirements


IBM Storwize Family delivers Easy, Flash-Optimized Solutions

* Audited Storage Performance Council benchmark

Easy Tier v3: Support for up to 3 Tiers


Support any combination of 2-3 tiers
SSD is always Tier 0, and only SSD can be Tier 0
Note that ENT is always Tier 1, but NL can be Tier 1 or Tier 2
ENT is Enterprise 15K/10K SAS or FC; NL is NL-SAS 7.2K or SATA

Tier 0 | Tier 1 | Tier 2
SSD | ENT | NL
SSD | ENT | none
SSD | NL | none
none | ENT | NL
SSD | none | none
none | ENT | none
none | none | NL

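The combination table above reduces to a simple placement rule, sketched here in Python. This is an illustrative reading of the table, not IBM code, and the function name is made up.

```python
def assign_tiers(classes):
    """Map the drive classes present in a pool to Easy Tier tiers, following
    the combination table above. Returns {class: tier number}."""
    tiers = {}
    if "SSD" in classes:
        tiers["SSD"] = 0                 # SSD is always, and only, Tier 0
    if "ENT" in classes:
        tiers["ENT"] = 1                 # Enterprise is always Tier 1
    if "NL" in classes:
        if "ENT" in classes:
            tiers["NL"] = 2              # behind ENT, nearline drops to Tier 2
        elif "SSD" in classes:
            tiers["NL"] = 1              # SSD + NL only: NL serves as Tier 1
        else:
            tiers["NL"] = 2              # NL alone sits at Tier 2
    return tiers

print(assign_tiers({"SSD", "ENT", "NL"}))   # {'SSD': 0, 'ENT': 1, 'NL': 2}
print(assign_tiers({"SSD", "NL"}))          # {'SSD': 0, 'NL': 1}
```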

Automated Storage Pool Balancing


Extents moved between MDisks of the
same tier to balance workload

Storage Pool Balancing applies to


Single or Multi-tier Pools

Balances I/O skew and capacity utilization


Workload performance
Additional capacity added to the Pool

Proactively avoids hot spots


Eliminates need to manually re-stripe
extents
Adding capacity triggers balancing to start
within minutes
Enabled by default in Storwize Family
Included in base virtualization software


[Diagram: extents migrated between MDisk 1, MDisk 2, and MDisk 3 (Flash/SSD, Enterprise disk, or Near-line disk) to balance workload across the MDisks]
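The balancing behavior can be illustrated with a toy Python sketch. The real feature weighs I/O skew as well as capacity; this model moves extents by count only, and the function name is illustrative.

```python
def rebalance(extents_per_mdisk):
    """Toy model of same-tier pool balancing: move one extent at a time from
    the fullest MDisk to the emptiest until counts differ by at most one."""
    counts = dict(extents_per_mdisk)
    moves = []
    while max(counts.values()) - min(counts.values()) > 1:
        src = max(counts, key=counts.get)   # fullest MDisk donates an extent
        dst = min(counts, key=counts.get)   # emptiest MDisk receives it
        counts[src] -= 1
        counts[dst] += 1
        moves.append((src, dst))
    return counts, moves

# A newly added MDisk starts empty; balancing spreads extents onto it.
counts, moves = rebalance({"mdisk1": 90, "mdisk2": 90, "mdisk3": 0})
print(counts)          # {'mdisk1': 60, 'mdisk2': 60, 'mdisk3': 60}
print(len(moves))      # 60 single-extent migrations
```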

Real-time Compression What is it?


A compressed Volume is a third type of Volume
Regular (fully allocated)
Thin Provisioned
Compressed

A compressed volume is a kind of thin provisioning
Only allocates and uses physical storage to store the compressed data for the volume

Recommend using RtC only with Storwize V7000 Gen2, SVC DH8, and CG8 ("Chubby") node hardware running V7.3 or V7.4

Real-time Compression Support


Support the same as for Thin-Provisioning
Compression supports all TP features (e.g. Auto-expand)
Maximum 512 compressed volume copies per I/O group

Compressed volumes can be source and target of remote copy


relationships and/or FlashCopy mappings
Data between copies is uncompressed

GUI Support for Compressed Volumes


GUI also has a preset for compressed volumes


Pre-Decide Algorithm in Core Engine V7.3 Enhancement


New feature avoids compressing data with expected low savings
Un-compressible patterns that may be found in compressible workloads will not
consume system resources without providing value
Engine automatically generates a compressibility estimation based on advanced
finger-printing technology (developed with IBM Haifa Research Lab)
Bitmap Visualization: Compressible Data

Improves write performance

Very High Compression Rate

Use Huffman
Compression
Normal Compression Rate

Compressibility
Estimate

Use Deflate
Compression
Very Low Compression Rate

Only Store Data

53

2014 IBM Corporation

Bitmap Visualization: Incompressible Data
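A rough Python analogue of the pre-decide idea, using zlib on a small sample as a stand-in for IBM's fingerprinting. The thresholds and function name are made up for illustration; they are not the product's actual cut-offs.

```python
import os
import zlib

def predecide(chunk, hi=0.60, lo=0.10):
    """Estimate savings on a small sample and route the chunk accordingly.
    Thresholds are illustrative assumptions, not IBM's."""
    sample = chunk[:4096]   # fingerprint a sample, not the whole chunk
    est = 1 - len(zlib.compress(sample, 1)) / len(sample)
    if est >= hi:
        return "huffman"    # very high compression rate: cheap Huffman pass
    if est >= lo:
        return "deflate"    # normal compression rate: full Deflate
    return "store"          # very low compression rate: store the data as-is

print(predecide(b"A" * 8192))        # -> huffman (highly repetitive data)
print(predecide(os.urandom(8192)))   # -> store (random data is incompressible)
```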

Real-time Compression Details

Compression operates via RACE (the Random Access Compression Engine) running in a controlled binary executable
RACE sits parallel to Thin Provisioning in the software stack and acts in coordination with it
Thin Provisioning provides the space efficiency on disk by allocating only what RACE requires
The back end can be internal or external storage

[Diagram: software stack from clients through Front End, Remote Copy, Cache, FlashCopy, Mirroring, Thin Provisioning, and Virtualization to the Back End and storage, with the RACE component alongside Thin Provisioning]

Real-time Compression Random Access


RACE takes a variable data stream and produces a fixed output
The compressed volume has a consistent logical layout, avoiding the overhead of traditional compression

Example: a 100-byte update to a 1MB chunk of a compressed volume

Step | Traditional Compression | IBM Real-time Compression
Read | 1 MB | 0 MB
Decompress | 1 MB | 0 MB
Update | 100 bytes | 0 bytes
Compress | 1 MB | 100 bytes
Write | 1 MB | <100 bytes
Total I/O | 2 MB | <100 bytes

Real-time Compression Temporal Locality


Applications make multiple updates to data
Traditional and post-process compression use fixed-size chunks and compress each update based on its location on the volume
RACE compression acts on data written around the same time (temporal locality), not according to location
Temporal locality is more closely related to real application operations
RACE takes advantage of the structure of the data and its application-level relations for better compression efficiency and performance

[Diagram: a location-based compression window over volume addresses vs a temporal compression window over updates 1, 2, 3 arriving in time]

Progressive Compressed Block Write

- Each write from the host is compressed independently
- The compression rate of the resulting block is nearly identical to the ratio achieved by compressing the entire data in one operation
- The compression dictionary is preserved between the independent writes
- Example with a 32K grain: host writes of 64K, 4K, and 32K compress to 20K, 2K, and 7K respectively
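The dictionary-preservation point can be demonstrated with zlib's streaming interface: each write is compressed and flushed as its own complete block, yet the compression state carries over, so the total stays close to compressing the whole grain in one shot. The sizes and data here are illustrative, not the RACE implementation.

```python
import zlib

data = b"The quick brown fox jumps over the lazy dog. " * 2000   # ~90 KB, compressible
writes = [data[:65536], data[65536:69632], data[69632:]]         # 64K, 4K, and the rest

one_shot = zlib.compress(data)   # whole grain compressed in one operation

# Progressive: each write is emitted as a complete deflate block,
# but the compression dictionary/state is preserved between writes
comp = zlib.compressobj()
progressive = b""
for w in writes:
    progressive += comp.compress(w)
    progressive += comp.flush(zlib.Z_SYNC_FLUSH)
progressive += comp.flush()

assert zlib.decompress(progressive) == data   # still one valid stream
print(len(data), len(one_shot), len(progressive))
```

The progressive stream pays only a few bytes of flush overhead per write, which is why the resulting compression rate is nearly identical to the one-shot case.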


Compression Implementation Guidelines

- Which workloads are good candidates for compression?
  - More than 45% savings: highly recommend compressing
  - Between 25% and 45% savings: recommend evaluating the workload with compression
  - Less than 25% savings: recommend avoiding compression

- Common workloads suitable for compression
  - Databases (OLTP/DW): DB2, Oracle, MS-SQL, etc.
  - Applications based on databases: SAP, Oracle Applications, etc.
  - Server virtualization: KVM, VMware, Hyper-V, etc.
  - Email systems: Notes, MS-Exchange, etc.
  - Other compressible workloads: engineering, seismic, collaboration, etc.

- Common workloads NOT suitable for compression
  - Workloads using pre-compressed data types such as video, images, audio, etc.
  - Workloads using encrypted data
  - Heavy sequential-write-oriented workloads
  - Other workloads using incompressible data or data with a low compression rate
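The savings thresholds above can be encoded as a trivial helper (thresholds taken straight from the guideline; the function name is ours):

```python
def compression_recommendation(estimated_savings_pct: float) -> str:
    """Map an estimated capacity-savings percentage to the guideline."""
    if estimated_savings_pct > 45:
        return "compress"    # highly recommended
    if estimated_savings_pct >= 25:
        return "evaluate"    # test the workload with compression first
    return "avoid"           # not worth enabling compression

for pct in (60, 35, 10):
    print(pct, compression_recommendation(pct))
```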


Comprestimator Utility

- Comprestimator is a host-based utility for fast estimation of a block device's compression ratio
- Objectives:
  - Run over a block device
  - Estimate the portion of non-zero blocks in the volume and the compression rate of the non-zero blocks
- Performance:
  - Runs fast: under 60 seconds, no matter what the volume size is
  - Provides a guarantee on the estimate: ~5% maximum error
  - The guarantee can be improved with more samples (longer running time)
- Method:
  - Random sampling and compression throughout the volume
  - Collects enough non-zero samples to reach the desired confidence
  - More zero blocks means a slower run (it takes more time to find non-zero blocks)
  - Mathematical analysis gives the confidence guarantees
- Note: the tool estimates compression for migration of a volume into RtC (data at rest)
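A minimal sketch of the sampling approach on a synthetic volume. `read_block` is a stand-in for reading a block from the device, and the real tool's sampling and confidence mathematics are more sophisticated than this.

```python
import random
import zlib

def estimate(read_block, n_blocks, samples=200):
    """Randomly sample blocks; report the fraction of all-zero blocks
    and the compression rate of the non-zero ones."""
    random.seed(0)                       # reproducible run
    zero = raw = packed = 0
    for _ in range(samples):
        block = read_block(random.randrange(n_blocks))
        if not any(block):               # all-zero block: thin provisioning covers it
            zero += 1
            continue
        raw += len(block)
        packed += len(zlib.compress(block))
    return zero / samples, (packed / raw if raw else 1.0)

# Synthetic volume: 30% zero blocks, the rest highly compressible text
volume = [bytes(4096) if i % 10 < 3 else (b"record %04d " % (i % 100)) * 341
          for i in range(1000)]

zero_frac, rate = estimate(lambda i: volume[i], len(volume))
print(round(zero_frac, 2), round(rate, 3))
```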


Sample Comprestimator Output

- Shows: sample size, device, current data set size, size after compression, overall space saved, overall space savings, savings by Real-time Compression, savings by Thin Provisioning (all compressed volumes are thinly provisioned), and % error

Sample#  Device    Current    Compressed  Overall Space  Overall      Compression  Thin Prov.   Error
                   size (GB)  Size (GB)   Saved (GB)     Savings (%)  Savings (%)  Savings (%)  Range %
2348     /dev/sda  8.000      2.143       5.857          73.2%        56.9%        37.8%        5.7%
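The two savings columns compose multiplicatively rather than adding up: thin provisioning first drops the zero space, then compression shrinks what remains. A quick check of the sample row, assuming that is how the tool derives the overall figure:

```python
current, compressed = 8.000, 2.143   # GB, from the sample row
tp, rtc = 0.378, 0.569               # thin-provisioning / compression savings

# 8.000 GB -> thin provisioning keeps 62.2% -> compression keeps 43.1% of that
expected = current * (1 - tp) * (1 - rtc)
overall = 1 - compressed / current

print(round(expected, 2))        # ~2.14 GB, matching the reported 2.143
print(round(overall * 100, 1))   # 73.2 (%), matching Overall Savings
```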

Agenda

- Virtualization Basics
- Next Generation Storwize V7000 Hardware
- What's new in V7.3 Software
- What's new in V7.4 Software


Data at Rest Encryption

- Gen2 versions of Storwize V7000 now support encrypting data on internal drives
  - HDD and SSD drives in control and expansion enclosures
- Encryption is performed in the control enclosure node canisters
  - Applies to all existing drives: no need to buy new drives
  - Complies with FIPS-140 but is not certified
- Operates with all existing functions, including Real-time Compression and Easy Tier
- No performance impact
- Requires Storwize Family Software V7.4


Storwize V7000 Hardware Refresh: Block Diagram of Node Canister

Key components shown in the diagram:
- Intel Ivy Bridge 1.9GHz E5-2628L-V2 processor; PCIe V3 links (1GB/s full duplex, 8 lanes)
- 4x 16GB DIMM (data in cache is not encrypted)
- Coleto Creek compression acceleration (standard), plus an optional 2nd Compression Acceleration Card
- PLX PCIe switches, DMI link, and TPM
- Boot device: 128GB SSD (the fire hose dump of cached writes to the flash drive is not encrypted)
- Quad 1GbE ports, HBAs (8Gb FC or 10GbE), 1GbE, and USB
- High-speed cross-card communications between node canisters
- SAS expander via the mezzanine connector to control enclosure drives on SAS chain 0 (4 phys)
- SAS controller (12Gb/phy, 4 phys per chain) to expansion enclosure drives on SAS chains 1 and 2; it encrypts/decrypts as data is written to or read from the physical drives

Implementing Encryption

- Create a new encrypted pool
- For existing data, move volumes from the existing unencrypted pool to the new encrypted pool
  - There is no convert-in-place function to encrypt existing pools
  - May require additional capacity as swing space

Using Encryption

- Encryption is enabled at the array level
  - When enabled in a system, all new arrays/MDisks are created encrypted by default
- Storage pools contain multiple arrays/MDisks
  - Volumes are usually striped across the MDisks in a storage pool
  - With Easy Tier, volumes migrate among MDisk tiers in a pool
- To ensure all data is properly protected, encrypt all or none of the arrays/MDisks in a pool
- All volumes created in an encrypted pool are automatically encrypted
- No new limitations on replication or movement of volumes between pools
- If using FS840 or FS900 for Tier 0 in an Easy Tier pool, ensure encryption is enabled on it

Encryption Licenses

- Encryption is a licensed feature
  - It uses key-based licensing similar to the V3x00
  - There are no trial licenses for encryption; otherwise, when the license ran out, the customer would lose access to their data
  - A license must be present for each V7000 Gen2 control enclosure before enabling encryption
  - Any Gen2 control enclosure added after encryption is enabled must have a license for encryption
    - addnode will fail if the new control enclosure does not have the correct license
  - It is possible to add licenses to the system for enclosures that are not currently part of the system
- Licenses are generated by DSFA based on the serial number/MTM of the enclosure


Enabling/Disabling Encryption

- Run chencryption -usb enable
- Insert 3 or more IBM USB flash devices into the system
- Run chencryption -usb newkey -key prepare
  - This validates the USB flash devices
  - Generates new master keys and writes them to a file on each USB device
  - Writes keys to the SAS controller hardware, but does not yet enable them
  - Stores the number of USB flash devices the keys have been written to
  - Any failure here will fail this command
- Run chencryption -usb newkey -key commit
  - This will only work if the -key prepare step worked
  - Enables the keys in the SAS controller hardware

- Disabling encryption on an array/MDisk
  - Run mkarray with the -encrypt no parameter
  - There is no option in the GUI

- To disable encryption entirely, you must meet the following criteria:
  - You must have NO encrypted arrays/MDisks
  - This removes the encryption key from the SAS controller, restoring the system to a known state

Encryption Key Management

- Storwize V7000 Gen2 has built-in key management
- Two types of keys:
  - Master key (one per system)
  - Data encryption key (one per encrypted array)
- The master key is created when encryption is enabled
  - Stored on USB devices
  - Required to use a system with encryption enabled
  - May be changed
- The data encryption key is used to encrypt data and is created automatically when an encrypted array/MDisk is created
  - Stored in secure memory in the SAS controller hardware
  - Stored encrypted with the master key
  - No way to view the data encryption key
  - Cannot be changed
  - Discarded when an array is deleted (secure erase)


Master Key

- The master key is stored on USB devices
  - At least 3 devices are required when encryption is enabled
- Stored as a simple file
  - May be copied or backed up as necessary
- Should be stored securely
  - It enables access to encrypted data
- The master key is required for a system with encryption enabled
  - The system will not operate without access to the master key, regardless of whether or not any arrays are encrypted
  - Protect the USB devices holding the master key and consider secure backup copies
- When a node restarts, software obtains the master key:
  - From the other node in the control enclosure, if operational
  - From other nodes in a clustered system
  - From a USB device plugged into the canister


Treatment of USB Devices Holding the Master Key

- USB devices may be permanently plugged into the node canisters
  - Ensures the master key will be available in the event of a system restart
  - Eliminates any delay
  - Enables access to data if a malicious individual removes the entire system
- USB devices may be stored securely apart from the Storwize V7000 system
  - At least one will be required in the event of a system restart (but not for a node restart)
  - May cause a delay in access to data
  - Eliminates the risk of access to data if the system is removed
- USB devices not plugged into node canisters, and any backup copies of the master key file, should be stored securely
- Only IBM USB devices are supported for encryption key use, so order them in eConfig


Encryption Statements of Direction

- IBM intends to extend support for encryption on Storwize V7000 to include externally virtualized storage
- IBM intends to implement encryption on SAN Volume Controller to include externally virtualized storage and SAS-attached storage (including flash drives) in the expansion enclosures
- IBM intends to enhance encryption on Storwize V7000 and SAN Volume Controller to include support of IBM Security Key Life Cycle Manager as an option for managing encryption keys
IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. Information
regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code, or
functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any
future features or functionality described for our products remain at our sole discretion.


Dual RACE

- Dual compression engines for Storwize V7000 Gen2
  - Takes advantage of the multi-core controller architecture
  - Makes better use of the Compression Accelerator cards
  - Delivers up to 2x IOPS for compressed workloads
- Requires 64GB per node/canister and a software upgrade to R7.4 on supported configurations
  - Storwize V7000 with the Cache Upgrade and 2 Compression Accelerator cards
- Compression Accelerator cards are shared between the RACE instances
  - All other compression resources are divided evenly among the RACE instances
  - 2 cores for each RACE instance; memory split evenly
- Dual compression engines are used automatically for compression when the hardware requirements are met
- Strongly recommend always configuring dual Compression Accelerator cards in systems using Real-time Compression

T10 DIF Support

- T10 DIF (Data Integrity Field) is an extension of SCSI
- The Storwize V7000 Gen2 supports T10 DIF between the internal RAID layer and the drives attached to supported enclosures
  - No host-to-back-end-drive support at this time


4K Drive Support

- Hard disk drive companies have been migrating away from the legacy sector size of 512 bytes to a larger, more efficient sector size of 4096 bytes
  - Generally referred to as 4K sectors
  - Also referred to as Advanced Format by IDEMA (the International Disk Drive Equipment and Materials Association)
- SVC/Storwize supports a 4096-byte native drive block size without requiring clients to change their block size
  - Allows intermix of 512-byte and 4096-byte back-end drive native block sizes within an array
  - The GUI represents drives of different block sizes as different classes
- Externally virtualized LUNs must use the legacy sector size of 512 bytes

http://en.wikipedia.org/wiki/Advanced_Format

Child Pools

- A new object that is created from a physical storage pool
  - Allows most of the functions of a traditional storage pool (MDisk group)
  - Size can be specified on creation and modified afterwards
  - Can be used to dedicate a pool, yet have data spread across the larger pool
- The maximum number of storage pools (child and parent combined) is 128
  - Example: 1 parent, 127 children
  - Example: 128 parents
  - Example: 64 parents with 1 child each
- Only volume mirroring can be used to transfer data between child pools
  - The older migration method can only transfer data from parent pool to parent pool
  - Image mode disks cannot make use of child pools
  - No MDisks are associated with child pools
- A child pool can only be created from the command line:
  - mkmdiskgrp -name <name> -size 14 -unit gb -parentmdiskgrp <name>
  - The GUI will display child pools as children of the parent pool

VLAN Support for iSCSI and IP Replication

- When a VLAN ID is configured for the IP addresses used for either iSCSI host attach or IP replication, the corresponding VLAN settings on the Ethernet network and servers must also be configured properly to avoid connectivity issues. Once VLANs have been configured, changes to VLAN settings will disrupt iSCSI and/or IP replication traffic to/from SVC/Storwize.
- When configuring a VLAN for each IP address individually, be aware that if the VLAN settings for the local and failover ports on the two nodes of an I/O group differ, the switches must be configured so that the failover VLANs are also present on the local switch ports; otherwise failover of IP addresses from a failing node to the surviving node will not succeed, and paths to SVC/Storwize storage will be lost during a node failure.


Example lsportip output showing the VLAN field:

IBM_Storwize:Wasabi:superuser>svcinfo lsportip 1
id 1
node_id 1
node_name node1
IP_address 9.71.57.44
mask 255.255.255.0
gateway 9.71.57.1
...
vlan 105
vlan_6
adapter_location 0

New Drives and Additional Storwize V5000 Expansion

- Require Storwize Family Software V7.4

- 6TB 3.5-inch 7,200 RPM 12Gbps NL-SAS drive
  - 50% more capacity supported in all Storwize family systems
  - Enables up to 1.5PB raw capacity in a single Storwize V7000 or 6PB in a clustered system (up to 30PB with compression)
  - Approximately 10% lower $/GB drive cost
  - Great new option for high-capacity big data or archive

- 1.8TB 2.5-inch 10,000 RPM 12Gbps SAS drive
  - Also available packaged in a 3.5-inch carrier for Storwize V3500 and Storwize V3700
  - New intermediate-performance drive option supported in all Storwize family systems

- Additional Storwize V5000 expansion
  - Now supports up to 19 expansion enclosures for up to 480 drives, or 960 drives in a clustered system


Storwize Family Drive Options (code name, vendor, drive family, sizes)

2.5" Small Form Factor (SFF):
- SSD, 6Gb SAS: Ralston Peak (HGST Ultrastar SSD400M, 200/400GB) or Optimus (SmartStorage Optimus TXA2D20, 200/400/800GB)
- SSD, 12Gb SAS: Sunset Cove (HGST Ultrastar SSD800MM, 200/400/800GB)
- 10K, 6Gb SAS: Cobra E (HGST Ultrastar C10K900) or Lightning (Seagate Savvio 10K.6), 300/600/900GB
- 10K, 6Gb SAS: Cobra E Plus (HGST Ultrastar C10K1200) or Ironman (Seagate ST1200MM0007), 1.2TB
- 10K, 12Gb SAS: 300/600/900GB and 1.2TB; Cobra F (HGST Ultrastar C10K1800), 1.8TB (4K sectors)
- 15K, 6Gb SAS: Yellow Jacket (Seagate Savvio 15K.3) or AL12SX (Toshiba MK01GRRB), 146/300GB
- 15K, 12Gb SAS: Cobra F (HGST Ultrastar C15K600), 300/600GB
- 7.2K, 6Gb NL-SAS: Airwalker (Seagate Constellation.2), 500GB**/1TB

3.5" Large Form Factor (LFF):
- 7.2K, 6Gb NL-SAS: Muskie (Seagate Constellation ES, 2TB), Mantaray (Seagate Constellation ES.2, 3TB), Megalodon (Seagate Constellation ES.3, 2/3/4TB), or Mars K (HGST Ultrastar 7K3000, 2/3TB)
- 7.2K, 12Gb NL-SAS: Makara (Seagate ST6000NM0014), 6TB (4K sectors)

No restrictions on mixing of drive types with the same form factor within the same enclosure

** Optional for Flex System V7000 and Storwize V3700/3500 only

Protect Volume on deletion based on last I/O access time

- Based on a global system setting, commands that explicitly or implicitly remove host-volume mappings and/or delete volumes are policed to prevent the removal of mappings to (or deletion of) volumes that are considered active
  - Active means the system has detected recent I/O activity to the volume from any host
- Removal commands fail if the affected volume(s) received more than 0 I/Os from ANY host within a period defined by the user
- This behavior is enabled by a system-wide setting; the default is disabled (i.e. no policing)
- If the user is blocked from deleting/unmapping a volume but does not want to investigate why (i.e. really wants to proceed with the delete), the only option is to disable the policing globally, remove the mapping/volume, and re-enable it
  - Otherwise they have to investigate the volume in question: checking which hosts are mapped, looking at their view of the volume/mapping, and so on
- The -force flag will NOT work with this setting enabled
- The detailed volume view contains a new field indicating (approximately) when the volume was last accessed, by the commands counted for policing purposes; it does not give host-specific info
  - This view is always updated regardless of whether the policing is enabled or not

Storwize Family Software V7.4 Miscellaneous Enhancements

- Increased Global Mirror round-trip latency
  - Supported network latency for the round trip between primary and secondary locations increased from 80ms to 250ms
  - Supports greater distance between sites and lower-quality networks

Supported round-trip latency by partnership type:

Software version    Hardware type                             FC      1Gb IP   10Gb IP
7.3.0 and earlier   All                                       80ms    80ms     10ms
7.4.0               2145-CG8 (with 2nd HBA), 2145-DH8,        250ms   80ms     10ms
                    2076-524
7.4.0               Others                                    80ms    80ms     10ms

- Increased FlashCopy consistency groups (all Storwize family systems)
  - Doubled from 127 to 255

- Support for SMI-S 1.6 (all Storwize family systems)


