
RAID Technology
Metadata - Disk Data Format (DDF)

• DDF – the Common RAID Disk Data Format Specification developed by the SNIA
  Common RAID Disk Data Format Technical Working Group
• The specification defines a standard data structure describing how data is
  formatted across the disks in a RAID group
• The common RAID DDF structure allows a basic level of interoperability, such as
  data-in-place migration, between RAID products from different suppliers
• Adaptec's implementation of DDF is AMF – the Adaptec Metadata Format
• AMF provides the space to expand metadata for new features and array types
  such as RAID 6
Metadata - Adaptec Metadata Format (AMF)

• AMF uses more space than previous metadata formats and sits at the opposite
  'end' of the disk, occupying the largest block addresses instead of the smallest
• Although it uses more space, ARC truncates available disk space to the highest
  100 MB multiple that fits within the physical size of the drive, so there may be
  no difference in usable capacity
• AMF metadata resides in the last 96MB of the disk

It starts at LBA (capacity - 96MB) and ends at the last LBA (capacity - 1)

[Disk layout: User Data from LBA 0 onward, followed by a 64MB Runtime
Workspace and 32MB of Configuration Metadata in the last 96MB of the disk]
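As a rough illustration, the sketch below computes where the AMF region would sit. It assumes 512-byte sectors, and the exact order in which the 100 MB truncation is applied relative to the metadata reservation is an assumption here, not Adaptec's documented behavior.

```python
# Illustrative sketch only (not ARC firmware): locate the AMF metadata
# region and estimate usable capacity, assuming 512-byte sectors and that
# the 100 MB truncation applies to the space in front of the metadata.

SECTOR = 512
MB = 1024 * 1024

def amf_layout(capacity_sectors: int) -> dict:
    meta_sectors = 96 * MB // SECTOR              # last 96MB of the disk
    meta_start = capacity_sectors - meta_sectors  # first LBA of AMF region
    meta_end = capacity_sectors - 1               # last LBA on the disk
    # Truncate the remaining space down to the highest 100 MB multiple.
    usable_bytes = (meta_start * SECTOR // (100 * MB)) * (100 * MB)
    return {"meta_first_lba": meta_start,
            "meta_last_lba": meta_end,
            "usable_bytes": usable_bytes}

# Example: a nominal 1 TB drive.
print(amf_layout(1_000_000_000_000 // SECTOR))
```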
No RAID

• Storage with no RAID logic is often referred to as a Volume or JBOD
• Groups of disks are known as a Disk Array

• Contiguous Data
• No Fault Tolerance
• Minimum 1 Drive

HDD 1 HDD 2 HDD 3 HDD 4

A E I M
B F J N
C G K O
D H L P
No RAID

• Storage with no RAID logic is often referred to as a Volume or JBOD

• Contiguous Data
• No Fault Tolerance
• Minimum 1 Drive

HDD 1 HDD 2 HDD 3 HDD 4

A E I M
B F J N
C G K O
D H L P

All data is lost because there is no redundancy!!


RAID 0

• RAID 0 spreads data across several drives for speed using block-level striping
• High Read and Write performance

• Block Striping
• No Fault Tolerance
• Minimum 2 Drives

HDD 1 HDD 2 HDD 3 HDD 4

A B C D
E F G H
Stripe I J K L
M N O P
RAID 0

• RAID 0 spreads data across several drives for speed using block-level striping
• High Read and Write performance

• Block Striping
• No Fault Tolerance
• Minimum 2 Drives

HDD 1 HDD 2 HDD 3 HDD 4

A B C D
E F G H
Stripe I J K L
M N O P

All data is lost because there is no redundancy!!
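As a minimal sketch of the round-robin mapping behind block striping (the function name is made up and this is not the controller's actual algorithm), the following reproduces the 4-drive diagram above:

```python
# Illustrative RAID 0 block-level striping: logical blocks are dealt
# round-robin across the drives, one block per drive per stripe row.

def raid0_map(logical_block: int, num_drives: int) -> tuple[int, int]:
    """Return (drive_index, stripe_row) for a logical block number."""
    return logical_block % num_drives, logical_block // num_drives

# Blocks A..P in the diagram correspond to logical blocks 0..15.
for b, name in enumerate("ABCDEFGHIJKLMNOP"):
    drive, row = raid0_map(b, 4)
    print(f"block {name} -> HDD {drive + 1}, stripe {row}")
```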


RAID 1

• Writes all data to both drives in the mirrored pair

• Mirroring
• No Striping
• 100% Redundancy
• 50% Capacity Loss
• Requires 2 Drives

HDD 1 HDD 2

A A
B B
C C
D D

Mirror Drive
RAID 1

• Writes all data to both drives in the mirrored pair

• Mirroring
• No Striping
• 100% Redundancy
• 50% Capacity Loss
• Requires 2 Drives

HDD 1 HDD 2

A A
B B
C C
D D

Mirror Drive

No data was lost because of the mirrored drive!
RAID 10

• Can survive a disk failure in each mirror set
• High I/O rates achieved due to multiple stripe segments
• Good write performance

• Striped array of mirrors
• Minimum 4 Drives
• 50% Capacity Loss

HDD 1 HDD 2 HDD 3 HDD 4

A A B B
C C D D
E E F F
G G H H
RAID 10

• Can survive a disk failure in each mirror set
• High I/O rates achieved due to multiple stripe segments
• Good write performance

• Striped array of mirrors
• Minimum 4 Drives
• 50% Capacity Loss

HDD 1 HDD 2 HDD 3 HDD 4

A A B B
C C D D
E E F F
G G H H

Supports 2 disk failures: 1 failure per mirror set
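A minimal sketch of the RAID 10 mapping (illustrative names, not the controller's actual algorithm): drives are paired into mirror sets, blocks are striped across the sets, and each block is written to both drives of its pair.

```python
# Illustrative RAID 10 layout: stripe across mirror pairs, write each
# block to both drives of its pair.

def raid10_map(logical_block: int, num_drives: int) -> tuple[list[int], int]:
    """Return ([drive indexes holding the block], stripe_row)."""
    pairs = num_drives // 2              # number of mirror sets
    pair = logical_block % pairs         # which mirror set gets this block
    row = logical_block // pairs         # stripe row within that set
    return [2 * pair, 2 * pair + 1], row

# Blocks A..H in the diagram correspond to logical blocks 0..7.
for b, name in enumerate("ABCDEFGH"):
    drives, row = raid10_map(b, 4)
    print(f"block {name} -> HDDs {[d + 1 for d in drives]}, stripe {row}")
```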


Parity

• RAID 5 – redundancy is achieved by the use of parity blocks
• If a single drive in the array fails, data blocks and a parity block from the
  working drives can be combined to reconstruct the missing data
• Exclusive-OR (XOR) is the logical operation used to generate parity
• XOR compares every two bits: if the two bits are the same, an even parity
  bit (0) is generated; if the bits differ, an odd parity bit (1) is created
XOR

• Exclusive OR
• Logical operation that generates a parity bit for every 2 data bits

  Data Byte    Data Byte
  01010011 XOR 10010110 = 11000101  (Parity Byte)
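The same idea in a few lines of Python: a sketch of parity generation and single-failure reconstruction, using the bytes from the example above. The key property is that a ^ b ^ b == a, which is what makes the rebuild work.

```python
# Illustrative XOR parity generation and single-drive reconstruction
# (not controller firmware).

def xor_parity(*blocks: int) -> int:
    """XOR any number of equal-width data blocks into one parity block."""
    p = 0
    for b in blocks:
        p ^= b
    return p

d1, d2 = 0b01010011, 0b10010110
parity = xor_parity(d1, d2)
print(f"parity = {parity:08b}")        # 11000101, as in the slide

# Rebuild: if the drive holding d1 fails, XOR the survivors with parity.
rebuilt = xor_parity(d2, parity)
assert rebuilt == d1
print(f"rebuilt d1 = {rebuilt:08b}")
```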
RAID 5

• Data is striped and includes parity protection
• Parity is also striped for higher performance

• Block Striping
• Distributed Parity
• Single Drive Fault Tolerance
• Better use of capacity than RAID 1

HDD 1 HDD 2 HDD 3 HDD 4

A B C Parity
D E Parity F
Stripe G Parity H I
Parity J K L

Parity is distributed to reduce a single drive bottleneck


RAID 5

• Data is striped and includes parity protection
• Parity is also striped for higher performance

• Block Striping
• Distributed Parity
• Single Drive Fault Tolerance
• Better use of capacity than RAID 1

HDD 1 HDD 2 HDD 3 HDD 4

A B C Parity
D E Parity F
Stripe G Parity H I
Parity J K L

No data is lost because of the distributed parity!
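The diagram above follows one common rotation scheme (often called "left asymmetric"); the sketch below reproduces it for illustration only, since the slides do not state which placement policy ARC actually uses.

```python
# Illustrative RAID 5 parity rotation matching the 4-drive diagram.
N = 4  # drives in the array

def raid5_stripe(data: list[str], stripe_row: int) -> list[str]:
    """Place N-1 data blocks plus parity 'P' in one stripe row."""
    parity_drive = (N - 1 - stripe_row) % N  # rotates HDD4, HDD3, HDD2, HDD1
    it = iter(data)
    return ["P" if d == parity_drive else next(it) for d in range(N)]

blocks = list("ABCDEFGHIJKL")                # 12 data blocks
for s in range(4):
    print(raid5_stripe(blocks[s * (N - 1):(s + 1) * (N - 1)], s))
# -> ['A','B','C','P'], ['D','E','P','F'], ['G','P','H','I'], ['P','J','K','L']
```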


RAID 50

• Can survive a disk failure in each sub-array
• Data and parity striped across RAID 5 arrays

• Striped array of RAID 5 arrays
• Minimum 6 Drives

HDD 1 HDD 2 HDD 3 HDD 4 HDD 5 HDD 6

A B Parity C D Parity
E Parity F G Parity H
Parity I J Parity K L
M N Parity O P Parity
RAID 50

• Can survive a disk failure in each sub-array
• Data and parity striped across RAID 5 arrays

• Striped array of RAID 5 arrays
• Minimum 6 Drives

HDD 1 HDD 2 HDD 3 HDD 4 HDD 5 HDD 6

A B Parity C D Parity
E Parity F G Parity H
Parity I J Parity K L
M N Parity O P Parity

Supports 2 disk failures: 1 failure per RAID 5 sub-array

Advanced Data Protection Suite
Data Protection
Adaptec sets the RAID Feature Standard

Industry Standard RAID Features

• RAID 0, 1, 10, 5, 50, JBOD – Striping, Mirroring, Rotating Parity
• Online Array Optimizations:
  - Online Capacity Expansion – Add a drive or expand an array
  - RAID Level Migration – Optimize for protection, capacity, or performance
  - Stripe Size Configuration – Optimize performance based on data access patterns
• Large Array Size – Seamlessly support >2 TB arrays, up to 512 TB
• Configurable Hot Spares – Dedicated or global
• Hot Swap Disk Support – Auto-rebuild after a disk failure
• Flexible Initialization Schemes – Instant availability, background, or clear
• Multiple arrays on a single set of drives – Stripe multiple arrays across the
  same physical disks

Adaptec Unique Features

• Optimized Disk Utilization – No wasted space with different drive sizes
• Hot Space (RAID-5EE) – No more spindles sitting idle
• Dual Drive Failure Protection (RAID-6) – Double the tolerance to drive
  failures over RAID-5
• Striped Mirror (RAID-1E) – Spread a mirror across an odd number of drives
• Copyback Hot Spare – Auto-rebuild to original setup after drive replacement
Data Protection

Striped Mirror (RAID-1E)

• RAID level-1 Enhanced (RAID-1E) combines mirroring and data striping
• Stripes data and copies of the data across all of the drives in the array
• As with standard RAID level-1, the data is mirrored and the capacity of the
  logical drive is 50% of the total disk capacity (N/2)
• RAID 1E requires a minimum of 3 drives and supports a maximum of 16 drives
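One common way to picture the RAID 1E layout is a data stripe followed by the same blocks shifted by one drive, so every block lives on two different drives even with an odd drive count. The sketch below is illustrative only; it is not Adaptec's specified placement.

```python
# Illustrative RAID 1E layout: each data stripe is followed by a mirror
# stripe holding the same blocks rotated by one drive.

def raid1e_layout(blocks: list[str], num_drives: int) -> list[list[str]]:
    rows = []
    for i in range(0, len(blocks), num_drives):
        stripe = blocks[i:i + num_drives]
        rows.append(stripe)                      # primary copy
        rows.append([stripe[-1]] + stripe[:-1])  # mirror, shifted by one
    return rows

for row in raid1e_layout(list("ABCDEF"), 3):
    print(row)
# -> ['A','B','C'], ['C','A','B'], ['D','E','F'], ['F','D','E']
```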
Data Protection
Hot Space (RAID 5EE)

• Similar to RAID-5 but includes an efficient distributed spare drive
• Extra spindle for better performance and faster rebuild times
• Stripes data and parity across all the drives in the array
• The spare drive is part of the RAID-5EE array – spare space is interleaved
  with the parity blocks
• Spare space is dedicated to the array
• N+2 drives to implement – minimum of 4 drives, up to 16 drives maximum
Data Protection
Dual Drive Failure Protection (RAID-6)

• Similar to RAID 5 except it uses a second set of independently calculated and
  distributed parity information for additional fault tolerance
• This extra fault tolerance ensures data availability in the event of two
  drives failing before a drive replacement can occur (physically or through a
  hot spare rebuild)
• To lose access to the data, three disks would have to fail within the mean
  time to repair (MTTR) interval; the probability of this is thousands of times
  lower than that of a simultaneous failure of both disks in a RAID 1 array
• Requires N+2 drives to implement
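For illustration, a common RAID 6 construction computes P as plain XOR and Q as a weighted sum over the Galois field GF(2^8); the slides do not specify ARC's exact parity math, so treat this as a generic sketch.

```python
# Generic P+Q dual-parity sketch over GF(2^8) (not ARC's documented
# implementation). P is plain XOR; Q weights data block i by g^i with
# generator g = 2, so any two lost blocks yield two solvable equations.

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8), reducing by x^8+x^4+x^3+x^2+1 (0x11d)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
    return p

def gf_pow2(i: int) -> int:
    """Compute g^i for g = 2 in GF(2^8)."""
    r = 1
    for _ in range(i):
        r = gf_mul(r, 2)
    return r

def pq_parity(data: list[int]) -> tuple[int, int]:
    """Return the (P, Q) parity bytes for one byte per data drive."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_pow2(i), d)
    return p, q

p, q = pq_parity([0x53, 0x96, 0x3c])   # 3 data drives, one byte each
print(f"P={p:02x} Q={q:02x}")          # stored on the 2 parity drives
```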
RAID 60

• For use with high availability solutions
• Can survive 2 disk failures in each sub-array
• Data and parity striped across RAID 6 arrays

• Striped array of RAID 6 arrays
• Minimum 8 Drives

HDD 1 HDD 2 HDD 3 HDD 4 HDD 5 HDD 6 HDD 7 HDD 8

Parity A B Parity Parity C D Parity
Parity Parity E F Parity Parity G H
I Parity Parity J K Parity Parity L
M N Parity Parity O P Parity Parity

RAID 60

• For use with high availability solutions
• Can survive 2 disk failures in each sub-array
• Data and parity striped across RAID 6 arrays

• Striped array of RAID 6 arrays
• Minimum 8 Drives

HDD 1 HDD 2 HDD 3 HDD 4 HDD 5 HDD 6 HDD 7 HDD 8

Parity A B Parity Parity C D Parity
Parity Parity E F Parity Parity G H
I Parity Parity J K Parity Parity L
M N Parity Parity O P Parity Parity

Supports 4 disk failures: 2 failures per RAID 6 sub-array
Data Protection

Selecting a RAID level

RAID Level   Available Capacity     Read Perf.   Write Perf.   Built-in Spare   Min Drives   Max Drives
Volume       100%                   =1 drive     =1 drive      No               1            32
RAID 0       100%                   ♦♦♦♦         ♦♦♦♦          No               2            128
RAID 1       N/2                    ♦♦♦          ♦♦♦           No               2            2
RAID 1E      N/2                    ♦♦♦          ♦♦♦           No               3            16
RAID 5       N-1                    ♦♦♦          ♦♦            No               3            16
RAID 5EE     N-2                    ♦♦♦          ♦♦♦           Yes              4            16
RAID 6       N-2                    ♦♦♦          ♦♦            No               4            16
RAID 10      N/2                    ♦♦♦♦         ♦♦♦           No               4            16
RAID 50      N-1 per member array   ♦♦♦          ♦♦♦           No               6            128
RAID 60      N-2 per member array   ♦♦♦          ♦♦            No               8            128
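The capacity column of the table can be expressed as a small helper, assuming equal-size drives (a sketch with made-up names; capacity formulas only, performance is not modeled):

```python
# Usable capacity in units of one drive, per the table above.

def usable_drives(level: str, n: int, sub_arrays: int = 1) -> float:
    """Return usable capacity for n total equal-size drives."""
    per_sub = n // sub_arrays                    # drives per member array
    return {
        "volume":  n,
        "raid0":   n,
        "raid1":   n / 2,
        "raid1e":  n / 2,
        "raid5":   n - 1,
        "raid5ee": n - 2,
        "raid6":   n - 2,
        "raid10":  n / 2,
        "raid50":  (per_sub - 1) * sub_arrays,   # N-1 per member array
        "raid60":  (per_sub - 2) * sub_arrays,   # N-2 per member array
    }[level]

print(usable_drives("raid5", 4))                 # 3 drives' worth
print(usable_drives("raid50", 6, sub_arrays=2))  # 4 drives' worth
```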
Data Protection
Advanced Data Protection Suite

Copyback Hotspare

• When a drive fails, data from the failed drive is rebuilt onto the hotspare
• With Copyback enabled, the data is moved back to its original location once
  the controller detects that the failed drive has been replaced
• After the data has been copied back to the replaced drive, the hotspare
  becomes available again
• Preserves the concept of a fixed hotspare location – particularly useful if a
  SATA hotspare is protecting a SAS RAID array
• Copyback mode is enabled using Adaptec Storage Manager, HRConf, or ARCConf
Advanced Data Protection Suite

[Animation: a RAID 5 array protected by a hotspare – a disk fails, the hotspare
kicks in and becomes a new array member, the failed disk is replaced with a new
one, and the copyback process takes place]