RAID Technology
Metadata - Disk Data Format (DDF)
• DDF - Common RAID Disk Data Format Specification, developed by the SNIA Common RAID Disk Data Format Technical Working Group
• The specification defines a standard data structure describing how data is formatted across the disks in a RAID group
• The Common RAID DDF structure allows a basic level of interoperability, such as data-in-place migration, between different suppliers of RAID technology
• Adaptec's implementation of DDF is AMF - Adaptec Metadata Format
• AMF provides the space to expand metadata for new features and array types like RAID 6
Metadata - Adaptec Metadata Format (AMF)
• AMF uses more space than previous metadata formats and sits at the opposite 'end' of the disk, occupying the largest block addresses instead of the smallest
• Although it uses more space, ARC truncates the available disk space to the highest 100 MB multiple that fits within the physical size of the drive, so there may not be a difference
• AMF metadata resides in the last 96 MB of the disk
  • It starts at LBA (capacity - 96 MB) and ends at LBA (capacity - 1)
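The truncation and metadata placement described above can be sketched as a small calculation. This is an illustrative model only, assuming 512-byte blocks; the function names and the sample capacity are not from the slides:

```python
BLOCK_SIZE = 512          # bytes per LBA (assumed sector size)
MB = 1024 * 1024

def usable_capacity_blocks(physical_blocks: int) -> int:
    """Truncate to the highest 100 MB multiple that fits the drive."""
    blocks_per_100mb = 100 * MB // BLOCK_SIZE
    return (physical_blocks // blocks_per_100mb) * blocks_per_100mb

def amf_region(capacity_blocks: int) -> tuple[int, int]:
    """AMF occupies the last 96 MB: LBA (capacity - 96 MB) .. (capacity - 1)."""
    amf_blocks = 96 * MB // BLOCK_SIZE
    return capacity_blocks - amf_blocks, capacity_blocks - 1

# Example with a ~1 TB drive's raw block count (hypothetical value):
cap = usable_capacity_blocks(1953525168)
start, end = amf_region(cap)
```

The 96 MB metadata region always lies inside the truncated capacity, which is why the truncation often hides the extra space AMF consumes.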
No RAID
• Storage with no RAID logic is often referred to as Volume or JBOD
• Contiguous Data
• No Fault Tolerance
• Minimum 1 Drive

Data is written contiguously, filling each drive in turn (starting at LBA 0):

HDD 1  HDD 2  HDD 3  HDD 4
A      E      I      M
B      F      J      N
C      G      K      O
D      H      L      P

RAID 0
• Striping
• Data is striped across all drives
• No Fault Tolerance
• Minimum 2 Drives

Each row below is one stripe across the drives:

HDD 1  HDD 2  HDD 3  HDD 4
A      B      C      D
E      F      G      H
I      J      K      L
M      N      O      P
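The striping layout above can be expressed as a simple address mapping. A minimal sketch, assuming a chunk size measured in blocks and round-robin chunk placement; the function name is illustrative:

```python
def raid0_map(lba: int, num_drives: int, stripe_blocks: int) -> tuple[int, int]:
    """Map an array LBA to (drive index, LBA on that drive) for RAID 0."""
    stripe_unit = lba // stripe_blocks        # which chunk the block falls in
    drive = stripe_unit % num_drives          # chunks rotate across the drives
    row = stripe_unit // num_drives           # stripe row on each drive
    return drive, row * stripe_blocks + lba % stripe_blocks
```

With `stripe_blocks = 1` and four drives, blocks A, B, C, D (array LBAs 0..3) land on HDD 1..4 in the first row, matching the diagram: `raid0_map(5, 4, 1)` returns `(1, 1)`, i.e. block F sits in row two of HDD 2.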
RAID 1
• Mirroring
• Writes all data to both drives in the mirrored pair
• No Striping
• 100% Redundancy
• 50% Capacity Loss
• Requires 2 Drives

HDD 1  HDD 2 (mirror drive)
A      A
B      B
C      C
D      D

If one drive fails, no data is lost because of the mirrored drive!
RAID 10
• Striped array of mirrors
• Can survive a disk failure in both mirror sets
• High I/O rates achieved due to multiple stripe segments
• Good write performance
• Minimum 4 Drives
• 50% Capacity Loss

Mirror set 1   Mirror set 2
A A            B B
C C            D D
E E            F F
G G            H H
XOR
• Exclusive OR
• Logical operation that generates parity for every 2 data bits
• If the bits are different, then an odd parity bit (1) is created

[Diagram: two data bytes are XORed bit by bit to produce a parity byte]
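The parity rule above maps directly onto the XOR operator: wherever two data bits differ, the parity bit is 1. A minimal sketch (the helper name is illustrative) that XORs any number of data bytes into one parity byte:

```python
def xor_parity(data_bytes) -> int:
    """Generate a parity byte: XOR all data bytes together.

    For two bytes, each parity bit is 1 exactly where the inputs differ
    (the 'odd parity bit' rule from the slide)."""
    parity = 0
    for b in data_bytes:
        parity ^= b
    return parity

# Bits 2 and 3 differ between the two inputs, so those parity bits are 1:
# 0b1100 XOR 0b1010 -> 0b0110
example = xor_parity([0b1100, 0b1010])
```

Because XOR is associative, the same function covers a whole RAID 5 stripe, not just a pair of bytes.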
RAID 5
• Block Striping
• Distributed Parity
• Data is striped and includes parity protection
• Parity is also striped for higher performance
• 100% Redundancy
• Better use of capacity than RAID 1

Each row below is one stripe; the parity block rotates across the drives:

HDD 1    HDD 2    HDD 3    HDD 4
A        B        C        Parity
D        E        Parity   F
G        Parity   H        I
Parity   J        K        L
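The redundancy in the layout above comes from the XOR parity: any one lost block in a stripe can be rebuilt by XORing the surviving blocks with the parity block. A sketch of that rebuild, using illustrative block contents rather than anything from the slides:

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

# One stripe: three data blocks plus their parity.
a, b, c = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([a, b, c])

# Drive holding block B fails; XOR the survivors with parity to rebuild it.
recovered_b = xor_blocks([a, c, parity])
assert recovered_b == b
```

This is why RAID 5 survives exactly one drive failure per array: with two blocks missing from a stripe, a single parity block no longer determines either of them.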
RAID 50
• Striped array of RAID 5 arrays
• Can survive a disk failure in each sub-array
• Data and parity striped across RAID 5 arrays
• Minimum 6 Drives

RAID 5 set 1      RAID 5 set 2
A  B  Parity      C  D  Parity
E  Parity  F      G  Parity  H
Parity  I  J      Parity  K  L
M  N  Parity      O  P  Parity
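The capacity-loss figures quoted across these slides (50% for mirroring, one drive's worth of parity per RAID 5 sub-array) can be summarized in one small model. A sketch only, assuming equal-size drives and, for RAID 50, two RAID 5 sub-arrays; the function name is illustrative:

```python
def usable_drives(level: str, n: int) -> int:
    """Drives' worth of usable capacity for n equal drives (simplified model)."""
    if level == "0":
        return n          # striping only: no redundancy, no capacity loss
    if level in ("1", "10"):
        return n // 2     # mirroring: 50% capacity loss
    if level == "5":
        return n - 1      # one drive's worth of distributed parity
    if level == "50":
        return n - 2      # one parity drive per sub-array, 2 sub-arrays assumed
    raise ValueError(f"unhandled RAID level: {level}")
```

For example, six drives give four drives' worth of data under RAID 50 but only three under RAID 10, which is one way to see the "better use of capacity" claim for parity RAID.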
Data Protection
• Dual Drive Failure Protection (RAID 6): double the tolerance to drive failures over RAID 5
• Hot Space (RAID 5EE)
RAID 60
• Striped array of RAID 6 arrays
• Can survive 2 disk failures in each sub-array
• Data and parity striped across RAID 6 arrays
• For use with high availability solutions
• Minimum 8 Drives
Copyback Hotspare
• When a drive fails, data from the failed drive is rebuilt onto the hotspare during the rebuild
• With Copyback enabled, the data is moved back to its original location after the controller detects that the failed drive has been replaced
• After the data has been copied back to the replaced drive, the hotspare becomes available again
• Preserves the concept of a fixed hotspare location
• Particularly useful if a SATA hotspare is protecting a SAS RAID array
• Copyback mode is enabled using Adaptec Storage Manager, HRConf, or ARCConf
Advanced Data Protection Suite

[Diagram: after a disk failure, the hotspare takes over and becomes a new array member]