2/1/2012
Notices
Copyright 2012 by International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation. The information provided in this document is distributed AS IS without any warranty, either express or implied. IBM EXPRESSLY DISCLAIMS any warranties of merchantability, fitness for a particular purpose OR INFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein. The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 USA
Trademarks
The following trademarks may appear in this paper. AIX, AS/400, DFSMSdss, Enterprise Storage Server, Enterprise Storage Server Specialist, FICON, FlashCopy, HyperSwap, IBM, OS/390, RMF, System/390, S/390, Tivoli, TotalStorage, z/OS, System i, System p and System z are trademarks of International Business Machines Corporation or Tivoli Systems Inc. Other company, product, and service names may be trademarks or registered trademarks of their respective companies.
Abstract
As the capacity of high-end disk storage subsystems grows, a common question facing more and more clients today is: as I mix additional and different workloads on the same DS8800 platform, how can I maintain and manage the same quality of service that I would have if the various workloads were spread across separate disk subsystems? This white paper explores the advanced functions available on the IBM DS8800 disk storage subsystem and presents the case for consolidating data from a variety of platforms while maintaining the quality of service required by the business.
Introduction
The following chart illustrates the dramatic increase in average storage capacity per disk subsystem over the period 2002 to 2009. With the industry average now exceeding 60 TB per disk subsystem, the impact of a disk subsystem failure is more widespread when it occurs. Further, disk subsystems now typically contain data from multiple server environments. How can one manage and maintain the quality of service required by the various business workloads? Quality of service means, for each application, access to the data it needs, when it needs it. High availability of data and performance are the key factors in providing high service levels to the business. Finally, can combining data from multiple platforms actually help reduce Total Cost of Ownership? The DS8800 allows large amounts of data to be stored on smaller form factor drives, yielding industry-leading space and power savings over other disk storage subsystems. Its ease-of-use management capabilities provide further overall TCO savings to clients consolidating multiple workload environments onto the DS8000.
[Chart: average storage capacity per disk subsystem (GB), by year from 2002 through 2009; vertical axis 0 to 70,000 GB]
The DS8000 advanced functions and related capabilities discussed in this paper include:
- Metro Mirror
- Global Mirror
- Metro Global Mirror
- Common management software
- GDPS interoperability with PowerHA solutions and open cluster management
- TPC end-to-end workload management
- Software synergy (e.g., zWLM)
I/O Priority Manager
I/O priorities have long been useful in various operational scenarios. A prime example is when backup programs run on into the start of the OLTP workload: the backup programs can continue to run and finish while the more important OLTP transactions receive higher priority. The optionally priced DS8700/DS8800 I/O Priority Manager feature now provides additional tuning knobs for programs like zWLM, while also expanding the use of I/O priority within the disk storage subsystem.
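The scheduling idea behind I/O prioritization can be illustrated with a simple priority-queue model. This is only an illustrative sketch, not the DS8000 or zWLM implementation; the class name and workload labels are invented for this example.

```python
import heapq
import itertools

class IOPriorityQueue:
    """Toy model of priority-based I/O scheduling: a lower priority
    number means more important, served first; FIFO within a priority."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves FIFO order

    def submit(self, priority, request):
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def next_request(self):
        return heapq.heappop(self._heap)[2]

q = IOPriorityQueue()
q.submit(5, "backup: read vol01 extent 17")   # low-priority batch I/O
q.submit(1, "OLTP: update account 4711")      # high-priority transaction
q.submit(5, "backup: read vol01 extent 18")

# The OLTP request is served first even though it arrived second;
# the backup requests still complete, just behind the OLTP work.
print(q.next_request())  # OLTP: update account 4711
```

The backup-versus-OLTP scenario above plays out exactly this way: backup I/O keeps flowing, but whenever both classes are queued, the higher-priority transaction is dispatched first.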
Easy Tier
Easy Tier automatically manages the quality of service for various workloads by dynamically moving hot extents to faster physical drives across the three tiers of drives (SSD; Fibre Channel or SAS; and SATA or nearline SAS). Hot spots on the physical hard disks are also managed by Easy Tier, which redistributes extents through auto-rebalancing. The IBM POWER6 and POWER6+ engines within the DS8700/DS8800 disk subsystem monitor all I/O to each physical hard drive and identify which extents are hot and which are cold. Cold extents are dynamically moved over time to the lower-cost, slower SATA drives. Easy Tier maintains I/O history patterns, so that data used, say, once a week or once a month during end-of-week or end-of-month processing ends up on a higher tier when history dictates that those extents will be active again. Easy Tier is a free feature on the DS8700/DS8800 and works in the background with little to no external management by storage administrators after it is initially turned on. As a result, especially in large DS8700/DS8800 configurations, data optimization occurs automatically within the disk subsystem. Storage administrators can monitor the performance of the storage subsystem and focus on the future application and capacity growth requirements of the business rather than manually optimizing application quality of service on a daily basis; quality of service exception conditions can be monitored and managed on an exception basis.
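The core placement idea, ranking extents by recent I/O activity and filling the fastest tier first, can be sketched as follows. This is a simplified illustration under assumed inputs, not IBM's Easy Tier algorithm; the function name and tier capacities are invented for the example.

```python
def place_extents(io_counts, tier_capacity):
    """Assign extents to tiers by heat.
    io_counts: {extent_id: recent I/O count}
    tier_capacity: [(tier_name, n_extents), ...], fastest tier first.
    Returns {extent_id: tier_name}."""
    hottest_first = sorted(io_counts, key=io_counts.get, reverse=True)
    placement = {}
    i = 0
    for tier, capacity in tier_capacity:
        # Fill each tier with the hottest extents not yet placed.
        for ext in hottest_first[i:i + capacity]:
            placement[ext] = tier
        i += capacity
    return placement

counts = {"e1": 900, "e2": 5, "e3": 400, "e4": 0, "e5": 50}
tiers = [("SSD", 1), ("SAS", 2), ("nearline", 2)]
print(place_extents(counts, tiers))
# The hottest extent lands on SSD, the coldest on nearline drives.
```

The real feature works continuously and incrementally (and weighs I/O history, not just a single count), but the ranking-and-demotion principle is the same: hot extents migrate up, cold extents migrate down.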
HyperSwap
HyperSwap provides both planned and unplanned dynamic swapping of application I/O to an alternate volume (the Metro Mirror target), either on command (planned) or via a HyperSwap trigger (unplanned). In addition, with DS8700/DS8800 R6.2 LIC, the DS8000 can now also notify hosts that it will be running in a degraded error-recovery mode, which in turn permits the host to perform a HyperSwap to maintain high-availability access to data for critical production workloads. HyperSwap was originally introduced to the System z marketplace by IBM in 2002. For System z, over the last 10 years, four HyperSwap configuration options have been introduced and are now provided:
- z/OS Basic HyperSwap - DS8K Metro Mirror running on the same data center floor across two disk subsystems, with TPC for Replication (TPC-R) Basic Edition (z/OS only) (2008)
- TPC-R Full Function HyperSwap - DS8K Metro Mirror running on the same data center floor or across two local data centers up to 300 km apart (z/OS only) (2008)
- GDPS/PPRC HyperSwap Manager - DS8K Metro Mirror running on the same data center floor or across two local data centers up to 200 km apart (z/OS, z/VM, Linux on System z, z/TPF and z/VSE) (2006)
- GDPS/PPRC Full Function HyperSwap - DS8K Metro Mirror running on the same data center floor or across two local data centers up to 200 km apart; in this case GDPS automation also manages the servers, workload and data, with a coordinated network switch on a site switch (z/OS, z/VM, Linux on System z, z/TPF and z/VSE) (2002)
AIX also now provides both planned and unplanned HyperSwap for non-clustered systems; that HyperSwap configuration is defined and maintained via full-function TPC-R.
FlashCopy
FlashCopy provides the ability to create a logical and/or physical point-in-time copy of a volume or LUN. A single source volume can have up to 12 target volumes. FlashCopy source/target relationships are dynamically established and withdrawn on a volume-by-volume (or LUN-by-LUN) basis.
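One common way to realize a logical point-in-time copy is copy-on-write: the target is available immediately, and old data is physically preserved only when the source is subsequently overwritten. The sketch below illustrates that general technique; it is a toy model with invented names, not the DS8000's internal implementation.

```python
class FlashCopyTarget:
    """Toy copy-on-write point-in-time copy. Reading the target returns
    the source contents as of establish time; only tracks changed since
    the establish are physically copied."""
    def __init__(self, source):
        self.source = source          # dict: track -> data
        self.copied = {}              # old contents preserved on overwrite

    def write_source(self, track, data):
        # Preserve the pre-update contents for the target, once per track.
        if track not in self.copied:
            self.copied[track] = self.source.get(track)
        self.source[track] = data

    def read_target(self, track):
        if track in self.copied:
            return self.copied[track]         # track changed since PiT
        return self.source.get(track)         # unchanged: read through

vol = {"t0": "A", "t1": "B"}
snap = FlashCopyTarget(vol)       # establish is instantaneous
snap.write_source("t0", "A'")     # source updated after the PiT copy
print(snap.read_target("t0"))     # A  (point-in-time contents)
print(vol["t0"])                  # A' (current source contents)
```

This is why the relationship can be established in an instant: no data moves at establish time, and the copy fills in lazily as the source changes.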
Consistent FlashCopy
Consistent FlashCopy provides the ability to create a common point-in-time (PiT) FlashCopy across multiple volumes/LUNs on the same DS8K platform or across multiple DS8K platforms. While the various volume/LUN source/target relationships are being logically created, all I/O is held queued in the host via Extended Long Busy (ECKD volumes) or Queue Full (distributed systems) on the host-to-DS8000 interfaces. After all volume/LUN source/target relationships are logically established, all host I/O is released. Multiple DS8Ks communicate among themselves via a Fibre Channel link. Consistent FlashCopy supports PiT-consistent FlashCopies of any combination of System z and/or distributed data volumes/LUNs. Note that Consistent FlashCopy provides only an I/O-consistent copy of the data: host database and file system buffers are not flushed (written to disk) by the DS8K Consistent FlashCopy function. Therefore, if one is attempting a volume/LUN backup using FlashCopy on distributed systems, only data that has already been written out to the DS8K can be FlashCopied. Various backup software can provide the additional function of flushing database and file system host buffers before invoking the DS8K FlashCopy function. IBM Tivoli Storage FlashCopy Manager helps deliver high levels of protection for mission-critical IBM DB2, SAP, Oracle, Microsoft Exchange, and Microsoft SQL Server applications via integrated, application-aware snapshot backup and restore capabilities. This is achieved through the exploitation of advanced IBM storage hardware snapshot technology to create a high-performance, low-impact application data protection solution.
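The hold/establish/release sequence described above can be sketched as a three-phase procedure. The function and step names below are hypothetical, for illustration only; the real sequence is carried out by the DS8000s themselves, with I/O held via Extended Long Busy or Queue Full rather than by host software.

```python
def consistent_flashcopy(volumes):
    """Sketch of consistency-group formation: hold I/O on every volume,
    logically establish every FlashCopy relationship, then release I/O,
    so that all targets share a single point-in-time."""
    log = []
    for v in volumes:                 # phase 1: hold host I/O on all volumes
        log.append(f"hold {v}")       # ELB (ECKD) or Queue Full (open systems)
    for v in volumes:                 # phase 2: logical establish (near-instant)
        log.append(f"establish {v}->{v}_tgt")
    for v in volumes:                 # phase 3: release all held I/O
        log.append(f"release {v}")
    return log

steps = consistent_flashcopy(["vol01", "lun07"])
# Every hold precedes every establish, which precedes every release:
# no write can land on one volume "between" the copies, so the set of
# targets is I/O-consistent.
```

The ordering is the whole point: because no held write completes anywhere until every relationship is logically established, no dependent write can appear on one target without its predecessor on another.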
Metro Mirror
Metro Mirror operates on a volume/LUN to volume/LUN basis. Multiple volume/LUN pairs can be combined under management software such as IBM's TPC-R replication manager, which exploits special commands created to enable software to create and manage consistency groups of volumes across one or more DS8000s, spanning System z volumes and/or distributed-systems LUNs. GDPS can also be used as the Metro Mirror management software. Both GDPS and TPC-R also manage the HyperSwap function when and if desired. GDPS uses a unique interface into the DS8000 to manage distributed-systems LUNs: commands can be sent on an ECKD device address to the DS8000 with an indicator that the command is for distributed-systems LUNs. This interface has been available since day one of distributed-systems data replication support.
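What distinguishes Metro Mirror is that replication is synchronous: the host write is acknowledged only after the secondary also has the data. The sketch below models that ordering; the class and method names are illustrative inventions, not a DS8000 API.

```python
class SyncMirroredVolume:
    """Toy model of a synchronous (Metro Mirror style) volume pair."""
    def __init__(self):
        self.primary = {}
        self.secondary = {}

    def write(self, track, data):
        self.primary[track] = data      # 1. write lands on the primary
        self.secondary[track] = data    # 2. replicated synchronously
        return "complete"               # 3. only now is the host acked

vol = SyncMirroredVolume()
status = vol.write("t0", "payroll-record")
# After any acknowledged write, primary and secondary are identical,
# which is what makes an immediate HyperSwap to the secondary safe.
```

The cost of this guarantee is that each write incurs the round-trip to the secondary, which is why Metro Mirror is limited to metropolitan distances.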
Global Mirror
Global Mirror is a formal session that is defined by host software but actually runs outboard, across one or more primary DS8000s to a set of target DS8000 volumes/LUNs. Volume/LUN pairs are defined to the DS8000 and then added to the Global Mirror session. TPC-R and GDPS automation are both examples of software that can be used to set up the volumes/LUNs contained in the Global Mirror session.
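In contrast to Metro Mirror, Global Mirror replication is asynchronous: the host write is acknowledged immediately from the primary, and updates are drained to the secondary as periodic consistency groups. The sketch below models that behavior; the names are illustrative inventions, not a DS8000 API, and the real function forms consistency groups continuously inside the subsystems.

```python
class AsyncMirroredVolume:
    """Toy model of asynchronous (Global Mirror style) replication."""
    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self.pending = {}               # updates not yet at the secondary

    def write(self, track, data):
        self.primary[track] = data
        self.pending[track] = data      # host is acked before replication
        return "complete"

    def form_consistency_group(self):
        # Periodically drain all pending updates as one consistent unit,
        # so the secondary always reflects a single point in time.
        self.secondary.update(self.pending)
        self.pending.clear()

vol = AsyncMirroredVolume()
vol.write("t0", "v1")
# The secondary lags (RPO > 0) until the next consistency group forms.
vol.form_consistency_group()
```

Because the host never waits on the long-distance link, this style of replication works at unlimited distances, at the cost of a small recovery point (the not-yet-drained updates).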
TPC
The Tivoli Storage Productivity Center (TPC) family of products provides management software to monitor, set up, and manage the IBM storage environment, including the DS8000, SVC, XIV, DS4000, and DS5000 product sets.
Performance
[Chart: sequential throughput (GB/s), DS8700 vs. DS8800]

Power consumption of fully configured systems (DS8700: 1024 drives in 5 frames; DS8800: 1056 drives in 3 frames, approximately 40% less power):

Frame    DS8700 kW    DS8800 kW
A        6.8          6.8
B        7.1          5.4
C        6.1          6.5
D        6.1          -
E        3.0          -
Total    29.1         18.7

Single-frame comparison: a DS8700 frame holds 128 drives, while a DS8800 frame holds 240 drives, with the same power consumption and the same weight: 87% more disks per frame.
Disks: each 2U storage enclosure holds either 24 2.5-inch disk slots or 12 3.5-inch disk slots (3.5-inch disks are new in R6.2). Capacities below assume 900 GB 2.5-inch drives and 3 TB 3.5-inch drives (the 3 TB 3.5-inch drive is also new in R6.2).

        --------------- Per Rack ---------------    ------------------------- Cumulative -------------------------
Rack    Enclosures   2.5" (max)   3.5" (max)        Enclosures   2.5" (max)   3.5" (max)   900GB 2.5" cap (TB)   3TB 3.5" cap (TB)
1       10           240          120               10           240          120          216                   360
2       14           336          168               24           576          288          518                   864
3       20           480          240               44           1056         528          950                   1584
4       20           480          240               64           1536         768          1382                  2304
Conclusions
The DS8000 family is designed to support the most demanding business applications with its exceptional all-around performance and data throughput. This, combined with its world-class business resiliency and encryption features, provides a unique combination of high availability, performance, and security. Its tremendous scalability, broad server support, and virtualization capabilities can help simplify the storage environment by consolidating multiple storage systems onto a single DS8000.
Author
Bob Kern - IBM Advanced Technical Support Americas (bobkern@us.ibm.com). Mr. Kern is an IBM Master Inventor and Executive IT Architect. He has 37 years of experience in large-system design and development and holds numerous patents on storage-related topics. For the last 28 years, Bob has specialized in disk device support and is a recognized expert in continuous availability, disaster recovery, and real-time disk mirroring. He created the DFSMS/MVS subcomponents for the Asynchronous Operations Manager and the System Data Mover. Bob was named a Master Inventor by the IBM Systems & Technology Group in 2003 and is one of the inventors of the Concurrent Copy, PPRC, XRC, GDPS, and zCDP solutions. He continues to focus in the disk storage architecture area on hardware/software solutions for continuous availability and data replication. He is a member of the GDPS core architecture team and the GDPS Customer Design Council, with a focus on storage-related topics.