
Page 1

2/1/2012

IBM DS8800 Data Consolidation Features



IBM Advanced Technical Skills, Americas February 1, 2012

Copyright IBM Corporation, 2012


Notices
Copyright 2012 by International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation. The information provided in this document is distributed AS IS without any warranty, either express or implied. IBM EXPRESSLY DISCLAIMS any warranties of merchantability, fitness for a particular purpose OR INFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein. The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 USA

Trademarks
The following trademarks may appear in this Paper. AIX, AS/400, DFSMSdss, Enterprise Storage Server, Enterprise Storage Server Specialist, FICON, FlashCopy, HyperSwap, IBM, OS/390, RMF, System/390, S/390, Tivoli, TotalStorage, z/OS, System I, System p and System z are trademarks of International Business Machines Corporation or Tivoli Systems Inc. Other company, product, and service names may be trademarks or registered trademarks of their respective companies.



Abstract
As the capacity of high-end disk storage subsystems grows, a common question facing more and more clients today is: as I mix additional and different workloads on the same DS8800 platform, how can I maintain and manage the same quality of service as I would if the various workloads were spread across separate disk subsystems? This white paper explores the advanced functions available on the IBM DS8800 Disk Storage Subsystem and presents the case for how one can consolidate data from a variety of platforms while maintaining the quality of service required by the business.

Introduction

The following chart illustrates the dramatic increase in average storage capacity per disk subsystem over the period 2002 to 2009. With the industry average now exceeding 60 TB per disk subsystem, the impact of a disk subsystem failure is more widespread when it occurs. Further, disk subsystems now typically contain data from multiple server environments. How can one manage and maintain the quality of service required by the various business workloads? Quality of service means, for each application, access to the data it needs, when it needs it. High availability of data and performance are key factors in providing high quality of service to the business. Finally, can the motivation for combining data from multiple platforms actually help one reduce Total Cost of Ownership? The DS8800 provides for large amounts of data to be stored on smaller form factor drives, yielding industry-leading space and power savings over other disk storage subsystems. Its ease-of-use management capabilities provide further overall TCO savings to clients consolidating multiple workload environments onto the DS8000.



Business impact of subsystem failure

[Chart: average GB per disk subsystem, 2002-2009; y-axis 0 to 70,000 GB]

Maintaining Disk Subsystem Quality of Service (QoS) for All Data


The majority of the IBM DS8800 disk subsystems deployed in the marketplace today contain data from multiple server types. Customers have learned to reduce their TCO by consolidating data onto a smaller set of disk storage subsystems (saving space and power), while maintaining or actually increasing the QoS delivered to their business clients. The DS8800 Disk Subsystem provides a number of features and functions that enable an Enterprise to maintain customized QoS for clients. These features and functions include:

- I/O Priority Manager
- Easy Tier
- HyperSwap
- Data Replication technology
- Data Replication Resource Management
- Consistency Groups across System z & Distributed Server Data
- FlashCopy


- Metro Mirror
- Global Mirror
- Metro Global Mirror
- Common Management Software - GDPS inter-operability with PowerHA Solutions and Open Cluster Management
- TPC
- End-to-End Workload Management Software Synergy (e.g., zWLM)


I/O Priority Manager


The DS8000 I/O Priority Manager provides clients the ability to assign an I/O priority on a LUN/volume basis, so that workload I/O can be prioritized based on the quality of service required for that specific workload and the volumes/LUNs where its data is stored. I/O Priority Manager constantly monitors system resources to help applications meet their performance targets automatically, without operator intervention. The DS8000 storage hardware resources monitored for possible contention are the RAID ranks and device adapters.

For distributed servers, a server typically manages a single workload. The DS8000 I/O Priority Manager is designed to dynamically throttle a lower-priority workload only when actual resource constraint conditions occur such that the targeted quality of service for a higher-priority workload would otherwise be missed. In workload contention situations, I/O Priority Manager delays I/Os with lower-priority performance policies in order to help I/Os with higher-priority performance policies meet their QoS targets.

For z/OS, where multiple workloads run on the same platform, zWLM manages I/O priorities based on the WLM Service Class assigned to each workload. This translates to an I/O priority being assigned to each I/O sent to the disk subsystem; z/OS IOS actually uses that priority to prioritize the I/O over the FICON channel. The DS8000 manages all workload I/Os from all servers based on the I/O priorities provided on the I/O itself (z/OS and AIX DB2), as well as on priorities given to I/O Priority Manager for a LUN/volume or a default priority value specified within the DS8000. zWLM has the ability to set I/O priority based on the various workloads that are running on the Sysplex.
In addition, zWLM can also tell the DS8000 to slow down workloads that are over-achieving their quality of service requirements, thereby giving higher-priority workloads additional storage subsystem resources to meet theirs.
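The throttle-only-under-contention behaviour described above can be illustrated with a toy queueing model. This is a sketch in Python; the numeric priority scale, the dispatch interval, and the class name are illustrative assumptions, not the DS8000 microcode:

```python
import heapq
from itertools import count

class PriorityManagerModel:
    """Toy model of I/O Priority Manager behaviour: I/Os flow freely
    until a RAID rank saturates; under contention, lower-priority I/Os
    wait so that higher-priority I/Os meet their QoS targets."""

    def __init__(self, rank_capacity_per_interval):
        self.capacity = rank_capacity_per_interval
        self.pending = []            # min-heap of (priority, seq, volume)
        self.seq = count()           # FIFO tie-breaker within a priority

    def submit(self, volume, priority):
        """priority: lower number = more important (e.g. 1=high, 3=low)."""
        heapq.heappush(self.pending, (priority, next(self.seq), volume))

    def dispatch(self):
        """Serve up to the rank's capacity this interval; whatever is
        left over is necessarily the lowest-priority work, and it is
        delayed to a later interval."""
        served = []
        while self.pending and len(served) < self.capacity:
            priority, _, volume = heapq.heappop(self.pending)
            served.append((volume, priority))
        return served
```

With a rank that can serve four I/Os per interval, three high-priority OLTP I/Os and three low-priority backup I/Os submitted together result in all three OLTP I/Os plus one backup I/O being served, with the remaining backup I/Os delayed.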



I/O priorities have been very useful in the past for various operational scenarios. A prime example is when backup programs run long, into the start of the OLTP workload. Backup programs can continue to run and finish, while the important OLTP transactions receive higher priority. The optionally priced DS8700/DS8800 I/O Priority Manager feature now provides additional tuning knobs for programs like zWLM, while also expanding the use of I/O priority within the disk storage subsystem.

Easy Tier
Easy Tier automatically manages the quality of service for various workloads by dynamically moving hot extents to faster drives across the three tiers of drives (SSD; Fibre Channel or SAS drives; and SATA or nearline SAS drives). Hot spots on the physical hard disks are also managed by Easy Tier, which moves extents through auto-rebalancing.

The IBM Power 6 and Power 6+ engines within the DS8700/DS8800 disk subsystem are used to monitor all I/O to each physical hard drive and identify which extents are hot and which are cold. Extents that are cold dynamically get moved over time to the lower-cost, slower SATA drives. I/O history patterns are maintained by Easy Tier, so that data used, say, once a week or once a month during end-of-week or end-of-month processing ends up on a higher tier of drives when history dictates that those extents will become active again.

Easy Tier is a no-charge feature on the DS8700/DS8800 and works in the background with little to no external management by storage administrators after it is initially turned on. As a result, especially in large DS8700/DS8800 configurations, data optimization occurs within the disk subsystem automatically. Storage administrators can monitor the performance of the storage subsystem and focus on future applications and the capacity growth requirements of the business, rather than manually optimizing application quality of service on a daily basis. Application quality of service exception conditions can be monitored and managed on an exception basis.
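The monitor-then-migrate cycle can be sketched as follows. This is an illustrative model only: the tier names, per-tier extent capacity, and ranking heuristic are assumptions for the example, not IBM's actual placement algorithm:

```python
from collections import Counter

TIERS = ["nearline", "enterprise", "ssd"]     # slowest -> fastest

class EasyTierModel:
    """Toy sketch of Easy Tier placement: count accesses per extent
    over a monitoring window, then place the hottest extents on the
    fastest tier, subject to each tier's extent capacity."""

    def __init__(self, extents_per_tier):
        self.heat = Counter()                 # extent id -> access count
        self.extents_per_tier = extents_per_tier

    def record_io(self, extent):
        self.heat[extent] += 1

    def plan_placement(self):
        """Return {extent: tier}: hottest extents on SSD, coldest
        on nearline."""
        ranked = [e for e, _ in self.heat.most_common()]
        placement, start = {}, 0
        for tier in reversed(TIERS):          # fill the fastest tier first
            for extent in ranked[start:start + self.extents_per_tier]:
                placement[extent] = tier
            start += self.extents_per_tier
        return placement
```

A heavily accessed database index extent lands on SSD, a moderately accessed table extent on the middle tier, and a rarely touched archive extent on nearline, mirroring the hot/cold behaviour described above.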

High Availability - HyperSwap Technology


High availability for disk storage subsystems is a major focus item for the DS8000 disk storage platform. As disk subsystem capacities continue to increase, it is important to minimize or eliminate the impact of a disk subsystem failure, and even the performance impact of various maintenance actions on the disk subsystem or the IT environment. For System z and AIX, IBM provides a function called HyperSwap that can help mask disk subsystem failures from production workloads. On various distributed systems, software mirroring of LUNs can provide a similar HyperSwap-like high availability option.



HyperSwap provides both a planned and an unplanned dynamic swap of application I/O to an alternate volume (the Metro Mirror target volume), either on command (planned) or via a HyperSwap trigger (unplanned). In addition, with DS8700/DS8800 R6.2 LIC, the DS8000 can now also notify hosts that it will be running in a degraded error-recovery mode, which in turn permits the host to perform a HyperSwap to maintain high-availability access to data for critical production workloads.

HyperSwap was originally introduced to the System z marketplace by IBM in 2002. For System z, over the last 10 years, four HyperSwap configuration options have been introduced and are now provided:

- z/OS Basic HyperSwap - DS8K Metro Mirror running on the same data center floor across two disk subsystems, with TPC for Replication (TPC-R) Basic Edition. (z/OS only) (2008)

- TPC-R Full Function HyperSwap - DS8K Metro Mirror running on the same data center floor or across two local data centers up to 300 km. (z/OS only) (2008)

- GDPS/PPRC HyperSwap Manager - DS8K Metro Mirror running on the same data center floor or across two local data centers up to 200 km. (z/OS, zVM, zLinux, zTPF and zVSE) (2006)

- GDPS/PPRC Full Function HyperSwap - DS8K Metro Mirror running on the same data center floor or across two local data centers up to 200 km. In this case GDPS automation also manages the servers, workload, and data, with a coordinated network switch on a site switch. (z/OS, zVM, zLinux, zTPF and zVSE) (2002)

AIX also now provides both planned and unplanned HyperSwap for non-clustered systems. The HyperSwap configuration is defined and maintained via TPC-R Full Function.

Data Replication Technology


The IBM DS8K storage subsystems support a variety of storage-based data replication functions. FlashCopy, Metro Mirror, Global Mirror, Metro Global Mirror and Global Copy are all supported on the DS8K platform on a volume-by-volume basis. Multiple consistency groups are therefore possible for each function, providing the ability to manage consistency across volumes/LUNs on a single server, on multiple servers, or on a subset of the volumes attached to a server.



DS8700/DS8800 Data Replication Resource Groups


Copy Services scope management is the ability to specify policy-based limitations on Copy Services requests. With the combination of policy-based limitations and the DS8000's inherent volume-addressing limitations, it is now possible to control which volumes can be in a Copy Services relationship, which network users or host servers (or LPARs) can issue Copy Services requests on which resources, and other Copy Services operations. This functionality is implemented through a logical construct called a Resource Group.

The Copy Services scope management capabilities, using Resource Groups, allow you to separate volumes in Copy Services relationships and protect them from each other. This can facilitate multi-tenancy support by assigning specific resources to specific tenants, limiting Copy Services relationships so that they exist only between resources within each tenant's scope of resources. When managing a single-tenant installation, the partitioning capability of Resource Groups can be used to isolate various subsets of the environment as if they were separate tenants, for example, to separate mainframes from open servers, Windows from Unix, or accounting department applications from those of marketing.
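The scoping rule (a Copy Services relationship may be formed only inside one tenant's resource group, by a user authorized for that group) can be sketched as follows. All class and method names here are illustrative stand-ins, not the DS8000 interfaces:

```python
class ResourceGroupPolicy:
    """Toy sketch of Resource Group scoping: a copy-services request
    is allowed only when the source and target volumes sit in the same
    resource group and the requesting user is authorized for it."""

    def __init__(self):
        self.volume_group = {}        # volume id -> resource group
        self.user_groups = {}         # user id   -> set of groups

    def assign_volume(self, volume, group):
        self.volume_group[volume] = group

    def grant_user(self, user, group):
        self.user_groups.setdefault(user, set()).add(group)

    def allow_copy(self, user, source, target):
        src = self.volume_group.get(source)
        tgt = self.volume_group.get(target)
        # Both volumes must belong to the same group, and the user
        # must hold authority over that group.
        return (src is not None and src == tgt
                and src in self.user_groups.get(user, set()))
```

A tenant-A administrator can establish a pair between two tenant-A volumes, but a request that crosses into tenant B's volumes, or targets a group the user is not authorized for, is rejected, which is the multi-tenancy isolation described above.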

Data Replication Link Utilization


The DS8000 permits data replication links to be dynamically assigned between LSS:LSS pairs (LSS: Logical Storage Subsystem), and links may be shared across multiple Metro Mirror (MM), Global Mirror (GM), Metro Global Mirror (MGM) and Global Copy (GC) sessions, providing optimal link utilization. Depending on the availability and performance requirements of a specific application, links may therefore be dedicated or shared.

For Metro Mirror, Global Mirror and Global Copy, the DS8000 technology provides a function called Pre-Deposit Write. This function reduces the standard Fibre Channel Protocol, which uses two protocol exchanges per write, to a single protocol exchange. In addition, since the DS8700/DS8800 have a great deal of internal processing power in their Power 6 and Power 6+ engines, the primary DS8K can keep the links full with multiple data transfers: if the target disk subsystem's host adapter interface is busy, the target DS8K can offload the work to the Power 6/6+ engine and still accept the data transfer. The combination of optimized link utilization and Pre-Deposit Write provides optimized data transfer for both synchronous and asynchronous data replication of single or multiple data replication workload/server sessions on the same or multiple DS8700/DS8800s.
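The benefit of Pre-Deposit Write is easy to quantify with back-of-the-envelope arithmetic: each protocol exchange costs one round trip over the fibre, so collapsing two exchanges into one removes a full round trip per write. The constants below (roughly 10 microseconds of round-trip delay per km of fibre, and 200 microseconds of fixed service time at the secondary) are illustrative assumptions, not DS8000 measurements:

```python
def mirrored_write_latency_us(distance_km, protocol_exchanges,
                              service_time_us=200.0,
                              round_trip_us_per_km=10.0):
    """Approximate latency of one mirrored write: a fixed service time
    at the secondary, plus one light-speed round trip over the fibre
    per protocol exchange (light in fibre travels ~5 us/km each way,
    so ~10 us/km round trip)."""
    return (service_time_us
            + protocol_exchanges * distance_km * round_trip_us_per_km)

# At 100 km, halving the exchanges removes about 1 ms from every write:
standard    = mirrored_write_latency_us(100, protocol_exchanges=2)
pre_deposit = mirrored_write_latency_us(100, protocol_exchanges=1)
```

Under these assumptions, the standard two-exchange write costs 2,200 microseconds at 100 km while the single-exchange Pre-Deposit Write costs 1,200, and the saving grows linearly with distance.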



Consistency Groups across System z & Distributed Server Data


The IBM Enterprise Storage Server (ESS) in 2000 introduced the ability to form consistency groups across System z and Distributed Systems volumes/LUNs. This ability was then extended to the entire DS8000 family of disk subsystems. The volumes/LUNs can all be consolidated on a single DS8K disk subsystem, or spread across multiple ESS and DS8K disk subsystems of the same or different models. This provides flexibility in configuring volumes as well as data replication sessions. Data can also typically be migrated from one disk subsystem to another with minimal disruption of service.

Consistency groups are an important function typically used by customers to obtain a common point of consistency for backups, as well as for both planned and unplanned site switch scenarios. More and more customers have applications that span multiple server types, and each part of the application running on the various server types may store data. A typical application scenario is for a transaction to enter the Enterprise via, say, a Windows (VMware) system that does some front-end processing and stores some data, then passes the transaction on to, say, a middle-tier AIX system, which may do additional processing that includes storing some data. The transaction is then passed to, say, z/OS, where more transaction processing occurs and more data is stored. For application backups as well as site switch scenarios, it is important that a common I/O-consistent point in time be managed across all data (volumes/LUNs) for the application running across the three server types.

IBM DS8K FlashCopy, Metro Mirror, Global Mirror, Metro Global Mirror and various Global Copy scenarios support cross-volume/LUN and cross-DS8K consistency groups. This enables a quick recovery from a backup; in the event of a site switch due to a disaster or some other unplanned event, all volumes/LUNs will be recovered to the same I/O-consistent point in time.
Having all data on all related application platforms I/O-consistent to a single point in time drastically simplifies recovery. The application pieces on each of the server platforms need only be RESTARTED from that common I/O-consistent point in time after the server is re-IPLed. Consolidating data from all three server types onto the DS8Ks enables cross-server-platform consistency groups and, as a result, simplifies the recovery actions of the end-to-end application(s).

FlashCopy
FlashCopy provides the ability to form a logical and/or physical point-in-time copy of a volume/LUN. A single source volume can have up to 12 target volumes. The FlashCopy source/target relationship is dynamically established and withdrawn on a volume/LUN by volume/LUN basis.



Consistent FlashCopy provides the ability to create a common point-in-time (PiT) FlashCopy across multiple volumes/LUNs on the same or multiple DS8K platforms. While the various volume/LUN source/target relationships are being logically created, all I/O is held queued in the host via the Extended Long Busy (ECKD volumes) or Queue Full (Distributed Systems) host-to-DS8000 interfaces. After all source/target relationships are logically established, all host I/O is released. Multiple DS8Ks communicate among themselves via a Fibre Channel link. Consistent FlashCopy supports PiT-consistent FlashCopies of any combination of System z and/or distributed data volumes/LUNs.

Note that Consistent FlashCopy provides only an I/O-consistent copy of the data: host database/file system buffers are not flushed (written to disk) by the DS8K Consistent FlashCopy function. Therefore, if one is attempting a volume/LUN backup using FlashCopy on distributed systems, only data that has already been written out to the DS8K can be FlashCopied. Various backup software can provide the additional function of flushing database and file system host buffers before it invokes the DS8K FlashCopy function. IBM Tivoli Storage FlashCopy Manager helps deliver high levels of protection for mission-critical IBM DB2, SAP, Oracle, Microsoft Exchange, and Microsoft SQL Server applications via integrated, application-aware snapshot backup and restore capabilities. This is achieved through the exploitation of advanced IBM storage hardware snapshot technology to create a high-performance, low-impact application data protection solution.
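The hold-establish-release sequence can be sketched as follows. The classes and method names are hypothetical stand-ins for illustration, not a DS8K API:

```python
class SubsystemStub:
    """Minimal stand-in for one DS8K box (hypothetical helper)."""
    def __init__(self, name):
        self.name, self.writes_held, self.flashes = name, False, []
    def hold_writes(self):        # Extended Long Busy / Queue Full
        self.writes_held = True
    def release_writes(self):
        self.writes_held = False
    def establish_flash(self, source, target):
        assert self.writes_held, "freeze must precede establish"
        self.flashes.append((source, target))

def consistent_flashcopy(pairs):
    """pairs: list of (subsystem, source_vol, target_vol).  Hold host
    writes on every box, logically establish every relationship, then
    release I/O: no write can land between the freeze and the
    establish, so every copy shares one point in time."""
    boxes = {box for box, _, _ in pairs}
    frozen = []
    try:
        for box in boxes:                      # 1. hold host writes
            box.hold_writes()
            frozen.append(box)
        for box, source, target in pairs:      # 2. logical establish (fast)
            box.establish_flash(source, target)
    finally:
        for box in frozen:                     # 3. release I/O, even on error
            box.release_writes()
```

Because the release happens in a finally block, host I/O resumes even if an establish fails partway, mirroring the requirement that queued I/O never stays held indefinitely.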

Metro Mirror
Metro Mirror operates on a volume/LUN to volume/LUN basis. Multiple volume/LUN pairs can be combined under management software like IBM's TPC-R replication manager, which exploits special commands created to enable software to create and manage consistency groups of volumes across the same or multiple DS8000s, covering System z volumes and/or distributed data LUNs. GDPS can also be used as the Metro Mirror management software. GDPS and TPC-R both also manage the HyperSwap function when and if desired. GDPS uses a unique interface into the DS8000 to manage distributed system LUNs: commands can be sent on an ECKD device address to the DS8000 with an indicator that the command is for distributed system LUNs. This interface has been available since day one of distributed systems data replication support.



Global Mirror
Global Mirror is a formal session that is defined by host software but actually runs outboard, across one or more primary DS8000s to a set of target DS8000 volumes/LUNs. Volume/LUN pairs are defined to the DS8000 and then added to the Global Mirror session. TPC-R and GDPS automation are both examples of software that can be used to set up the volumes/LUNs contained in the Global Mirror session.

Metro Global Mirror


Metro Global Mirror is likewise a formal session defined by host software that runs outboard, combining Metro Mirror and Global Mirror in a cascaded configuration: data is synchronously mirrored with Metro Mirror to an intermediate site and then asynchronously replicated with Global Mirror to a remote site. Volume/LUN pairs are defined to the DS8000 and then added to the Metro Global Mirror session. TPC-R and GDPS automation are both examples of software that can be used to set up the volumes/LUNs contained in the Metro Global Mirror session.

Common Management Software GDPS + PowerHA Solutions + Open Cluster Management


The GDPS family of products can inter-operate with open clusters managed by Tivoli Application Manager, Veritas Cluster Manager, and the IBM PowerHA for System p cluster system solutions to provide an end-to-end, fully automated server, workload and data site switch, with a coordinated network switch, for multi-site management. The GDPS/PPRC, GDPS/GM and GDPS/MGM solutions all work in such a configuration. GDPS can manage the data replication for System z and for distributed systems on the DS8000, or the two environments' data replication functions can be managed separately. Site failover/fallback is driven by business policy that becomes input to the customized automation: the policy tells the automation code, depending on what system/data fails, which related systems/data need to fail over on a site switch.

TPC
The Tivoli Storage Productivity Center (TPC) family of products provides management software to monitor, set up and manage the IBM storage environment, including the DS8000, SVC, XIV and the DS4K/DS5K product sets.



zWLM & eWLM


zWLM synergy with the IBM DS8700/DS8800 R6.2 LIC is another critical point of synergy between z/OS and the DS8000. As mentioned previously, zWLM manages the workload end to end: across the System z hardware, z/OS, the database middleware and the DS8000. Workloads are now managed based on zWLM Service Class to meet the business-defined quality of service levels. The DS8000 processes mixed workloads from a number of different server types; I/O Priority Manager and Easy Tier become valuable tools that dynamically interface with host software like zWLM and eWLM to provide the desired quality of service for all production workloads. eWLM (Enterprise Workload Manager) is an IBM Tivoli product within the IBM Virtualization Engine suite of products, focused on providing enterprise-wide workload management capabilities.

Data Consolidation to the DS8700/DS8800 Savings


The DS8800 provides large storage capacities with minimal floor space and power consumption. Mixing a number of different types of workloads on the DS8800 has become the norm for many businesses around the world. Hardware and host software capabilities have been provided to dynamically manage the data on the DS8800 to fully optimize the available storage capacity, while maintaining the quality of service required by the production workloads. Simplified management, consolidation of storage frames, and floor space and power savings, without sacrificing performance or availability of data, make the DS8800 an ideal platform for data consolidation.

DS8800 Space & Power


The DS8800 is the ideal platform for data consolidation. With its smaller, high-capacity 2.5-inch drives, the DS8800 can pack a large amount of disk storage capacity into a small amount of floor space. In addition, the DS8800 contains two IBM Power 6+ engines within its frames, so DS8800 performance continues to be industry-leading. A smaller footprint, high performance, and savings on physical power are all illustrated in the following two charts. The third chart illustrates the maximum configuration available on the DS8800 with the R6.2 LIC.


Performance


[Charts: DS8800 performance - sequential throughput in GB/s and random (70/30/50 database) throughput in thousands of IO/s]

DS8800 increases density and capacity while saving power:

- DS8700: 1,024 drives in 5 frames (A-E), 29.1 kW total. DS8800: 1,056 drives in 3 frames, 18.7 kW total - roughly 40% less power, about 33% less weight, and about 40% less floor space.
- One-frame comparison: a DS8700 frame holds 128 drives; a DS8800 frame holds 240 drives - 87% more disks at the same power consumption and weight.



DS8800 Maximum Configuration


The fourth rack is new in R6.2. Each 2U storage enclosure holds either 24 2.5-inch disk slots or 12 3.5-inch disk slots. The capacities below assume 900 GB 2.5-inch drives and 3 TB 3.5-inch drives, both new in R6.2.

Per rack:
- Rack 1: 10 storage enclosures; up to 240 2.5-inch or 120 3.5-inch disks
- Rack 2: 14 storage enclosures; up to 336 2.5-inch or 168 3.5-inch disks
- Rack 3: 20 storage enclosures; up to 480 2.5-inch or 240 3.5-inch disks
- Rack 4: 20 storage enclosures; up to 480 2.5-inch or 240 3.5-inch disks

Cumulative:
- 1 rack: 10 enclosures; 240 2.5-inch disks (216 TB) or 120 3.5-inch disks (360 TB)
- 2 racks: 24 enclosures; 576 2.5-inch disks (518 TB) or 288 3.5-inch disks (864 TB)
- 3 racks: 44 enclosures; 1,056 2.5-inch disks (960 TB) or 528 3.5-inch disks (1,584 TB)
- 4 racks: 64 enclosures; 1,536 2.5-inch disks (1,382 TB) or 768 3.5-inch disks (2,304 TB)

Conclusions
The DS8000 family is designed to support the most demanding business applications with its exceptional all-around performance and data throughput. This, combined with its world-class business resiliency and encryption features, provides a unique combination of high availability, performance, and security. Its tremendous scalability, broad server support, and virtualization capabilities can help simplify the storage environment by consolidating multiple storage systems onto a single DS8000.



Author
Bob Kern - IBM Advanced Technical Support, Americas (bobkern@us.ibm.com). Mr. Kern is an IBM Master Inventor and Executive IT Architect. He has 37 years of experience in large system design and development and holds numerous patents on storage-related topics. For the last 28 years, Bob has specialized in disk device support and is a recognized expert in continuous availability, disaster recovery and real-time disk mirroring. He created the DFSMS/MVS subcomponents for the Asynchronous Operations Manager and the System Data Mover. Bob was named a Master Inventor by the IBM Systems & Technology Group in 2003 and is one of the inventors of the Concurrent Copy, PPRC, XRC, GDPS and zCDP solutions. He continues to focus on hardware/software disk storage architecture solutions for continuous availability and data replication. He is a member of the GDPS core architecture team and the GDPS Customer Design Council, with focus on storage-related topics.

Copyright IBM Corporation, 2012
