Overview
Key Findings
With the inclusion of solid-state drives in arrays, performance is no longer a differentiator in its
own right, but a scalability enabler that improves operational and financial efficiency by facilitating
storage consolidation.
NOTE 1
Z/OS SUPPORT
This research compares storage arrays that support z/OS mainframe environments with arrays that do not. z/OS support is taken into account only in the array ecosystem ratings, where it counts in favor of arrays that support z/OS; it does not count against arrays that lack it, and it has no influence on any other rating or on the rating weights used in the tool.
Recommendations
Move beyond technical attributes to include vendor service and support capabilities, as well as
acquisition and ownership costs, when making your high-end storage array buying decisions.
Don't default to the ingrained, dominant considerations of incumbency and vendor and product
reputation when choosing high-end storage solutions.
Vary the ratios of SSDs, Serial Attached SCSI and SATA hard-disk drives in the storage array, and
limit maximum configurations based on system performance to ensure that SLAs are met during
the planned service life of the system.
Select disk arrays based on the weighting and criteria created by your IT department to meet your
organizational or business objectives, rather than choosing those with the most features or
highest overall scores.
Gartner expects that, within the next four years, arrays using legacy software will need major
re-engineering to remain competitive against newer systems that achieve high-end status, as well as
against hybrid storage solutions that use solid-state technologies to improve performance, storage
efficiency and availability. In this research, the differences in aggregated scores among the arrays are
minimal. Therefore, clients are advised to look at the individual capabilities that are important to them,
rather than the overall score.
Because array differentiation has decreased, the real challenge of performing a successful storage
infrastructure upgrade is not designing an infrastructure upgrade that works, but designing one that
optimizes agility and minimizes total cost of ownership (TCO). Another practical consideration is that
choosing a suboptimal solution is likely to have only a moderate impact on deployment and TCO for the
following reasons:
Product advantages are usually short-lived and temporary. Gartner refers to this phenomenon as
the "compression of product differentiation."
Most clients report that differences in management and monitoring tools, as well as ecosystem
support among various vendors' offerings, are not enough to change staffing requirements.
Storage TCO, although growing, still accounts for less than 10% (6.5% in 2013) of most IT
budgets.
http://www.gartner.com/technology/reprints.do?id=1-1RO1Z8Z&ct=140310&st=sb
10/1/2015
Analysis
Introduction
The arrays evaluated in this research include scale-up, scale-out, hybrid and unified storage
architectures. Because these arrays have different availability characteristics, performance profiles,
scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against
operational needs, planned new application deployments, and forecast growth rates and asset
management strategies.
Midrange arrays with scale-out characteristics can satisfy the high-availability criteria when configured
with four or more controllers and multiple disk shelves. Whether these differences in availability are
enough to affect infrastructure design and operational procedures will vary by user environment, and
will also be influenced by other considerations, such as host system/capacity scaling, downtime costs,
lost opportunity costs and the maturity of the end-user change control procedures (e.g., hardware,
software, procedures and scripting), which directly affect availability.
Figure 4. Vendors' Product Scores for the Server Virtualization and VDI Use Case
Vendors
DataDirect Networks SFA12K
The SFA12KX, the newest member of the SFA12K family, increases SFA12K performance/throughput
via a hardware refresh and through software improvements. Like other members of the SFA12K family,
it remains a dual-controller array that, with the exception of an in-storage processing capability,
prioritizes scalability, performance/throughput and availability over value-added functionality, such as
local and remote replication, thin provisioning and autotiering. These priorities align better with the
needs of the high-end, high-performance computing (HPC) market than with general-purpose IT
environments. Further enhancing the appeal of the SFA12KX in large environments is dense packaging:
84 HDDs/4U or 5 PB/rack, and GridScaler and ExaScaler gateways that support parallel file systems,
based on IBM's GPFS or the open-source Lustre parallel file system.
The combination of high bandwidth and high areal densities has made the SFA12K a popular array in
the HPC, cloud, surveillance and media markets that prioritize automatic block alignment and
bandwidth over input/output operations per second (IOPS). The SFA12K's high areal density also makes
it an attractive repository for big data and inactive data, particularly as a backup target for backup
solutions that do their own compression and/or deduplication. Offsetting these strengths are limited
ecosystem support beyond parallel file systems and backup/restore products; the lack of vSphere API for
Array Integration (VAAI) support, which limits its appeal as VMware storage; the lack of zero-bit
detection, which limits its appeal with applications such as Microsoft Exchange and Oracle Database; and
quality of service (QoS) and security features that could limit its appeal in multitenancy environments.
EMC VMAX
The maturity of the VMAX 10K, 20K and 40K hardware, combined with the Enginuity software and wide
ecosystem support, provides proven reliability and stability. However, the need for backward
compatibility has complicated the development of new functions, such as data reduction. The VMAX3,
which became generally available on 26 September 2014, has not yet had time to be market-validated.
Even with new controllers, promised Hypermax software updates and a new InfiniBand internal
interconnect, mainframe support is not yet available, nor is the little-used Fibre Channel over Ethernet
(FCoE) protocol. Nevertheless, with new functions, such as built-in VPLEX, RecoverPoint replication,
virtual thin provisioning and more processing power, customers should move quickly to the VMAX3,
because it has the potential to develop further.
The new VMAX 100K, 200K and 400K arrays still lack independent benchmark results, which, in some
cases, leads users to delay deploying a new feature into production environments until the feature's
performance has been fully profiled and its impact on native performance is understood. The lack of
independent benchmark results has also led to misunderstandings regarding the configuration of
back-end SSDs and HDDs into redundant array of independent disks (RAID) groups, which have
required users to add capacity and use more-expensive 3D+1P RAID groups to achieve needed
performance levels, rather than larger, more-economical 7D+1P RAID groups.
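The capacity trade-off between the two RAID group geometries mentioned above can be sketched in a few lines. The drive size used here (800GB SSDs) is an assumption for illustration, not a figure from the research:

```python
# Illustrative sketch (assumed drive size): usable capacity and parity overhead
# of the 3D+1P versus 7D+1P RAID group geometries discussed above.
def raid_group(data_drives, parity_drives, drive_tb):
    usable = data_drives * drive_tb
    overhead = parity_drives / (data_drives + parity_drives)  # fraction of raw capacity spent on parity
    return usable, overhead

usable_3d1p, ovh_3d1p = raid_group(3, 1, 0.8)   # 3D+1P with 0.8 TB drives
usable_7d1p, ovh_7d1p = raid_group(7, 1, 0.8)   # 7D+1P with 0.8 TB drives

print(f"3D+1P: {usable_3d1p:.1f} TB usable, {ovh_3d1p:.1%} overhead")  # 2.4 TB, 25.0%
print(f"7D+1P: {usable_7d1p:.1f} TB usable, {ovh_7d1p:.1%} overhead")  # 5.6 TB, 12.5%
```

The 7D+1P group spends half as much of its raw capacity on parity, which is why it is the more economical choice when performance permits.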
EMC's expansion into software-defined storage (SDS; aka ViPR), network-based replication (aka
RecoverPoint) and network-based virtualization (aka VPLEX) suggests that new VMAX users should
evaluate the use of these products, in addition to VMAX-based features, when creating their storage
infrastructure and operational visions.
Fujitsu Eternus DX8700 S2
basis; therefore, customers do not need to spend more as they increase the capacity of the arrays. The
DX8700 S2 series was updated with a new software level that improves performance and QoS, which not
only manages latency and bandwidth, but also integrates with the DX8700 Automated Storage Tiering to
move data to the required storage tier to meet QoS targets. It is a scale-out array, providing up to eight
controllers.
The DX8700 S2 has offered massive array of idle disks (MAID), or disk spin-down, for years. Even
though this feature has been implemented successfully without any reported problems, it has not
gained broad market acceptance. The same Eternus SF management software is
used across the entire DX product line, from the entry level to the high end. This simplifies
manageability, migration and replication among Fujitsu storage arrays. Customer feedback is positive
concerning the performance, reliability, support and serviceability of the DX8700 S2, and Gartner
clients report that the DX8700 S2 RAID rebuild times are faster than comparable systems. The
management interface is geared toward storage experts, but is simplified in the Eternus SF V16,
thereby reducing training costs and improving storage administrator productivity. To enable workflow
integration with SDS platforms, Fujitsu is working closely with the OpenStack project.
HDS HUS VM
The Hitachi Data Systems (HDS) Hitachi Unified Storage (HUS) VM is an entry-level version of the
Virtual Storage Platform (VSP) series. Similar to its larger VSP siblings, it is built around Hitachi's crossbar switches, has the same functionality as the VSP, can replicate to HUS VM or VSP systems using
TrueCopy or Hitachi Universal Replicator (HUR), and uses the same management tools as the VSP.
Because it shares APIs with the VSP, it has the same ecosystem support; however, it does not scale to
the same storage capacity levels as the HDS VSP G1000. Similarly, it does not provide data reduction
features. Hardware reliability and microcode quality are good; this increases the appeal of its Universal
Volume Manager (UVM), which enables the HUS VM to virtualize third-party storage systems.
Hitachi Data Systems offers performance transparency with its arrays, with SPC-1 performance and
throughput benchmark results available. Client feedback indicates that the use of thin provisioning
generally improves performance and that autotiering has little to no impact on array performance.
Snapshots have a measurably negative, but entirely acceptable, impact on performance and
throughput. Offsetting these strengths are the lack of native Internet Small Computer System Interface
(iSCSI) and 10-Gigabit Ethernet (10GbE) support, which are particularly useful for remote replication, as
well as relatively slow integration with server virtualization, database, shareware and backup offerings.
Integration with the Hitachi NAS platform adds iSCSI, Common Internet File System (CIFS) and
Network File System (NFS) protocol support for users that need more than just Fibre Channel support.
HP XP7
Sourced from Hitachi Ltd. under joint technology and OEM agreements, the HP XP7 is the next
incremental evolution of the high-end, frame-based XP-Series that HP has been selling since 1999.
Engineered to be deployed in support of applications that require the highest levels of resiliency and
performance, the HP XP7 features increased capacity scalability and performance over its predecessor,
the HP XP P9500, while leveraging the broad array of proven HP-XP-series data management software.
Beyond the expected capacity and performance improvements, two notable enhancements are the new
Active-Active High Availability and Active-Active data mobility functions, which elevate storage system
and data center availability and provide nondisruptive, transparent application mobility among host
servers at the same or different sites. The HP XP7 shares a common technology base
with the Hitachi/HDS VSP G1000, and HP differentiates the XP7 in the areas of broader integration and
testing with the full HP portfolio ecosystem and the availability of Metro Cluster for HP Unix, as well as
by restricting the ability to replicate between XP7 and HDS VSPs.
Positioned in HP's traditional storage portfolio, the primary mission of the XP7 is to serve as an upgrade
platform to the XP-Series installed base, as well as to address opportunities involving IBM mainframe
and storage for HP NonStop infrastructures. Since HP acquired 3PAR, XP-Series revenue has declined
annually, as HP places more go-to-market weight behind the 3PAR StoreServ 10000 offering.
IBM DS8870
The DS8870 is a scale-up, two-node controller architecture that is based on, and dependent on, IBM's Power
server business. Because IBM owns the z/OS architecture, IBM has inherent cross-selling, product
integration and time-to-market advantages supporting new z/OS features, relative to its competitors.
Snapshot and replication capabilities are robust, extensive and relatively efficient, as shown by features
such as FlashCopy; synchronous, asynchronous three-site replication; and consistency groups that can
span arrays. The latest significant DS8870 updates include Easy Tier improvements, as well as a High
Performance Flash Enclosure, which eliminates earlier, SSD-related architectural inefficiencies and
boosts array performance. Even with the addition of the Flash Enclosure, however, the DS8870 is no
longer IBM's highest-performance system, and data reduction features are not available unless extra
SAN Volume Controller (SVC) devices are purchased in addition to the DS8870.
Overall, the DS8870 is a competitive offering. Ease-of-use improvements have been achieved by taking
the XIV management GUI and implementing it on the DS8870. However, customers report that the
new GUI still requires a more detailed administrative approach, and is not yet suited to the high-level
management provided by the XIV icon-based GUI. Due to the dual-controller design, major software
updates can disable one of the controllers for as long as an hour. These updates need to be planned,
because they can reduce the availability and performance of the system by as much as 50% during the
upgrade process. With muted traction in VMware and Microsoft infrastructures, IBM positions the
DS8870 as its primary enterprise storage platform to support z/OS and AIX infrastructures.
IBM XIV
The current XIV is in its third generation. The freedom from legacy dependencies is apparent from its
modern, easy-to-use, icon-based operational interface, and a scale-out distributed processing and RAID
protection scheme. Good performance and the XIV management interface are winning deals for IBM.
This generation enhances performance with the introduction of SSD and a faster InfiniBand interconnect
among the XIV nodes. The advantages of the XIV are simple administration and inclusive software
licenses, which make buying and upgrading the XIV simple, without hidden or additional storage
software license charges. The mirror-based RAID implementation yields a raw-to-usable capacity ratio
that is less efficient than traditional RAID 5/6 designs; therefore, usable scalability reaches only 325TB.
However, together with inclusive software licensing, XIV usable capacity is priced accordingly, so that
the price per TB is competitive in the market.
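The raw-to-usable gap described above follows directly from the redundancy scheme. A rough sketch, using standard parity-group geometries as assumptions (the research does not state which RAID 5/6 layouts it compares against):

```python
# Rough sketch (assumed geometries): usable fraction of raw capacity under
# mirroring versus parity RAID, which is why a mirrored design needs roughly
# twice the raw capacity of a 7D+1P layout for the same usable TB.
def usable_fraction(data, redundancy):
    return data / (data + redundancy)

mirror = usable_fraction(1, 1)    # RAID 1-style mirroring -> 0.50 usable
raid5  = usable_fraction(7, 1)    # 7D+1P RAID 5          -> 0.875 usable
raid6  = usable_fraction(6, 2)    # 6D+2P RAID 6          -> 0.75 usable

usable_tb = 325                   # XIV usable scalability cited above
print(f"raw TB needed (mirror): {usable_tb / mirror:.0f}")   # 650
print(f"raw TB needed (7D+1P): {usable_tb / raid5:.1f}")
```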
A new Hyper-Scale feature enables IBM to federate a number of XIV platforms to create a PB+ scale
infrastructure under the Hyper-Scale Manager to enable the administration of several XIV systems as
one. Positioned as IBM's primary high-end storage platform for VMware, Microsoft Hyper-V and cloud
infrastructure deployments, IBM has released several new and incremental XIV enhancements,
foremost of which are three-site mirroring, multitenancy and VMware vCloud Suite integration.
NetApp FAS8000
The high-end FAS series model numbers were changed from FAS6000 to FAS8000. The upgrade
included faster controllers and storage virtualization built into the system and enabled via a software
license. Because each FAS8000 HA node pair is a scale-up, dual-controller array, inclusion in this
Critical Capabilities research requires that the FAS8000 series be configured with at least four FAS8000
nodes managed by Clustered Data Ontap. This configuration supports a maximum of eight nodes for
deployment with storage area network (SAN) protocols and up to 24 nodes with NAS protocols.
Depending on drive capacity, Clustered Data Ontap can support a maximum raw capacity of 2.6PB to
23.0PB in a SAN infrastructure, and 7.8PB to 69.1PB in a NAS infrastructure.
The FAS system is no longer the flagship high-performance, low-latency storage array for NetApp
customers that value performance over all other criteria; they can now choose NetApp products such as
the FlashRay. Seamless scalability, nondisruptive upgrades, robust data service software, storage-efficiency
capabilities, flash-enhanced performance, unified block-and-file multiprotocol support,
multitenant support, ease of use and validated integration with leading independent software vendors
(ISVs) are key attributes of an FAS8000 configured with Clustered Data Ontap.
Oracle FS1-2
The hybrid FS1-2 series replaces the Oracle Pillar Axiom storage arrays and is the newest array family
in this research. Even though the new system has fewer SSD and HDD slots, scalability in terms of
capacity is increased by approximately 30% to a total of 2.9PB, which includes up to 912TB of SSD. The
design remains a scale-out architecture with the ability to cluster eight FS1-2 pairs together. The FS1
has an inclusive software licensing model, which makes upgrades simpler from a licensing perspective.
The software features included within this model are QoS Plus, automated tiered storage, thin
provisioning, support for up to 64 physical domains (multitenancy) and multiple block-and-file protocol
support. However, if replication is required, the Oracle MaxRep engine is a chargeable optional extra.
The MaxRep product provides synchronous and asynchronous replication, consistency groups and
multihop replication topologies. It can be used to replicate and, therefore, migrate older Axiom arrays
to newer FS1-2 arrays. Positioned to provide best-of-breed performance in an Oracle infrastructure, the
FS1-2 enables Hybrid Columnar Compression (HCC) to optimize Oracle Database performance, and
offers engineered integration with Oracle VM and Oracle's broad library of data management software.
However, the FS1 has yet to fully embrace integration with competing hypervisors from VMware and
Microsoft.
Context
Even as much of the storage array market is consolidating into one general-purpose market, Gartner
appreciates the entrenched usage and appeal of simple labels. Therefore, even though the terms
"midrange" and "high end" no longer accurately describe present array capabilities, user buying
behaviors or future market directions, Gartner has chosen to publish separate midrange and high-end
Critical Capabilities research (see Note 1). By doing so, Gartner can provide analyses of more arrays in
a potentially more traditional, client-friendly format.
snapshot and remote copy features that are not interoperable. By contrast, integrated or unified
storage implementations use the same primitives independent of protocol, which enable them to create
snapshots that span SAN and NAS storage, and dynamically allocate server cycles, bandwidth and
cache, based on QoS algorithms and/or policies.
Mapping the strengths and weaknesses of these different storage architectures to various use cases
should begin with an overview of each architecture's strengths and weakness, as well as an
understanding of workload requirements (see Table 1).
Table 1. Storage Architecture Strengths and Weaknesses

Scale-Up (mature architectures)
Strengths: Reliable; cost-competitive; large ecosystems; host connections and back-end capacity can be
upgraded independently
Weaknesses: Performance and bandwidth do not scale with capacity; limited compute power may result
in the use of efficiency and data protection features negatively affecting performance; electronics
failures and microcode updates may be high-impact events

Scale-Out
Strengths: May offer shorter RPOs over asynchronous distances

Hybrid
Strengths: Consistent performance experience with minimal tuning; excellent price/performance; low
environmental footprint

Unified
RAS
Reliability, availability and serviceability (RAS) is a design philosophy that consistently delivers high
availability by building systems with reliable components, derating components to increase their mean
time between failures, and designing systems and clocking to tolerate marginal components.
RAS also involves hardware and microcode designs that minimize the number of critical failure modes in
the system; serviceability features that enable nondisruptive microcode updates; diagnostics that
minimize human errors when troubleshooting the system; and nondisruptive repair activities.
User-visible features can include tolerance of multiple disk and/or node failures, fault isolation
techniques, built-in protection against data corruption, and other techniques (such as snapshots and
replication) to meet customers' RPOs and recovery time objectives (RTOs).
Performance
This collective term describes IOPS, bandwidth (MB/second) and response times (milliseconds per I/O)
visible to attached servers. In well-designed, balanced systems, the potential performance bottlenecks
are reached at roughly the same time when supporting various common workload profiles, so that no
single resource limits the system long before the others.
When comparing systems, users are reminded that performance is more a scalability enabler than a
differentiator in its own right.
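The three metrics defined above are linked by simple arithmetic: bandwidth is IOPS times block size, and (by Little's law) sustaining a given IOPS rate at a given response time implies a certain number of outstanding I/Os. The workload numbers below are assumptions chosen for illustration:

```python
# Back-of-the-envelope sketch (assumed workload numbers): relating IOPS,
# bandwidth (MB/s) and response time (ms per I/O).
iops = 100_000            # I/Os per second visible to attached servers
block_kb = 8              # OLTP-style block size (assumption)
resp_ms = 2.0             # response time per I/O (assumption)

bandwidth_mb_s = iops * block_kb / 1024        # bandwidth = IOPS x block size
outstanding_ios = iops * (resp_ms / 1000)      # Little's law: concurrency = rate x latency

print(f"bandwidth: {bandwidth_mb_s:.1f} MB/s")
print(f"outstanding I/Os: {outstanding_ios:.0f}")
```

The same IOPS figure at a 1MB block size would imply roughly 100GB/s of bandwidth, which is why bandwidth-oriented (HPC, media) and IOPS-oriented (OLTP) workloads stress arrays so differently.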
Snapshot and Replication
These features protect against and recover from data corruption problems caused by human and
software errors, and against technology and site failures, respectively. They are also useful in reducing
backup windows and minimizing the impact of backups on production workloads. Archiving benefits
from these features in the same way as backups.
Scalability
This refers to the ability of the storage system to grow not just capacity, but performance and host
connectivity. The concept of usable scalability links capacity growth and system performance to SLAs
and application needs.
Ecosystem
This refers to the ability of the platform to support third-party ISV applications, such as databases,
backup/archiving products and management tools, hypervisor and desktop virtualization offerings, and
various OSs.
Storage Efficiency
This refers to the ability of the platform to support storage-efficiency technologies, such as
compression, deduplication, thin provisioning and autotiering, to improve utilization rates, while
reducing storage acquisition and ownership costs.
Use Cases
Overall
The Overall use case is a generalized usage scenario; it does not represent the ways specific users will
utilize or deploy technologies or services in their enterprises.
Consolidation
This simplifies storage management and disaster recovery, and improves economies of scale by
consolidating multiple, dissimilar storage systems into fewer, larger systems.
RAS, performance, scalability, and multitenancy and security are heavily weighted selection criteria,
because the system becomes a shared resource, which magnifies the effects of outages and
performance bottlenecks.
OLTP
Online transaction processing (OLTP) is affiliated with business-critical applications, such as database
management systems.
These require 24/7 availability and subsecond transaction response times. Hence, the greatest
emphasis on RAS and performance features, followed by snapshots and replication, which enable rapid
recovery from data corruption problems and technology or site failure. Manageability, scalability and
storage efficiency are important, because they enable the storage system to scale with data growth,
while staying within budget constraints.
Analytics
This applies to storage consumed by big data applications using map/reduce technologies.
It also involves all analytic applications that are packaged, or provide business intelligence (BI)
capabilities for a particular domain or business problem (see definition in "Hype Cycle for Analytic
Applications, 2013").
Cloud
This applies to storage arrays used in private, hybrid and public cloud infrastructures, and how they
address specific cost, scale, manageability and performance needs. Hence, storage efficiency and
resiliency are important selection considerations, and are highly weighted.
Inclusion Criteria
This research evaluates the high-end, general-purpose storage systems supporting the use cases
enumerated in Table 2.
Table 2. Weighting of Critical Capabilities per Use Case

Critical Capability         Overall  Consolidation  OLTP  Server Virt. and VDI  Analytics  Cloud
Manageability                 13%        12%         10%          13%              15%       16%
RAS                           17%        18%         20%          14%              15%       15%
Performance                   16%         5%         25%          20%              20%       10%
Snapshot and Replication      10%         5%         10%          12%              15%       10%
Scalability                   13%        15%         15%           9%              10%       15%
Ecosystem                      8%         8%          5%          10%               7%        9%
Multitenancy and Security     11%        18%          5%          10%               8%       15%
Storage Efficiency            12%        19%         10%          12%              10%       10%
Total                        100%       100%        100%         100%             100%      100%

As of November 2014
Table 3. Product/Service Ratings on Critical Capabilities
(Mgmt = Manageability; Perf = Performance; Snap/Repl = Snapshot and Replication; Scal = Scalability;
Ecosys = Ecosystem; MT/Sec = Multitenancy and Security; Stor Eff = Storage Efficiency)

Product                     Mgmt  RAS  Perf  Snap/Repl  Scal  Ecosys  MT/Sec  Stor Eff
DataDirect Networks SFA12K  4.0   3.7  4.5     1.0      4.5    2.0     3.3      3.2
EMC VMAX                    4.2   4.3  3.8     4.0      4.3    4.5     3.7      3.5
Fujitsu Eternus DX8700 S2   3.8   4.2  4.2     4.0      4.5    3.2     4.0      3.5
HDS HUS VM                  4.0   4.3  3.7     4.2      3.3    4.0     4.0      3.5
HDS VSP G1000               4.0   4.5  4.3     4.2      4.5    4.0     4.2      3.5
HP 3PAR StoreServ 10000     4.5   3.7  4.0     4.0      4.0    4.0     4.0      4.2
HP XP7                      4.0   4.5  4.3     4.2      4.5    4.0     4.2      3.5
Huawei OceanStor 18000      3.5   4.2  4.0     4.0      4.0    3.3     4.0      3.3
IBM DS8870                  4.0   4.2  4.0     4.0      3.8    3.5     4.0      3.7

As of November 2014
Table 4 shows the product/service scores for each use case. The scores, which are generated by
multiplying the use case weightings by the product/service ratings, summarize how well the critical
capabilities are met for each use case.
Table 4. Product Scores in Use Cases

Product                     Overall  Consolidation  OLTP  Server Virt. and VDI  Analytics  Cloud
DataDirect Networks SFA12K   3.46       3.46        3.63         3.38             3.38      3.42
EMC VMAX                     4.03       4.00        4.04         4.02             4.03      4.05
Fujitsu Eternus DX8700 S2    3.98       3.94        4.06         3.95             3.98      3.97
HDS HUS VM                   3.87       3.85        3.85         3.88             3.90      3.88
HDS VSP G1000                4.18       4.13        4.23         4.16             4.18      4.18
HP 3PAR StoreServ 10000      4.04       4.04        4.01         4.05             4.05      4.06
HP XP7                       4.18       4.13        4.23         4.16             4.18      4.18
Huawei OceanStor 18000       3.83       3.79        3.89         3.81             3.84      3.82
IBM DS8870                   3.93       3.91        3.96         3.92             3.95      3.93

As of November 2014
To determine an overall score for each product/service in the use cases, multiply the ratings in Table 3
by the weightings shown in Table 2.
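The calculation described above can be sketched in a few lines. As a check, applying the Overall-column weightings from Table 2 to the DataDirect Networks SFA12K ratings from Table 3 reproduces its Overall score in Table 4:

```python
# Sketch of the score calculation described above: each use-case score is the
# weighted sum of a product's critical-capability ratings.
overall_weights = {
    "Manageability": 0.13, "RAS": 0.17, "Performance": 0.16,
    "Snapshot and Replication": 0.10, "Scalability": 0.13,
    "Ecosystem": 0.08, "Multitenancy and Security": 0.11,
    "Storage Efficiency": 0.12,
}
sfa12k_ratings = {
    "Manageability": 4.0, "RAS": 3.7, "Performance": 4.5,
    "Snapshot and Replication": 1.0, "Scalability": 4.5,
    "Ecosystem": 2.0, "Multitenancy and Security": 3.3,
    "Storage Efficiency": 3.2,
}

# Weighted sum across the eight capabilities.
score = sum(overall_weights[c] * sfa12k_ratings[c] for c in overall_weights)
print(round(score, 2))  # 3.46, matching the SFA12K Overall score in Table 4
```

Repeating the sum with each use case's weighting column yields the remaining rows of Table 4.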
© 2014 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced
or distributed in any form without Gartner's prior written permission. If you are authorized to access this publication, your use of it is subject to the Usage Guidelines for
Gartner Services posted on gartner.com. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all
warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This
publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions expressed herein are subject to
change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research
should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered
in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research
organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see
"Guiding Principles on Independence and Objectivity."