Contents
Revision history..........................................................................................................................................4
Introduction................................................................................................................................................. 5
System overview.........................................................................................................................................7
System architecture and components.................................................................................................... 7
Base configurations and scaling.............................................................................................................9
Connectivity overview...........................................................................................................................11
Segregated network architecture................................................................................................... 12
Unified network architecture.......................................................................................................... 17
Storage layer............................................................................................................................................. 30
Storage overview..................................................................................................................................30
EMC VNX series storage arrays.......................................................................................................... 30
Replication............................................................................................................................................32
Scaling up storage resources...............................................................................................................32
Storage features support......................................................................................................................35
Network layer............................................................................................................................................ 37
Network overview................................................................................................................................. 37
IP network components........................................................................................................................37
Port utilization.......................................................................................................................................38
Cisco Nexus 5548UP Switch - segregated networking..................................................................39
Cisco Nexus 5596UP Switch - segregated networking..................................................................40
Cisco Nexus 5548UP Switch – unified networking........................................................................ 40
Cisco Nexus 5596UP - unified networking.....................................................................................42
Cisco Nexus 9396PX Switch - segregated networking..................................................................43
Storage switching components............................................................................................................ 44
Virtualization layer....................................................................................................................................46
Virtualization overview..........................................................................................................................46
© 2013-2015 VCE Company, LLC.
All Rights Reserved.
Management..............................................................................................................................................50
Management components overview.....................................................................................................50
Management hardware components....................................................................................................50
Management software components..................................................................................................... 51
Management network connectivity....................................................................................................... 52
System infrastructure...............................................................................................................................57
VCE Systems descriptions................................................................................................................... 57
Cabinets overview................................................................................................................................ 58
Intelligent Physical Infrastructure appliance......................................................................................... 58
Power options.......................................................................................................................................58
Configuration descriptions...................................................................................................................... 60
VCE Systems with EMC VNX8000...................................................................................................... 60
VCE Systems with EMC VNX7600...................................................................................................... 63
VCE Systems with EMC VNX5800...................................................................................................... 66
VCE Systems with EMC VNX5600...................................................................................................... 69
VCE Systems with EMC VNX5400...................................................................................................... 72
Sample configurations............................................................................................................................. 75
Sample VCE System with EMC VNX8000........................................................................................... 75
Sample VCE System with EMC VNX5800........................................................................................... 81
Sample VCE System with EMC VNX5800 (ACI ready)........................................................................86
Additional references............................................................................................................................... 92
Virtualization components.................................................................................................................... 92
Compute components.......................................................................................................................... 92
Network components............................................................................................................................93
Storage components............................................................................................................................ 93
Revision history
Date | VCE System | Document revision | Description of changes
August 2015 | Gen 3.3 | 3.8 | Updated to include the VxBlock System 340. Added support for VMware vSphere 6.0 with VMware VDS on the VxBlock System and for existing Vblock Systems. Added information on the Intelligent Physical Infrastructure (IPI) appliance.
February 2015 | Gen 3.2 | 3.7 | Added support for the Cisco B200 M4 blade.
September 2014 | Gen 3.2 | 3.5 | Modified elevations and removed aggregate section.
July 2014 | Gen 3.2 | 3.4 | Added support for VMware VDS.
May 2014 | Gen 3.2 | 3.3 | Updated for the Cisco Nexus 9396 Switch and 1500 drives for the EMC VNX8000. Added support for VMware vSphere 5.5.
January 2014 | Gen 3.1 | 3.2 | Updated elevations for AMP-2 reference.
November 2013 | Gen 3.1 | 3.1 | Updated network connectivity management illustration.
Introduction
This document describes the high-level design of the VCE System. This document also describes the
hardware and software components that VCE includes in the VCE System.
In this document, the Vblock System and VxBlock System are referred to as VCE Systems.
The VCE Glossary provides terms, definitions, and acronyms that are related to VCE.
To suggest documentation changes and provide feedback on this book, send an e-mail to
docfeedback@vce.com. Include the name of the topic to which your feedback applies.
Accessing VCE documentation
Role | Resource
Customer | support.vce.com (a valid username and password are required; click VCE Download Center to access the technical documentation)
System overview
VCE Systems provide:

• Optimized, fast delivery configurations based on the most commonly purchased components
• Standardized cabinets with multiple North American and international power solutions
• Support for multiple features of the EMC operating environment for EMC VNX arrays
• Granular, but optimized compute and storage growth by adding predefined kits and packs
• Unified network architecture, which provides the option to leverage Cisco Nexus switches to support IP and SAN without the use of Cisco MDS switches
VCE Systems contain the following key hardware and software components:
Resource | Components
Storage | EMC VNX storage array (5400, 5600, 5800, 7600, 8000) running the VNX Operating Environment; (optional) EMC unified storage (NAS)
VCE Systems have different scale points based on compute and storage options. VCE Systems can support block and/or unified storage protocols.
The VCE Release Certification Matrix provides a list of the certified versions of components for VCE
Systems. For information about VCE System management, refer to the VCE Vision™ Intelligent
Operations Technical Overview.
The VCE Integrated Data Protection Guide provides information about available data protection solutions.
Within the base configuration, the following hardware aspects can be customized:
Compute blades: Cisco UCS B-Series blade types include all supported VCE blade configurations.

Edge servers (with optional VMware NSX): Four to six Cisco UCS B-Series Blade Servers, including the B200 M4 with VIC 1340 and VIC 1380. For more information, see the VCE VxBlock™ Systems for VMware NSX Architecture Overview.

Storage hardware: Drive flexibility for up to three tiers of storage per pool, drive quantities in each tier, the RAID protection for each pool, and the number of disk array enclosures (DAEs).

Storage: EMC VNX storage, block only or unified (SAN and NAS)
Supported drive types:

Tier 0: 100/200 GB SLC SSD; 100/200/400 GB eMLC SSD
Tier 1: 300/600 GB 15K SAS; 600/900 GB 10K SAS
Tier 2: 1/2/3/4 TB 7.2K NL-SAS

Supported RAID types:

Tier 0: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1)
Tier 1: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2)*, (14+2)**
Tier 2: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2)*, (14+2)**

*File virtual pool only
**Block virtual pool only
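The parity and mirroring overheads implied by these RAID layouts follow directly from the data+parity notation. The sketch below is illustrative only; it is not VCE sizing guidance, and the layout names are simply those listed above:

```python
# Estimate the usable capacity fraction for the RAID layouts listed above.
# Notation "d+p" means d data drives plus p parity/mirror drives per RAID group.

def usable_fraction(data_drives: int, overhead_drives: int) -> float:
    """Fraction of raw capacity available for data in one RAID group."""
    return data_drives / (data_drives + overhead_drives)

layouts = {
    "RAID 1/0 (4+4)": usable_fraction(4, 4),   # mirrored: 50% usable
    "RAID 5 (4+1)":   usable_fraction(4, 1),
    "RAID 5 (8+1)":   usable_fraction(8, 1),
    "RAID 6 (6+2)":   usable_fraction(6, 2),
    "RAID 6 (14+2)":  usable_fraction(14, 2),
}

for name, frac in layouts.items():
    print(f"{name}: {frac:.0%} usable")
```

Wider stripes (for example, 8+1 versus 4+1) trade higher usable capacity for longer rebuild exposure, which is one reason each tier lists several options.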
Management hardware options: The second generation of the Advanced Management Platform (AMP-2) centralizes management of VCE System components. AMP-2 offers minimum physical, redundant physical, and highly available models. The standard option for this platform is the minimum physical model. The optional VMware NSX feature requires AMP-2HA Performance.

Data Mover enclosure (DME) packs: Available on all VCE Systems. Additional enclosure packs can be added for additional X-Blades on VCE Systems with EMC VNX8000, EMC VNX7600, and EMC VNX5800.
Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the
compute and storage arrays in the system. All components have N+N or N+1 redundancy.
These resources can be scaled up as necessary to meet increasingly stringent requirements. The
maximum supported configuration differs from model to model. To scale up compute resources, add
blade packs and chassis activation kits.
To scale up storage resources, add RAID packs, DME packs, and DAE packs. Optionally, expansion
cabinets with additional resources can be added.
VCE Systems are designed to keep hardware changes to a minimum if the storage protocol is changed after installation (for example, from block storage to unified storage). Cabinet space can be reserved for all components that are needed for each storage configuration (Cisco MDS switches, X-Blades, and so on), ensuring that network and power cabling capacity for these components is in place.
Connectivity overview
This topic describes the components and interconnectivity within the VCE Systems.
These components and interconnectivity are conceptually subdivided into the following layers:
Layer Description
Compute Contains the components that provide the computing power within a VCE System. The Cisco UCS blade
servers, chassis, and fabric interconnects belong to this layer.
Network Contains the components that provide switching between the compute and storage layers within a VCE
System, and between a VCE System and the network. Cisco MDS switches and the Cisco Nexus
switches belong to this layer.
In the segregated network architecture, LAN and SAN connectivity is segregated into separate switches
within the VCE System. LAN switching uses the Cisco Nexus switches. SAN switching uses the Cisco
MDS 9148 Multilayer Fabric Switch.
In the unified network architecture, LAN and SAN switching is consolidated onto a single network device
(Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches) within the VCE System. This removes
the need for a Cisco MDS SAN switch.
Note: The optional VMware NSX feature uses the Cisco Nexus 9396 switches for LAN switching. For
more information, see the VCE VxBlock™ Systems for VMware NSX Architecture Overview.
In addition, all management interfaces for infrastructure power outlet unit (POU), network, storage, and
compute devices are connected to redundant Cisco Nexus 3048 switches. These switches provide
connectivity for Advanced Management Platform (AMP-2) and egress points into the management stacks
for the VCE Systems components.
Segregated network architecture
The following illustration shows a block-only storage configuration for VCE Systems with the X-Blades
absent from the cabinets. However, space can be reserved in the cabinets for these components
(including optional EMC RecoverPoint Appliances). This design makes it easier to add the components
later if there is an upgrade to unified storage.
In all VCE Systems configurations, the VMware vSphere ESXi blades boot over the Fibre Channel (FC)
SAN. In block-only configurations, block storage devices (boot and data) are presented over FC through
the SAN. In a unified storage configuration, the boot devices are presented over FC and data devices can be either block devices (SAN) or NFS data stores (NAS). In a file-only configuration, the boot devices are presented over FC and data devices are presented through NFS shares. Storage can also be presented directly to the VMs as CIFS shares.
The following illustration shows the components (highlighted in a red, dotted line) that are leveraged to
support SAN booting in VCE Systems:
In a unified storage configuration, the storage processors also connect to X-Blades over FC. The X-
Blades connect to the Cisco Nexus switches within the network layer over 10 GbE, as shown in the
following illustration:
Unified network architecture
With unified network architecture, access to both block and file services on the EMC VNX is provided
using the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch. The Cisco Nexus 9396PX Switch
is not supported in unified network architecture.
In this example, there are no X-Blades providing NAS capabilities. However, space can be reserved in the cabinets for these components (including the optional EMC RecoverPoint Appliance). This design makes it easier to add the components later if there is an upgrade to unified storage.
In a unified storage configuration for block and file, the storage processors also connect to X-Blades over
Fibre Channel (FC). The X-Blades connect to the Cisco Nexus switches within the network layer over 10
GbE.
In all VCE Systems configurations, VMware vSphere ESXi blades boot over the FC SAN. In block-only
configurations, block storage devices (boot and data) are presented over FC through the Cisco Nexus
unified switch. In a unified storage configuration, the boot devices are presented over FC and data
devices can be either block devices (SAN) or presented as NFS data stores (NAS). In a file-only
configuration, boot devices are presented over FC, and data devices over NFS shares. The remainder of
the storage can be presented either as NFS or as VMFS data stores. Storage can also be presented
directly to the VMs as CIFS shares.
The following illustration shows the components that are leveraged to support SAN booting in VCE
Systems:
In a unified storage configuration, the storage processors also connect to X-Blades over FC. The X-
Blades connect to the Cisco Nexus switches within the network layer over 10 GbE.
The following illustration shows a unified storage configuration for VCE Systems:
Compute layer
Compute overview
This topic provides an overview of the compute components for the VCE Systems.
Cisco UCS B-Series Blades installed in the Cisco UCS chassis provide computing power within VCE
Systems.
Fabric extenders (FEX) within the Cisco UCS chassis connect to Cisco fabric interconnects over
converged Ethernet. Up to eight 10 GbE ports on each Cisco UCS fabric extender connect northbound to
the fabric interconnects, regardless of the number of blades in the chassis. These connections carry IP
and storage traffic.
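The uplink arithmetic above is straightforward: each FEX uplink is 10 GbE, and a chassis has two fabric extenders. The sketch below computes aggregate northbound bandwidth and a simple blade-to-uplink oversubscription ratio; the per-blade bandwidth figure is an assumed example for illustration, not a VCE specification:

```python
# Aggregate northbound bandwidth from one Cisco UCS chassis via its FEX uplinks.

UPLINK_GBPS = 10          # each FEX uplink is 10 GbE (per the text above)

def chassis_uplink_gbps(links_per_fex: int, fex_per_chassis: int = 2) -> int:
    """Total northbound bandwidth for a chassis with redundant fabric extenders."""
    return links_per_fex * fex_per_chassis * UPLINK_GBPS

def oversubscription(blades: int, blade_gbps: int, links_per_fex: int) -> float:
    """Ratio of potential blade traffic to available uplink bandwidth.

    blade_gbps is an assumed per-blade figure, not from this document.
    """
    return (blades * blade_gbps) / chassis_uplink_gbps(links_per_fex)

print(chassis_uplink_gbps(8))        # 8-link IOM: 160 Gb/s per chassis
print(oversubscription(8, 20, 4))    # 8 blades at an assumed 20 Gb/s, 4-link IOMs: 2.0
```

This is why the 2-link, 4-link, and 8-link IOM options matter: doubling the links per fabric extender halves the oversubscription ratio for the same blade population.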
VCE has reserved some of these ports to connect to upstream access switches within the VCE Systems. These connections are formed into a port channel to the Cisco Nexus switch and carry IP traffic destined for the external network over 10 GbE links. In a unified storage configuration, this port channel can also carry NAS traffic to the X-Blades within the storage layer.
Each fabric interconnect also has multiple ports reserved by VCE for Fibre Channel (FC) ports. These
ports connect to Cisco SAN switches. These connections carry FC traffic between the compute layer and
the storage layer. In a unified storage configuration, port channels carry IP traffic to the X-Blades for NAS
connectivity. For SAN connectivity, SAN port channels carrying FC traffic are configured between the
fabric interconnects and upstream Cisco MDS or Cisco Nexus switches.
Optimized for virtualization, the Cisco UCS integrates a low-latency, lossless 10 Gb Ethernet unified network fabric with enterprise-class, x86-based servers (the Cisco B-Series). Benefits include reduced cabling.
The Vblock System Blade Pack Reference provides a list of supported Cisco UCS blades.
The Cisco UCS fabric interconnects provide the management and communication backbone for the
blades and chassis. The Cisco UCS fabric interconnects provide LAN and SAN connectivity for all blades
within their domain. Cisco UCS fabric interconnects are used for boot functions and offer line-rate, low-
latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions.
VCE Systems use Cisco UCS 6248UP Fabric Interconnects and Cisco UCS 6296UP Fabric Interconnects. The Cisco UCS 6248UP Fabric Interconnects provide single-domain uplinks of 2, 4, or 8 between the fabric interconnects and the chassis; the Cisco UCS 6296UP Fabric Interconnects provide single-domain uplinks of 4 or 8.
The optional VMware NSX feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the
port count needed for VMware NSX external connectivity (edges). For more information, see the VCE
VxBlock™ Systems for VMware NSX Architecture Overview.
The Cisco Trusted Platform Module (TPM) is a computer chip that securely stores artifacts such as passwords, certificates, and encryption keys that authenticate the VCE System. Cisco TPM provides authentication and attestation services that enable safer computing in all environments.
Cisco TPM is available by default within the VCE System as a component within the Cisco UCS B-Series
M3 Blade Servers and Cisco UCS B-Series M4 Blade Servers, and is shipped disabled. The Vblock
System Blade Pack Reference contains additional information about Cisco TPM.
VCE supports only the Cisco TPM hardware. VCE does not support the Cisco TPM functionality. Because
making effective use of the Cisco TPM involves the use of a software stack from a vendor with significant
experience in trusted computing, VCE defers to the software stack vendor for configuration and
operational considerations relating to the Cisco TPMs.
Related information
www.cisco.com
To scale up compute resources, you can add uplinks, blade packs, and chassis activation kits to enhance
Ethernet and Fibre Channel (FC) bandwidth either when VCE Systems are built, or after they are
deployed.
The following table shows the maximum chassis and blade quantities that are supported for VCE
Systems with EMC VNX5400, VCE Systems with EMC VNX5600, VCE Systems with EMC VNX5800,
VCE Systems with EMC VNX7600, and VCE Systems with EMC VNX8000:
VCE Systems with | 2-link Cisco UCS 6248UP, Cisco UCS 2204XP IOM | 4-link Cisco UCS 6248UP, Cisco UCS 2204XP IOM | 4-link Cisco UCS 6296UP, Cisco UCS 2204XP IOM | 8-link Cisco UCS 6248UP, Cisco UCS 2208XP IOM | 8-link Cisco UCS 6296UP, Cisco UCS 2208XP IOM
For VCE Systems with EMC VNX5600, EMC VNX5800, EMC VNX7600, and EMC VNX8000, the
Ethernet I/O bandwidth enhancement increases the number of Ethernet uplinks from the Cisco UCS
6296UP fabric interconnects to the network layer to reduce oversubscription. To enhance Ethernet I/O bandwidth performance, increase the uplinks between the Cisco UCS 6296UP fabric interconnects and the Cisco Nexus 5548UP Switch for segregated networking, or the Cisco Nexus 5596UP Switch for unified networking.
FC I/O bandwidth enhancement increases the number of FC links between the Cisco UCS 6248UP fabric
interconnects or Cisco UCS 6296UP fabric interconnects and the SAN switch, and from the SAN switch to
the EMC VNX storage array. The FC I/O bandwidth enhancement feature is supported on VCE Systems
with EMC VNX5800, EMC VNX7600, and EMC VNX8000.
Blade packs
Cisco UCS blades are sold in packs of two identical Cisco UCS blades. The base configuration of each VCE System includes two blade packs. The maximum number of blade packs
depends on the type of VCE System. Each blade type requires a minimum of two blade packs as a base configuration and can then be increased in single blade pack increments.
Each blade pack is added along with the following license packs:
• Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only)
• EMC PowerPath/VE
Note: License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switches, and EMC
PowerPath are not available for bare metal blades.
The Vblock System Blade Pack Reference provides a list of supported Cisco UCS blades.
The power supplies and fabric extenders for all chassis are populated and cabled, and all required Twinax cables and transceivers are included.
As more blades are added and additional chassis are required, chassis activation kits (CAK) are
automatically added to an order. The kit contains software licenses to enable additional fabric
interconnect ports.
Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis
activation kits can be added up-front to allow for flexibility in the field or to initially spread the blades
across a larger number of chassis.
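The ordering rule above (blades sold in packs of two, port licenses for only the minimum number of chassis) can be sketched as follows. The eight-blades-per-chassis figure reflects a standard Cisco UCS 5108 chassis with half-width blades and is an assumption here, not a number stated in this document:

```python
import math

BLADES_PER_CHASSIS = 8    # half-width blades per Cisco UCS 5108 chassis (assumed)
BLADES_PER_PACK = 2       # blade packs always contain two identical blades

def min_chassis(blade_packs: int) -> int:
    """Minimum number of chassis (and chassis activation kits) for the ordered packs."""
    blades = blade_packs * BLADES_PER_PACK
    return math.ceil(blades / BLADES_PER_CHASSIS)

print(min_chassis(2))   # base configuration: 4 blades -> 1 chassis
print(min_chassis(9))   # 18 blades -> 3 chassis
```

Ordering extra activation kits up front, as the text notes, simply raises the chassis count above this minimum so blades can be spread more thinly per chassis.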
While it is possible for VCE Systems to support bare metal workloads (with the caveats noted below), due to the nature of bare metal deployments, VCE is able to provide only "reasonable effort" support for systems that comply with the following requirements:
• VCE Systems contain only VCE published, tested, and validated hardware and software
components. The VCE Release Certification Matrix provides a list of the certified versions of
components for VCE Systems.
• The operating systems used on bare-metal deployments for compute and storage components
must comply with the published hardware and software compatibility guides from Cisco and EMC.
• For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by VCE. VCE support is provided only for VMware hypervisors.
VCE reasonable effort support includes VCE acceptance of customer calls, a determination of whether a VCE System is operating correctly, and assistance in problem resolution to the extent possible.
VCE is unable to reproduce problems or provide support on the operating systems and applications
installed on bare metal deployments. In addition, VCE does not provide updates to or test those operating
systems or applications. The OEM support vendor should be contacted directly for issues and patches
related to those operating systems and applications.
Upstream disjoint layer 2 networks allow two or more Ethernet clouds that never connect to be accessed
by servers or VMs located in the same Cisco UCS domain.
The following illustration provides an example implementation of disjoint layer 2 networking into a Cisco
UCS domain:
Virtual port channels (VPCs) 101 and 102 are production uplinks that connect to the network layer of VCE
Systems. Virtual port channels 105 and 106 are external uplinks that connect to other switches.
If you use Ethernet performance port channels (PC 103 and 104 by default), port channels 101 through
104 are assigned to the same VLANs.
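Disjoint layer 2 holds only if no VLAN is carried on uplinks into both Ethernet clouds. That invariant can be sketched as a simple check; the port channel names echo the example above, but the VLAN numbers are hypothetical:

```python
# Verify that two groups of uplink port channels carry non-overlapping VLANs,
# the core requirement for disjoint layer 2 in a Cisco UCS domain.

def vlans_disjoint(uplink_vlans: dict[str, set[int]],
                   cloud_a: list[str], cloud_b: list[str]) -> bool:
    """True if no VLAN appears on uplinks of both clouds."""
    a = set().union(*(uplink_vlans[pc] for pc in cloud_a))
    b = set().union(*(uplink_vlans[pc] for pc in cloud_b))
    return a.isdisjoint(b)

# Hypothetical assignment: production uplinks vs. external uplinks.
vlans = {
    "Po101": {100, 101}, "Po102": {100, 101},   # production (cloud A)
    "Po105": {200, 201}, "Po106": {200, 201},   # external (cloud B)
}
print(vlans_disjoint(vlans, ["Po101", "Po102"], ["Po105", "Po106"]))  # True
```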
Storage layer
Storage overview
The EMC VNX series comprises fourth-generation storage platforms that deliver industry-leading capabilities. They offer a unique combination of flexible, scalable hardware design and advanced software capabilities that enable them to meet the diverse needs of today's organizations.
EMC VNX series platforms support block storage and unified storage. The platforms are optimized for
VMware virtualized applications. They feature flash drives for extendable cache and high performance in
the virtual storage pools. Automation features include self-optimized storage tiering and application-centric replication.
Regardless of the storage protocol implemented at startup (block or unified), VCE Systems can include
cabinet space, cabling, and power to support the hardware for all of these storage protocols. This
arrangement makes it easier to move from block storage to unified storage with minimal hardware
changes.
The following EMC VNX models are available:

• EMC VNX5400
• EMC VNX5600
• EMC VNX5800
• EMC VNX7600
• EMC VNX8000
Note: In all VCE Systems, all EMC VNX components are installed in VCE cabinets in a VCE-specific layout.
In the EMC VNX series storage arrays, drives connect to dual storage processors (SPs) over four-lane 6 Gb/s serial attached SCSI (SAS) buses. Each storage processor connects to one side of two, four, eight, or sixteen (depending on the VCE System) redundant pairs of four-lane 6 Gb/s SAS buses, providing continuous drive access to hosts in the event of a storage processor or bus fault. Fibre Channel (FC) expansion cards within the storage processors connect to the Cisco MDS switches in the network layer over FC.
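Each back-end bus described above is four SAS lanes at 6 Gb/s, so per-bus and aggregate raw bandwidth follow directly. A quick arithmetic sketch (the 16-bus count comes from the EMC VNX8000 description later in this section):

```python
# Raw back-end SAS bandwidth implied by four-lane 6 Gb/s buses.

LANES_PER_BUS = 4
GBPS_PER_LANE = 6

def bus_gbps() -> int:
    """Raw bandwidth of one four-lane 6 Gb/s SAS bus."""
    return LANES_PER_BUS * GBPS_PER_LANE

def backend_gbps(buses: int) -> int:
    """Aggregate raw back-end SAS bandwidth for a given number of buses."""
    return buses * bus_gbps()

print(bus_gbps())        # 24 Gb/s per bus
print(backend_gbps(16))  # EMC VNX8000 maximum: 16 buses -> 384 Gb/s raw
```

These are raw line rates before SAS encoding and protocol overhead, so delivered throughput is lower in practice.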
The storage layer in the VCE System consists of an EMC VNX storage array. Each EMC VNX model
contains some or all of the following components:
• The disk processor enclosure (DPE) houses the storage processors for the EMC VNX5400, EMC VNX5600, EMC VNX5800, and EMC VNX7600. The DPE provides slots for two storage processors, two battery backup units (BBU), and an integrated 25-slot disk array enclosure (DAE) for 2.5" drives. Each SP provides support for up to five SLICs (small I/O cards).

• The EMC VNX8000 uses a storage processor enclosure (SPE) and standby power supplies (SPS). The SPE is a 4U enclosure with slots for two storage processors, each supporting up to 11 SLICs. Each EMC VNX8000 includes two 2U SPSs that power the SPE and the vault DAE. Each SPS contains two Li-ion batteries that require special shipping considerations.

• X-Blades (also known as Data Movers) provide file-level storage capabilities. These are housed in Data Mover enclosures (DME). Each X-Blade connects to the network switches using 10G links (either Twinax or 10G fibre).

• DAEs contain the individual disk drives and are available in multiple configurations.
EMC VNX5400
The EMC VNX5400 is a DPE-based array with two back-end SAS buses, up to four slots for front-end
connectivity, and support for up to 250 drives. It is available in both unified (NAS) and block
configurations.
EMC VNX5600
The EMC VNX5600 is a DPE-based array with up to six back-end SAS buses, up to five slots for front-
end connectivity, and support for up to 500 drives. It is available in both unified (NAS) and block
configurations.
EMC VNX5800
The EMC VNX5800 is a DPE-based array with up to six back-end SAS buses, up to five slots for front-
end connectivity, and support for up to 750 drives. It is available in a block configuration.
EMC VNX7600
The EMC VNX7600 is a DPE-based array with six back-end SAS buses, up to four slots for front-end
connectivity, and support for up to 1000 drives. It is available in a block configuration.
EMC VNX8000
The EMC VNX8000 comes in a different form factor from the other EMC VNX models. The EMC
VNX8000 is an SPE-based model with up to 16 back-end SAS buses, up to nine slots for front-end
connectivity, and support for up to 1500 drives. It is available in a block configuration.
Replication
For block storage configurations, VCE Systems can be upgraded to include EMC RecoverPoint. This replication technology provides continuous data protection and continuous remote replication for on-demand protection and recovery to any point in time. EMC RecoverPoint advanced capabilities include policy-based management, application integration, and bandwidth reduction. EMC RecoverPoint is included in the EMC Local Protection Suite and EMC Remote Protection Suite.
To implement EMC RecoverPoint within a VCE System, add two or more EMC RecoverPoint Appliances
(RPA) in a cluster to the VCE System. This cluster can accommodate approximately 80 MB/s sustained
throughput through each EMC RPA.
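As a rough sizing sketch (not a VCE or EMC tool), the approximately 80 MB/s per-appliance figure above implies the following appliance count for a given sustained replication workload. The minimum cluster of two appliances follows the text; the helper itself is illustrative:

```python
import math

RPA_SUSTAINED_MBPS = 80  # approximate sustained throughput per RPA (from the text)

def rpas_required(target_mbps: float) -> int:
    """Minimum number of RPAs for a sustained throughput target (cluster minimum is two)."""
    return max(2, math.ceil(target_mbps / RPA_SUSTAINED_MBPS))

print(rpas_required(300))  # a 300 MB/s sustained workload needs 4 appliances
```

Actual sizing also depends on change rates, growth rates, and network speeds, which is why the consultant-led sizing exercise described below is required.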
To ensure proper sizing and performance of an EMC RPA solution, VCE works with an EMC Technical Consultant to collect information about the data to be replicated, as well as data change rates, data growth rates, network speeds, and other information needed to ensure that all business requirements are met.
Scaling up storage resources
To scale up storage resources, you can expand block I/O bandwidth between the compute and storage resources, add RAID packs, and add disk array enclosure (DAE) packs. I/O bandwidth and packs can be added when VCE Systems are built and after they are deployed.
Fibre Channel (FC) bandwidth can be increased in VCE Systems with EMC VNX8000, EMC VNX7600, and EMC VNX5800. This option adds four more FC interfaces per fabric between the fabric interconnects and the Cisco MDS 9148 Multilayer Fabric Switch (segregated network architecture) or the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch (unified network architecture). It also adds four more FC ports from the EMC VNX to each SAN fabric.
This option is available for environments that require high-bandwidth, block-only configurations. It requires the use of four storage array ports per storage processor that are normally reserved for unified connectivity of the X-Blades.
RAID packs
Storage capacity can be increased by adding RAID packs. Each pack contains a number of drives of a
given type, speed, and capacity. The number of drives in a pack depends upon the RAID level that it
supports.
The number and types of RAID packs to include in VCE Systems are based upon the following:
• The storage tiers that each pool contains, and the speed and capacity of the drives in each tier.
The following table lists tiers, supported drive types, and supported speeds and capacities.
Note: The speed and capacity of all drives within a given tier in a given pool must be the same.
• The RAID protection level for the tiers in each pool. The following table describes each
supported RAID protection level. The RAID protection level for the different pools can vary.
RAID protection level    Description
RAID 5 • Block-level striping with a single parity block, where the parity data is distributed across all of the
drives in the set.
• Offers the best mix of performance, protection, and economy.
• Has a higher write performance penalty than RAID 1/0 because multiple I/Os are required to
perform a single write.
• With single parity, can sustain a single drive failure with no data loss, but is vulnerable to data loss if a second drive fails or an unrecoverable read error occurs during a drive rebuild.
• Highest economy of the three supported RAID levels. Usable capacity is 80% of raw capacity or
better.
RAID 6 • Block-level striping with two parity blocks, distributed across all of the drives in the set.
• Offers increased protection and read performance comparable to RAID 5.
• Has a significant write performance penalty because multiple I/Os are required to perform a
single write.
• Economy is very good. Usable capacity is 75% of raw capacity or better.
• EMC best practice for SATA and NL-SAS drives.
There are RAID packs for each RAID protection level/tier type combination. The RAID levels dictate the
number of drives that are included in the packs. RAID 5 or RAID 1/0 is for performance and extreme
performance tiers and RAID 6 is for the capacity tier. The following table lists RAID protection levels and
the number of drives in the pack for each level:
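As a rough illustration of how pack composition determines the usable-capacity figures quoted above, the following sketch assumes common EMC VNX pack layouts (RAID 5 as 4+1, RAID 6 as 6+2, RAID 1/0 as 4+4). These drive counts are assumptions for the example; the document's own pack-size table is authoritative:

```python
# (data drives, parity/mirror drives) per pack -- assumed layouts, not from this document
RAID_PACKS = {
    "RAID 5": (4, 1),
    "RAID 6": (6, 2),
    "RAID 1/0": (4, 4),
}

def usable_fraction(level: str) -> float:
    """Fraction of raw pack capacity that remains usable after RAID overhead."""
    data, overhead = RAID_PACKS[level]
    return data / (data + overhead)

print(usable_fraction("RAID 5"))  # 0.8, matching the "80% of raw capacity" figure
print(usable_fraction("RAID 6"))  # 0.75, matching the "75% of raw capacity" figure
```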
If the number of RAID packs in VCE Systems is expanded, more disk array enclosures (DAEs) might be
required. DAEs are added in packs. The number of DAEs in each pack is equivalent to the number of
back-end buses in the EMC VNX array in the VCE System. The following table lists the number of buses
in the array and the number of DAEs in the pack for each VCE System:
VCE System     Number of buses in the array     Number of DAEs in the DAE pack
EMC VNX8000    8 or 16                          8 or 16
EMC VNX7600    6                                6
EMC VNX5800    6                                6
There are two types of DAEs: a 2U 25-slot DAE for 2.5" disks and a 3U 15-slot DAE for 3.5" disks. A DAE pack can contain a mix of DAE sizes, as long as the total number of DAEs in the pack equals the number of buses. To ensure that loads are balanced, physical disks are spread across the DAEs in accordance with best practice guidelines.
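The DAE-pack rule above can be sketched as a small validation helper. The enclosure names and the helper itself are illustrative, not part of any VCE tooling:

```python
# Drive slots per DAE type: 2U 25-slot (2.5" disks) and 3U 15-slot (3.5" disks)
DAE_SLOTS = {"2U_25": 25, "3U_15": 15}

def validate_dae_pack(buses, pack):
    """Check a DAE pack against the array's bus count and return its total drive slots."""
    if len(pack) != buses:
        raise ValueError(f"pack must contain exactly {buses} DAEs, got {len(pack)}")
    return sum(DAE_SLOTS[dae] for dae in pack)

# An EMC VNX5800 (six buses) pack mixing four 2.5" DAEs and two 3.5" DAEs:
print(validate_dae_pack(6, ["2U_25"] * 4 + ["3U_15"] * 2))  # 130 drive slots
```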
Storage features support
The following table provides an overview of the support provided by the EMC VNX operating environment for new array hardware and capabilities:
Feature    Description
NFS VDM (Multi-LDAP Support)    Virtual X-Blades that provide security and segregation for clients in service provider environments.
Data-in-place block compression    When compression is enabled, thick LUNs are converted to thin LUNs and compressed in place. RAID group LUNs are migrated into a pool during compression. No additional space is needed to start compression. Decompression temporarily requires additional space, because it is a migration and not an in-place decompression.
EMC VNX snapshots    EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time.
    Note: This feature is optional. Both snapshot types are seamlessly supported. VCE relies on guidance from EMC best practices for the different use cases of EMC SnapView snapshots versus EMC VNX snapshots.
Hardware features
File deduplication
File deduplication is supported, but is not enabled by default. Enabling this feature requires knowledge of
capacity and storage requirements.
Block compression
Block compression is supported but is not enabled by default. Enabling this feature requires knowledge of
capacity and storage requirements.
VCE Systems can present CIFS and NFS shares to external clients, provided the following conditions are met:
• VCE Systems shares cannot be mounted internally by VCE Systems hosts and external to the
VCE Systems at the same time.
• In a configuration with two X-Blades, mixed internal and external access is supported.
• In a configuration with more than two X-Blades, external NFS and CIFS access can run on one or
more X-Blades that are physically separate from the X-Blades serving VMFS data stores to the
VCE Systems compute layer.
Snapshots
EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time.
Note: EMC VNX snapshots are an optional feature. Both snapshot types are seamlessly supported. VCE relies on guidance from EMC best practices for the different use cases of EMC SnapView snapshots versus EMC VNX snapshots.
Replicas
For VCE Systems NAS configurations, EMC VNX Replicator is supported. This software can create local clones (full copies) and replicate file systems asynchronously across IP networks. EMC VNX Replicator is included in the EMC VNX Remote Protection Suite.
Network layer
Network overview
This topic provides an overview of the network components for the VCE System.
The Cisco Nexus Series Switches in the network layer provide 10 or 40 GbE IP connectivity between the
VCE System and the external network. In unified storage architecture, the switches also connect the
fabric interconnects in the compute layer to the X-Blades in the storage layer.
In the segregated network architecture, the Cisco MDS 9000 series switches in the network layer provide Fibre Channel (FC) links between the Cisco fabric interconnects and the EMC VNX array. These FC connections provide block-level devices to blades in the compute layer. In the unified network architecture, there are no Cisco MDS series storage switches; FC connectivity is provided by the Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches.
Ports are reserved or identified for special services such as backup, replication, or aggregation uplink
connectivity.
The VCE System contains two Cisco Nexus 3048 switches to provide management network connectivity
to the different components of the VCE System. These connections include the EMC VNX service
processors, Cisco UCS fabric interconnects, Cisco Nexus 5500UP switches or Cisco Nexus 9396PX
switches, and power output unit (POU) management interfaces.
IP network components
This topic describes the IP network components used by VCE Systems.
VCE Systems use Cisco UCS 6200 series fabric interconnects. VCE Systems with EMC VNX5400 use the Cisco UCS 6248UP Fabric Interconnects. All other VCE Systems use either the Cisco UCS 6248UP Fabric Interconnects or the Cisco UCS 6296UP Fabric Interconnects.
VCE Systems include two Cisco Nexus 5548UP switches, Cisco Nexus 5596UP switches, or Cisco
Nexus 9396PX switches to provide 10 or 40 GbE connectivity:
• To the second generation Advanced Platform (AMP-2) through redundant connections between
AMP-2 and the Cisco Nexus 5548UP switches, Cisco Nexus 5596UP switches, or Cisco Nexus
9396PX switches
To support the Ethernet and SAN requirements in the traditional, segregated network architecture, two
Cisco Nexus 5548UP switches or Cisco Nexus 9396PX switches provide Ethernet connectivity, and a pair
of Cisco MDS switches provide Fibre Channel (FC) connectivity.
The Cisco Nexus 5548UP Switch is available as an option for all segregated network VCE Systems. It is
also an option for unified network VCE Systems with EMC VNX5400 and EMC VNX5600.
The two Cisco Nexus 5500 series switches support low-latency, line-rate, 10 Gb Ethernet and FC over Ethernet (FCoE) connectivity for up to 96 ports. Unified port expansion modules are available and provide an extra 16 ports of 10 GbE or FC connectivity. The FC ports are licensed in packs of eight on an on-demand basis.
The Cisco Nexus 5548UP switches have 32 integrated, low-latency, unified ports. Each port provides line-rate 10 Gb Ethernet or 8 Gb/s FC connectivity. The Cisco Nexus 5548UP switches have one expansion slot that can be populated with a 16-port unified port expansion module. The Cisco Nexus 5548UP Switch is the only network switch supported for data connectivity in VCE Systems with EMC VNX5400.
The Cisco Nexus 5596UP switches have 48 integrated, low-latency, unified ports. Each port provides line-rate 10 Gb Ethernet or 8 Gb/s FC connectivity. The Cisco Nexus 5596UP switches have three expansion slots that can be populated with 16-port unified port expansion modules. The Cisco Nexus 5596UP Switch is available as an option for both network topologies for all VCE Systems except VCE Systems with EMC VNX5400.
The Cisco Nexus 9396PX Switch supports both 10 Gb/s SFP+ ports and 40 Gb/s QSFP+ ports. It is a two-rack-unit (2RU) appliance with all ports licensed and available for use. There are no expansion modules available for the Cisco Nexus 9396PX Switch. The Cisco Nexus 9396PX Switch provides 48 integrated, low-latency SFP+ ports. Each port provides line-rate 1/10 Gb/s Ethernet. There are also 12 QSFP+ ports that provide line-rate 40 Gb/s Ethernet.
Port utilization
This section describes the switch port utilization for the Cisco Nexus 5548UP Switch and Cisco Nexus 5596UP Switch in segregated and unified networking configurations, as well as the Cisco Nexus 9396PX Switch in a segregated networking configuration.
The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1G or 10G connectivity for LAN
traffic.
The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module) with
segregated networking:
*VCE Systems with VNX5400 only support four links between the Cisco UCS FIs and Cisco Nexus
5548UP switches.
**VCE Systems with VNX5400 only support four links between the Cisco Nexus 5548UP Switch and
customer core network.
The remaining ports in the base Cisco Nexus 5548UP Switch (no module) provide support for the
following additional connectivity option:
If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, there are 28 additional
ports (beyond the core connectivity requirements) available to provide additional feature connectivity.
Actual feature availability and port requirements are driven by the model that is selected.
The following table shows the additional connectivity for Cisco Nexus 5548UP Switch with a 16UP
module:
Uplinks from Cisco UCS FI for Ethernet bandwidth (BW) enhancement 8 10G Twinax
The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1G or 10G connectivity for LAN
traffic.
The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module) with
segregated networking:
The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the
following additional connectivity option:
If an optional 16 unified port module is added to the Cisco Nexus 5596UP Switch, additional ports
(beyond the core connectivity requirements) are available to provide additional feature connectivity.
Actual feature availability and port requirements are driven by the model that is selected.
The following table shows the additional connectivity for the Cisco Nexus 5596UP Switch with one 16UP
module:
Note: Cisco Nexus 5596UP Switch with two or three 16UP modules is not supported with segregated
networking.
Uplinks from Cisco UCS FIs for Ethernet BW enhancement 8 10G Twinax
The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1G or 10G connectivity for LAN
traffic or 2/4/8 Gbps FC traffic.
The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module) with
unified networking for VCE Systems with EMC VNX5400 only.
The following table shows the core connectivity for the Cisco Nexus 5548UP Switch with unified
networking for VCE Systems with EMC VNX5600:
The remaining ports in the base Cisco Nexus 5548UP Switch (no module) provide support for the
following additional connectivity options for VCE Systems with EMC VNX5400 only.
The remaining ports in the base Cisco Nexus 5548UP Switch provide support for the following additional
connectivity options for the other VCE Systems:
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair)    2    1G GE_T SFP+
If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, additional ports (beyond the core connectivity requirements) are available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected.
The following table shows the additional connectivity for the Cisco Nexus 5548UP Switch with one 16UP
module:
EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair)    4    1G GE_T SFP+
Uplinks from Cisco UCS FIs for Ethernet BW enhancement    8    10G Twinax
The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1/10G connectivity for LAN traffic
or 2/4/8 Gbps Fibre Channel (FC) traffic.
The following table shows the core connectivity for the Cisco Nexus 5596UP Switch (no module):
The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the
following additional connectivity options:
Up to three additional 16 unified port modules can be added to the Cisco Nexus 5596UP Switch
(depending on the selected VCE System). Each module has 16 ports to enable additional feature
connectivity. Actual feature availability and port requirements are driven by the model that is selected.
The following table shows the connectivity options for Cisco Nexus 5596UP Switch for slots 2-4:
The base Cisco Nexus 9396PX Switch provides 48 SFP+ ports used for 1G or 10G connectivity and 12
40G QSFP+ ports for LAN traffic.
The following table shows core connectivity for the Cisco Nexus 9396PX Switch with segregated
networking:
*VCE Systems with EMC VNX5400 only support four links between the Cisco UCS FIs and Cisco Nexus
9396PX switches.
** VCE Systems with EMC VNX5400 only support four links between the Cisco Nexus 9396PX Switch
and customer core network.
*** VCE Systems and Cisco Nexus 9396PX support 40G or 10G SFP+ uplinks to customer core.
The remaining ports in the Cisco Nexus 9396PX Switch provide support for a combination of the following
additional connectivity options:
In a segregated networking model, there are two Cisco MDS 9148 multilayer fabric switches. In a unified
networking model, Fibre Channel (FC) based features are provided by the two Cisco Nexus 5548UP
switches or Cisco Nexus 5596UP switches that are also used for LAN traffic.
The SAN switches provide:
• FC connectivity between the compute layer components and the storage layer components
• Connectivity for backup, business continuity (EMC RecoverPoint Appliance), and storage federation requirements, when configured
Note: Inter-Switch Links (ISLs) to the existing SAN are not permitted.
The Cisco MDS 9148 Multilayer Fabric Switch provides 16 to 48 line-rate ports for non-blocking 8 Gb/s throughput. Port groups are enabled on an as-needed basis.
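As a minimal sketch of this as-needed enablement, the following helper assumes the Cisco MDS 9148's commonly documented 16-port base license and 8-port upgrade groups; those specific numbers are an assumption beyond the 16-to-48 range stated above:

```python
BASE_PORTS, GROUP_SIZE, MAX_PORTS = 16, 8, 48  # assumed licensing steps

def enabled_ports(extra_groups):
    """Ports enabled after licensing a number of additional 8-port groups."""
    ports = BASE_PORTS + extra_groups * GROUP_SIZE
    if ports > MAX_PORTS:
        raise ValueError("exceeds the 48-port maximum")
    return ports

print(enabled_ports(2))  # 16 + 2*8 = 32 enabled ports
```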
The Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches provide a number of line-rate ports
for non-blocking 8 Gbps throughput. Expansion modules can be added to the Cisco Nexus 5596UP
Switch that provide 16 additional ports operating at line-rate.
The following tables define the port utilization for the SAN components when using a Cisco MDS 9148:
Backup 2
FC links from Cisco UCS fabric interconnect (FI) for FC Bandwidth (BW) enhancement 4
SAN aggregation 2
Virtualization layer
Virtualization components
VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are VMware vSphere ESXi and VMware vCenter Server for management. VMware vSphere 5.x includes a Single Sign-On (SSO) component, available as a standalone Windows server or as an embedded service on the vCenter Server. VMware vSphere 6.0 includes a pair of Platform Services Controller Linux appliances to provide the SSO service.
The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of
resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility
with the use of VMware vMotion and Storage vMotion technology.
This lightweight hypervisor requires very little space to run (less than six GB of storage required to install)
and has minimal management overhead.
VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor
ESXi boots from Cisco FlexFlash (SD card) on AMP-2. For the compute blades, ESXi boots from the SAN
through an independent Fibre Channel (FC) LUN presented from the EMC VNX storage array. The FC
LUN also contains the hypervisor's locker for persistent storage of logs and other diagnostic files to
provide stateless computing within VCE Systems. The stateless hypervisor is not supported.
Cluster configuration
VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters
contain the CPU, memory, network, and storage resources available for allocation to virtual machines
(VMs). Clusters can scale up to a maximum of 32 hosts for VMware vSphere 5.1/5.5 and 64 hosts for
VMware vSphere 6.0. Clusters can support thousands of VMs.
The clusters can also support a variety of Cisco UCS blades running inside the same cluster.
Note: Some advanced CPU functionality might be unavailable if more than one blade model is running in
a given cluster.
Data stores
VCE Systems support a mixture of data store types: block level storage using VMFS or file level storage
using NFS.
The maximum size per VMFS5 volume is 64 TB (50 TB VMFS3 @ 1 MB). Beginning with VMware
vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 255
volumes.
VCE optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in VCE Systems
to maximize the throughput and scalability of NFS data stores. VCE Systems support a maximum of 256
NFS data stores per host.
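The per-host data store limits quoted above can be summarized in a small check; the helper and its parameter names are illustrative, not part of vSphere:

```python
# Limits from the text: 255 VMFS volumes and 256 NFS data stores per host,
# 62 TB maximum VMDK size (vSphere 5.5 and later)
LIMITS = {"vmfs_volumes": 255, "nfs_datastores": 256, "vmdk_tb": 62}

def within_host_limits(vmfs_volumes, nfs_datastores, largest_vmdk_tb):
    """True if a proposed host layout stays within the documented maximums."""
    return (vmfs_volumes <= LIMITS["vmfs_volumes"]
            and nfs_datastores <= LIMITS["nfs_datastores"]
            and largest_vmdk_tb <= LIMITS["vmdk_tb"])

print(within_host_limits(200, 128, 40))  # True
print(within_host_limits(200, 300, 40))  # False: too many NFS data stores
```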
Virtual networks
Virtual networking in the Advanced Management Platform (AMP-2) uses standard virtual switches. Virtual
networking in VCE Systems is managed by the Cisco Nexus 1000V Series Switch. The Cisco Nexus
1000V Series Switch ensures consistent, policy-based network capabilities to all servers in the data
center by allowing policies to move with a VM during live migration. This provides persistent network,
security, and storage compliance.
Alternatively, virtual networking in VCE Systems is managed by a VMware vCenter Virtual Distributed Switch (version 5.5 or higher) with comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware Standard Switch (VSS) and a VMware vSphere Distributed Switch (VDS) and uses a minimum of four uplinks presented to the hypervisor.
The implementation of Cisco Nexus 1000V Series Switch for VMware vSphere 5.1/5.5 and VMware VDS
for VMware vSphere 5.5 use intelligent network Class of Service (CoS) marking and Quality of Service
(QoS) policies to appropriately shape network traffic according to workload type and priority. With
VMware vSphere 6.0, QoS is set to Default (Trust Host). The vNICs are equally distributed across all
available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This
provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco
UCS Virtual Interface Card (VIC) hardware. Thus, VMware vSphere ESXi has a predictable uplink
interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the
virtual network interface cards (vNIC) to ensure consistency in case the uplinks need to be migrated to
the VMware vSphere Distributed Switch (VDS) after manufacturing.
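The even vNIC distribution described above can be sketched as a simple round-robin placement across the available physical adapter ports; all names here are hypothetical and the helper is only an illustration of the balancing idea:

```python
from itertools import cycle

def distribute_vnics(vnics, uplinks):
    """Place vNICs round-robin across physical adapter ports for even spread."""
    placement = {u: [] for u in uplinks}
    for vnic, uplink in zip(vnics, cycle(uplinks)):
        placement[uplink].append(vnic)
    return placement

print(distribute_vnics(["mgmt", "vmotion", "nfs", "vm-data"], ["uplink0", "uplink1"]))
# {'uplink0': ['mgmt', 'nfs'], 'uplink1': ['vmotion', 'vm-data']}
```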
VMware vCenter Server is a central management point for the hypervisors and virtual machines. VMware
vCenter Server is installed on a 64-bit Windows Server. VMware Update Manager is installed on a 64-bit
Windows Server and runs as a service to assist with host patch management.
The second generation of the Advanced Management Platform with redundant physical servers (AMP-2RP) and the VCE System each have a unified VMware vCenter Server Appliance instance. Both instances reside in the AMP-2RP.
VMware vCenter Server capabilities include:
• Cloning of VMs
• Creating templates
• Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere
high-availability clusters
VMware vCenter Server provides monitoring and alerting capabilities for hosts and VMs. VCE System
administrators can create and apply the following alarms to all managed objects in VMware vCenter
Server:
Databases
The back-end database that supports VMware vCenter Server and VMware Update Manager (VUM) is a remote Microsoft SQL Server 2008 instance (vSphere 5.1) or Microsoft SQL Server 2012 instance (vSphere 5.5/6.0). The SQL Server service requires a dedicated service account.
Authentication
VCE Systems support the VMware Single Sign-On (SSO) Service, which can integrate multiple identity sources including Active Directory, OpenLDAP, and local accounts for authentication. VMware SSO is available in VMware vSphere 5.1 and higher. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and Update Manager run as separate Windows services, which can be configured to use dedicated service accounts depending on the security and directory services requirements.
• VMware vSphere Web Client (used with VCE Vision™ Intelligent Operations)
• VMware DRS
• VMware vMotion
— Layer 3 capability available for compute resources (version 6.0 and higher)
• Resource Pools
Management
AMP-2 provides a single management point for VCE Systems and provides the ability to:
The Core Management Workload is the minimum required set of management software to install, operate, and support a VCE System. This includes all hypervisor management, element managers, virtual networking components (Cisco Nexus 1000V or VMware vSphere Distributed Switch (VDS)), and VCE Vision™ Intelligent Operations software.
The VCE Optional Management Workload consists of non-core management workloads that are directly supported and installed by VCE and whose primary purpose is to manage components within a VCE System. These include, but are not limited to, data protection, security, or storage management tools such as EMC Unisphere for EMC RecoverPoint or EMC VPLEX, Avamar Administrator, EMC InsightIQ for Isilon, and VMware vCNS appliances (vShield Edge/Manager).
AMP-2 is available with one to three physical servers. All options use their own resources to run management workloads without consuming VCE System resources:
AMP-2P               One Cisco UCS C220 server      Default configuration for VCE Systems; uses a dedicated Cisco UCS C220 Server to run management workload applications.
AMP-2RP              Two Cisco UCS C220 servers     Adds a second Cisco UCS C220 Server to support application and hardware redundancy.
AMP-2HA Baseline     Two Cisco UCS C220 servers     Implements VMware vSphere HA/DRS with shared storage provided by EMC VNXe3200 storage.
AMP-2HA Performance  Three Cisco UCS C220 servers   Adds a third Cisco UCS C220 Server and additional storage for EMC FAST VP.
AMP-2 is delivered pre-configured with the following software components, which depend on the selected VCE Release Certification Matrix:
• VMware vCenter Server Appliance (AMP-2RP) - a second instance of VMware vCenter Server is
required to manage the replication instance separate from the production VMware vCenter Server
• VMware vSphere Distributed Switch (VDS) or Cisco Nexus 1000V virtual switch (VSM)
• Array management modules, including but not limited to, EMC Unisphere Client, EMC Unisphere
Service Manager, EMC VNX Initialization Utility, EMC VNX Startup Tool, EMC SMI-S Provider,
EMC PowerPath Viewer
The following illustration provides an overview of the network connectivity for the AMP-2HA:
The following illustration provides an overview of the VM server assignment for AMP-2HA:
VCE Systems that use VMware vSphere Distributed Switch (VDS) do not include Cisco Nexus 1000V VSM VMs.
The Performance option of AMP-2HA leverages the DRS functionality of VMware vCenter to optimize resource usage (CPU/memory) so that VM assignment to a VMware vSphere ESXi host is managed automatically.
The following illustration provides an overview of the VM server assignment for AMP-2P:
The following illustration provides an overview of the VM server assignment for AMP-2RP:
VCE Systems that use VMware VDS do not include Cisco Nexus 1000V VSM VMs.
System infrastructure
Fabric interconnects: Cisco UCS 6248UP or Cisco UCS 6296UP (VCE Systems with EMC VNX8000, VNX7600, VNX5800, or VNX5600); Cisco UCS 6248UP (VCE Systems with EMC VNX5400)
Network: Cisco Nexus 5548UP or Cisco Nexus 5596UP (VCE Systems with EMC VNX8000, VNX7600, VNX5800, or VNX5600); Cisco Nexus 5548UP (VCE Systems with EMC VNX5400)
Cabinets overview
This topic describes the VCE cabinets.
In each VCE System, the compute, storage, and network layer components are distributed within two or
more 42U cabinets. Distributing the components this way balances out the power draw and reduces the
size of the power outlet units (POUs) that are required.
Each cabinet conforms to a standard predefined layout. Space can be reserved for specific components
even if they are not present or required for the external configuration. This design makes it easier to
upgrade or expand each VCE System as capacity needs increase.
VCE System cabinets are designed to be installed next to one another within the data center (that is,
contiguously). If a customer requires the base and expansion cabinets to be physically separated,
customized cabling is needed, which incurs additional cost and can increase delivery time.
Note: The cable length is not the same as the distance between cabinets. The cable must route through the cabinets and through the cable channels overhead or in the floor.
For more information about the IPI appliance, refer to the administration guide for your VCE System.
Power options
This topic describes the power outlet unit (POU) options inside and outside of North America.
VCE Systems support several POU options inside and outside of North America.
The NEMA POU is standard; other POUs add time to assembly and delivery. The following table lists the
POUs available for VCE Systems in North America:
The IEC 309 POU is standard; other POUs add time to assembly and delivery. The following table lists
the POUs available for VCE Systems in Europe:
IEC 60309, splash proof: single phase / 32 A / 230 V (half height)
The following table lists the POUs available for VCE Systems in Japan:
The VCE Vblock® and VxBlock™ Systems 340 Physical Planning Guide provides more information about
power requirements.
Configuration descriptions
Array options
VCE Systems (8000) are available as block-only or unified storage. Unified storage VCE Systems (8000) support up to eight X-Blades and ship with two X-Blades and two control stations. Each X-Blade provides four 10G front-end network connections. An additional data mover enclosure (DME) supports the connection of two additional X-Blades with the same configuration as the base X-Blades.
Array     Back-end buses    X-Blades
Unified   8 or 16           2
Unified   8 or 16           3
Unified   8 or 16           4
Unified   8 or 16           5
Unified   8 or 16           6
Unified   8 or 16           7
Unified   8 or 16           8
Each X-Blade includes:
• 24 GB RAM
• One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
Feature options
VCE Systems (8000) support both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW
enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that
SAN connectivity is provided by Cisco MDS 9148 multilayer fabric switches or Cisco Nexus 5596UP
switches, depending on topology.
Array     Network topology    Ethernet BW enhancement    FC BW enhancement
Block     Segregated          Y                          Y
Unified   Segregated          Y                          Y
Unified networking is supported only on the VCE Systems (8000) with Cisco Nexus 5596UP switches.
Ethernet BW enhancement is supported only on the VCE Systems (8000) with Cisco Nexus 5596UP
switches.
VCE Systems (8000) include two 25-slot 2.5" disk array enclosures (DAEs). An additional six DAEs are required beyond the two base DAEs. Additional DAEs can be added as either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs. Additional DAEs (after the initial eight) are added in multiples of eight; if there are 16 buses, DAEs must be added in multiples of 16. DAEs are interlaced when racked: all 2.5" DAEs are racked first on the buses, then the 3.5" DAEs.
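The DAE growth rules above can be expressed as a small validity check. This is an illustrative sketch, not a VCE tool; the function name is hypothetical, and only the multiples and base counts come from the text:

```python
def valid_vnx8000_dae_count(total_daes: int, buses: int = 8) -> bool:
    """Check a proposed EMC VNX8000 DAE count against the rules above."""
    if buses == 16:
        # With the 16-bus option, all DAEs are purchased in groups of 16.
        return total_daes > 0 and total_daes % 16 == 0
    base = 8  # two base DAEs plus the six required additional DAEs
    return total_daes >= base and (total_daes - base) % 8 == 0

print(valid_vnx8000_dae_count(16))            # True: 8 base + one group of 8
print(valid_vnx8000_dae_count(12))            # False: not on an 8-DAE boundary
print(valid_vnx8000_dae_count(16, buses=16))  # True: one group of 16
```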
SLIC configuration
The EMC VNX8000 provides slots for 11 SLICs in each service processor (SP).
• Two slots in each SP are populated with back-end SAS bus modules by default.
• Two additional back-end SAS bus modules support up to 16 buses. If this option is chosen, all
DAEs are purchased in groups of 16.
• VCE Systems (8000) support two FC SLICs per SP for host connectivity. Additional FC SLICs are
included to support unified storage.
• The remaining SLIC slots are reserved for future VCE configuration options.
• VCE only supports the four port FC SLIC for host connectivity.
• By default, six FC ports per SP are connected to the SAN switches for VCE Systems host
connectivity. The addition of FC BW Enhancement provides four additional FC ports per SP.
As the EMC VNX8000 has multiple CPUs, SLIC arrangements should be balanced across CPUs.
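As a worked example of the host-connectivity numbers above (a sketch only; the port counts come from the bullets, and the 8G line rate from the FC SLIC descriptions later in this section):

```python
# Front-end FC host-port arithmetic for the EMC VNX8000, per the text above.
PORTS_PER_SP_DEFAULT = 6   # six FC ports per SP connect to the SAN switches
PORTS_ADDED_BY_FC_BW = 4   # FC BW enhancement adds four FC ports per SP
SERVICE_PROCESSORS = 2     # the array has two service processors
FC_GBPS_PER_PORT = 8       # 8G FC per port

default_ports = PORTS_PER_SP_DEFAULT * SERVICE_PROCESSORS
enhanced_ports = (PORTS_PER_SP_DEFAULT + PORTS_ADDED_BY_FC_BW) * SERVICE_PROCESSORS

print(default_ports)                      # 12 host-facing FC ports
print(enhanced_ports)                     # 20 with FC BW enhancement
print(enhanced_ports * FC_GBPS_PER_PORT)  # 160 Gb/s nominal aggregate
```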
The following table shows the SLIC configurations per SP (eight bus):
Array FC BW enhancement SL 0 SL 1 SL 2 SL 3 SL 4 SL 5 SL 6 SL 7 SL 8 SL 9 SL 10
Block N FC Res Res Res Res Bus Res Res Res FC Bus
Unified N FC Res Res Res Res Bus Res Res FC/U FC Bus
Unified -> 4 DM N FC Res FC/U Res Res Bus Res Res FC/U FC Bus
Unified -> 4 DM Y FC Res FC/U FC Res Bus Res Res FC/U FC Bus
FC/U: 4xFC port IOM dedicated to unified X-Blade connectivity: provides four 8G FC connections.
Bus: Four port - 4x lane/port 6 Gb/s SAS: provides additional back-end bus connections.
The following table shows the SLIC configurations per SP (16 bus):
Array FC BW enhancement SL 0 SL 1 SL 2 SL 3 SL 4 SL 5 SL 6 SL 7 SL 8 SL 9 SL 10
Block N FC Res Res Res Bus Bus Bus Res Res FC Bus
Unified N FC Res Res Res Bus Bus Bus Res FC/U FC Bus
Unified -> 4 DM N FC Res FC/U Res Bus Bus Bus Res FC/U FC Bus
Unified -> 4 DM Y FC Res FC/U FC Bus Bus Bus Res FC/U FC Bus
Two additional back-end SAS bus modules are available to support up to 16 buses. If this option is
chosen, all DAEs are purchased in groups of 16.
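The slot tables above can also be captured as data for inspection. The snippet below is purely illustrative (the list name and tally helper are not VCE tooling); it transcribes the block-only, eight-bus row of the first table:

```python
# Per-SP SLIC layout for a block-only, 8-bus EMC VNX8000, transcribed from
# the eight-bus table above ("Res" = reserved for future VCE options).
BLOCK_8BUS = ["FC", "Res", "Res", "Res", "Res", "Bus",
              "Res", "Res", "Res", "FC", "Bus"]  # slots 0 through 10

def module_counts(layout):
    """Tally how many SLICs of each type a layout uses."""
    counts = {}
    for slic in layout:
        counts[slic] = counts.get(slic, 0) + 1
    return counts

print(module_counts(BLOCK_8BUS))  # {'FC': 2, 'Res': 7, 'Bus': 2}
```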
Compute
VCE Systems (8000) support two to 16 chassis and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extender IOMs only), four links (Cisco UCS 2204XP fabric extender IOMs only), or eight links (Cisco UCS 2208XP fabric extender IOMs only) per IOM.
The following table shows the compute options that are available for the fabric interconnects:
Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
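The link-count ceilings follow from simple division of fabric interconnect server ports by links per chassis. As a rough sketch (the port counts are standard Cisco UCS 6200 figures assumed here, not taken from this document, and real maxima are also capped by the 16-chassis system limit):

```python
def max_chassis(fi_server_ports: int, links_per_chassis: int,
                system_cap: int = 16) -> int:
    """Chassis ceiling for a fabric interconnect: server ports divided by
    links per chassis, capped at the 16-chassis / 128-blade system maximum."""
    return min(fi_server_ports // links_per_chassis, system_cap)

# Assuming 32 usable server ports on a Cisco UCS 6248UP and 48 on a 6296UP:
print(max_chassis(32, 2))  # 16: the system cap, not the ports, is the limit
print(max_chassis(32, 4))  # 8 chassis
print(max_chassis(48, 8))  # 6 chassis
```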
Connectivity
VCE Systems (8000) support the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric
interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for
Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches or Cisco
MDS 9148 multilayer fabric switches, based on topology.
The following table shows the switch combinations available for the fabric interconnects:
Fabric interconnect   Network topology   Ethernet switches     SAN switches
Cisco UCS 6248UP      Segregated         Cisco Nexus 5548UP    Cisco MDS 9148 Multilayer Fabric Switch
Cisco UCS 6296UP      Segregated         Cisco Nexus 5548UP    Cisco MDS 9148 Multilayer Fabric Switch
Note: The default is unified network with Cisco Nexus 5596UP switches.
Array options
VCE Systems (7600) are available as block-only or unified storage. Unified storage VCE Systems (7600) support up to eight X-Blades and ship with two X-Blades and two control stations. Each X-Blade provides four 10G front-end connections to the network. An additional data mover enclosure (DME) supports the connection of two additional X-Blades with the same configuration as the base X-Blades.
Array     Back-end buses    X-Blades
Block     6                 N/A
Unified   6                 2*
Unified   6                 3*
Unified   6                 4*
Unified   6                 5*
Unified   6                 6*
Unified   6                 7*
Unified   6                 8*
Each X-Blade includes:
• 12 GB RAM
• One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
Feature options
VCE Systems (7600) support both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW
enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that
SAN connectivity is provided by Cisco MDS 9148 multilayer fabric switches or the Cisco Nexus 5596UP
switches, depending on topology. Both block and unified arrays use FC BW enhancement.
Array     Network topology    Ethernet BW enhancement    FC BW enhancement
Block     Segregated          Y                          Y
Unified   Segregated          Y                          Y
Unified networking is only supported on VCE Systems (7600) with Cisco Nexus 5596UP switches.
VCE Systems (7600) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX7600 disk processor enclosure (DPE) provides the enclosure for bus 0, and the first DAE is on bus 1. An additional four DAEs are required beyond the two base DAEs. Additional DAEs can be added as either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs. Additional DAEs (after the initial six) are added in multiples of six. DAEs are interlaced when racked: all 2.5" DAEs are racked first on the buses, then the 3.5" DAEs.
SLIC configuration
The EMC VNX7600 provides slots for five SLICs in each service processor (SP). Slot 0 in each SP is
populated with a back-end SAS bus module. VCE Systems (7600) support two FC SLICs per SP for host
connectivity. A third is reserved to support unified storage. If FC BW enhancement is configured, an
additional FC SLIC is added to the array. VCE only supports the four port FC SLIC for host connectivity.
By default, six FC ports per SP are connected to the SAN switches for VCE Systems host connectivity.
The addition of FC BW enhancement provides four additional FC ports per SP.
FC/U: 4xFC-port I/O module dedicated to unified X-Blade connectivity; provides four 8G FC connections.
Bus: four-port, 4x lanes/port, 6 Gb/s SAS; provides additional back-end bus connections.
Compute
VCE Systems (7600) support two to 16 chassis and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extender input/output modules (IOMs) only), four links (Cisco UCS 2204XP fabric extender IOMs only), or eight links (Cisco UCS 2208XP fabric extender IOMs only) per IOM.
The following table shows the compute options available for the fabric interconnects:
Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Connectivity
VCE Systems (7600) support the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric
interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for
Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches or Cisco
MDS 9148 multilayer fabric switches, based on the topology.
The following table shows the switch combinations available for the fabric interconnects:
Fabric interconnect   Network topology   Ethernet switches     SAN switches
Cisco UCS 6248UP      Segregated         Cisco Nexus 5548UP    Cisco MDS 9148 Multilayer Fabric Switch
Cisco UCS 6296UP      Segregated         Cisco Nexus 5548UP    Cisco MDS 9148 Multilayer Fabric Switch
Note: The default is unified network with Cisco Nexus 5596UP switches.
Array options
VCE Systems (5800) are available as block-only or unified storage. Unified storage VCE Systems (5800) support up to six X-Blades and ship with two X-Blades and two control stations. Each X-Blade provides four 10G front-end connections to the network. An additional data mover enclosure (DME) supports the connection of one additional X-Blade with the same configuration as the base X-Blades.
Array     Back-end buses    X-Blades
Block     6                 N/A
Unified   6                 2
Unified   6                 3*
Unified   6                 4*
Unified   6                 5*
Unified   6                 6*
Each X-Blade includes:
• 12 GB RAM
• One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
Feature options
The VCE Systems (5800) support both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW
enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that
SAN connectivity is provided by Cisco MDS 9148 multilayer fabric switches or the Cisco Nexus 5596UP
switches, depending on topology. Both block and unified arrays use FC BW enhancement.
Array     Network topology    Ethernet BW enhancement    FC BW enhancement
Block     Segregated          Y                          Y
Unified   Segregated          Y                          Y
Note: Unified networking is supported only on VCE Systems (5800) with Cisco Nexus 5596UP switches.
VCE Systems (5800) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX5800 disk processor enclosure (DPE) provides the enclosure for bus 0, and the first DAE is on bus 1.
An additional four DAEs are required beyond the two base DAEs. Additional DAEs can be added as either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs. Additional DAEs (after the initial six) are added in multiples of six. DAEs are interlaced when racked: all 2.5" DAEs are racked first on the buses, then the 3.5" DAEs.
SLIC configuration
The EMC VNX5800 provides slots for five SLICs in each service processor. Slot 0 is populated with a
back-end SAS bus module. VCE Systems (5800) support two FC SLICs per SP for host connectivity. A
third is reserved to support unified storage. If FC BW enhancement is configured, an additional FC SLIC
is added to the array. VCE only supports the four-port FC SLIC for host connectivity. By default, six FC
ports per SP are connected to the SAN switches for VCE Systems host connectivity. The addition of FC
BW enhancement provides four additional FC ports per SP.
FC/U 4xFC port IOM dedicated to unified X-Blade connectivity: provides four 8G FC connections.
Bus: Four port - 4x lane/port 6 Gb/s SAS: provides additional back-end bus connections.
Compute
VCE Systems (5800) support two to 16 chassis, and up to 128 half-width blades. Each chassis can be
connected with two links (Cisco UCS 2204XP fabric extenders IOM only), four links (Cisco UCS 2204XP
fabric extenders IOM only) or eight links (Cisco UCS 2208XP fabric extenders IOM only) per IOM.
The following table shows the compute options that are available for the fabric interconnects:
Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Connectivity
VCE Systems (5800) support the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric
interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for
Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 switches or Cisco MDS
9148 multilayer fabric switches, based on topology.
The following table shows the switch combinations available for the fabric interconnects:
Fabric interconnect   Network topology   Ethernet switches     SAN switches
Cisco UCS 6248UP      Segregated         Cisco Nexus 5548UP    Cisco MDS 9148 Multilayer Fabric Switch
Cisco UCS 6296UP      Segregated         Cisco Nexus 5548UP    Cisco MDS 9148 Multilayer Fabric Switch
Note: The default is unified network with Cisco Nexus 5596UP switches.
Array options
VCE Systems (5600) are available as block only or unified storage. Unified storage VCE Systems (5600)
support one to four X-Blades and two control stations. Each X-Blade provides two 10G front-end
connections to the network.
Array     Back-end buses    X-Blades
Block     2 or 6            N/A
Unified   2 or 6            1
Unified   2 or 6            2*
Unified   2 or 6            3*
Unified   2 or 6            4*
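The front-end NAS bandwidth implied by the X-Blade counts can be tabulated from the per-model figures in this document (four 10G ports per X-Blade on the VNX8000/7600/5800, two on the VNX5600/5400). A quick sketch, with a hypothetical helper name:

```python
# 10 GbE front-end ports per X-Blade, per each model's array-options text.
PORTS_PER_XBLADE = {"VNX8000": 4, "VNX7600": 4, "VNX5800": 4,
                    "VNX5600": 2, "VNX5400": 2}

def frontend_gbps(model: str, xblades: int) -> int:
    """Nominal aggregate 10 GbE front-end bandwidth for a unified array."""
    return PORTS_PER_XBLADE[model] * xblades * 10

print(frontend_gbps("VNX5600", 4))  # 80 Gb/s at the four-X-Blade maximum
print(frontend_gbps("VNX8000", 8))  # 320 Gb/s with eight X-Blades
```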
Each X-Blade includes:
• 6 GB RAM
• One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
Feature options
VCE Systems (5600) use the Cisco Nexus 5596UP switches. VCE Systems (5600) do not support FC
bandwidth (BW) enhancement in block or unified arrays.
Array     Network topology    Ethernet BW enhancement
Block     Segregated          Y
Unified   Segregated          Y
DAE configuration
VCE Systems (5600) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX5600 disk processor enclosure (DPE) provides the enclosure for bus 0, and the first DAE is on bus 1. Additional DAEs can be added as either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs, in multiples of two. DAEs are interlaced when racked: all 2.5" DAEs are racked first on the buses, then the 3.5" DAEs.
A four-port SAS bus expansion SLIC is an option with VCE Systems (5600). If more than 19 DAEs are required, the four-port expansion bus card must be added. If the card is added, DAEs are purchased in groups of six.
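The bus-expansion rule above reduces to a simple threshold, sketched below (an illustrative helper, not a VCE tool; only the 19-DAE threshold and the multiples come from the text):

```python
def vnx5600_dae_purchase_group(total_daes: int) -> int:
    """DAE purchase multiple for an EMC VNX5600, per the rule above:
    up to 19 DAEs, DAEs are added in multiples of two; beyond 19, the
    four-port SAS expansion SLIC is required and DAEs come in groups of six.
    """
    return 2 if total_daes <= 19 else 6

print(vnx5600_dae_purchase_group(10))  # 2
print(vnx5600_dae_purchase_group(24))  # 6
```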
SLIC configuration
The EMC VNX5600 provides slots for five SLICs in each service processor. VCE Systems (5600) have
two FC SLICs per SP for host connectivity. A third FC SLIC can be ordered to support unified storage.
The remaining SLIC slots are reserved for future VCE configuration options. VCE only supports the four
port FC SLIC for host connectivity. Six FC ports per SP are connected to the SAN switches for VCE
Systems host connectivity.
The FC 4xFC-port I/O module (IOM) provides four 8G FC connections. The FC/U 4xFC-port IOM dedicated to unified X-Blade connectivity provides four 8G FC connections. The Bus module (four-port, 4x lanes/port, 6 Gb/s SAS) provides additional back-end bus connections.
Compute
VCE Systems (5600) support two to eight chassis and up to 64 half-width blades. Each chassis can be
connected with four links (Cisco UCS 2204XP fabric extenders IOM only) or eight links (Cisco UCS
2208XP fabric extenders IOM only) per IOM.
The following table shows the compute options that are available for the fabric interconnects:
Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Connectivity
VCE Systems (5600) support the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric
interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for
Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches or Cisco
MDS 9148 multilayer fabric switches, based on topology.
The following table shows the switch options available for the fabric interconnects:
Fabric interconnect   Network topology   Ethernet switches     SAN switches
Cisco UCS 6248UP      Segregated         Cisco Nexus 5548UP    Cisco MDS 9148 Multilayer Fabric Switch
Cisco UCS 6296UP      Segregated         Cisco Nexus 5548UP    Cisco MDS 9148 Multilayer Fabric Switch
Note: The default is unified network with Cisco Nexus 5596UP switches.
Array options
VCE Systems (5400) are available as block only or unified storage. Unified storage VCE Systems (5400)
support one to four X-Blades and two control stations. Each X-Blade provides two 10G front-end
connections to the network.
Array     Back-end buses    X-Blades
Block     2                 N/A
Unified   2                 1*
Unified   2                 2*
Unified   2                 3*
Unified   2                 4*
Each X-Blade includes:
• 6 GB RAM
• One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
Feature options
VCE Systems (5400) use the Cisco UCS 6248UP fabric interconnects. VCE Systems (5400) do not
support FC bandwidth (BW) enhancement or Ethernet BW enhancement in block or unified arrays.
VCE Systems (5400) have two 25-slot 2.5" disk array enclosures (DAEs). The EMC VNX5400 disk processor enclosure (DPE) provides the enclosure for bus 0, and the first DAE is on bus 1. Additional DAEs can be added as either 15-slot 3.5" DAEs or 25-slot 2.5" DAEs, in multiples of two. DAEs are interlaced when racked: all 2.5" DAEs are racked first on the buses, then the 3.5" DAEs.
SLIC configuration
EMC VNX5400 provides slots for five SLICs in each service processor (SP), although only four are
enabled. VCE Systems (5400) have two FC SLICs per SP for host connectivity. A third FC SLIC can be
ordered to support unified storage. The remaining SLIC slots are reserved for future VCE configuration
options. VCE only supports the four-port FC SLIC for host connectivity. Six FC ports per SP are
connected to the SAN switches for VCE Systems host connectivity.
The FC 4xFC port I/O module (IOM) provides four 8G FC connections. The FC/U 4xFC port IOM
dedicated to unified X-Blade connectivity provides four 8G FC connections.
Compute
VCE Systems (5400) are configured with two chassis that support up to 16 half-width blades. Each
chassis is connected with four links per fabric extender I/O module (IOM). VCE Systems (5400) support
Cisco UCS 2204XP Fabric Extenders IOM only.
The following table shows the compute options that are available for the Cisco UCS 6248UP fabric
interconnects:
Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Connectivity
VCE Systems (5400) contain the Cisco UCS 6248UP fabric interconnects, which uplink to Cisco Nexus 5548UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5548UP switches or Cisco MDS 9148 multilayer fabric switches.
The following table shows the switch options available for the fabric interconnects:
Fabric interconnect   Network topology   Ethernet switches     SAN switches
Cisco UCS 6248UP      Segregated         Cisco Nexus 5548UP    Cisco MDS 9148 Multilayer Fabric Switch
Sample configurations
These elevations are provided for sample purposes only. For specifications for a specific VCE System
design, consult your vArchitect.
Additional references
Virtualization components
This topic provides a description of the virtualization components.
VMware vCenter Server: Provides a scalable and extensible platform that forms the foundation for virtualization management. http://www.vmware.com/products/vcenter-server/
VMware vSphere ESXi: Virtualizes all application servers and provides VMware high availability (HA) and dynamic resource scheduling (DRS). www.vmware.com/products/vsphere/
Compute components
This topic provides a description of the compute components.
Cisco UCS 2200 Series Fabric Extenders: Bring unified fabric into the blade-server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect. www.cisco.com/en/US/prod/collateral/ps10265/ps10276/data_sheet_c78-675243.html
Cisco UCS 5100 Series Blade Server Chassis: Chassis that supports up to eight blade servers and up to two fabric extenders in a six rack unit (RU) enclosure. www.cisco.com/en/US/products/ps10279/index.html
Cisco UCS 6200 Series Fabric Interconnects: Cisco UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. www.cisco.com/en/US/products/ps11544/index.html
Network components
This topic provides a description of the network components.
Cisco Nexus 1000V Series Switches: A software switch on a server that delivers Cisco VN-Link services to virtual machines hosted on that server. www.cisco.com/en/US/products/ps9902/index.html
Cisco MDS 9148 Multilayer Fabric Switch: Provides 48 line-rate 16-Gbps ports and offers cost-effective scalability through on-demand activation of ports. www.cisco.com/en/US/products/ps10703/index.html
Cisco Nexus 3048 Switch: Provides local switching that connects transparently to upstream Cisco Nexus switches, creating an end-to-end Cisco Nexus fabric in data centers. http://www.cisco.com/c/en/us/products/switches/nexus-3048-switch/index.html
Storage components
This topic provides a description of the storage components.
www.vce.com
About VCE
VCE, an EMC Federation Company, is the world market leader in converged infrastructure and converged solutions. VCE accelerates the adoption of converged infrastructure and cloud-based computing models that reduce IT costs while improving time to market. VCE delivers the industry's only fully integrated and virtualized cloud infrastructure systems. VCE solutions are available through an extensive partner network, and cover horizontal applications, vertical industry offerings, and application development environments, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure.
Copyright © 2013-2015 VCE Company, LLC. All rights reserved. VCE, VCE Vision, VCE Vscale, Vblock, VxBlock, VxRack,
and the VCE logo are registered trademarks or trademarks of VCE Company LLC. All other trademarks used herein are
the property of their respective owners.