
VMware vCloud Architecture Toolkit

Public VMware vCloud Implementation Example


Version 2.0.1
October 2011


© 2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html.

VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304
www.vmware.com


Contents

1. Overview
   1.1 Business Requirements
   1.2 Use Cases
   1.3 Document Purpose and Assumptions
   1.4 vCloud Components
   1.5 Abstractions and VMware vCloud Constructs
2. vSphere Design
   2.1 Architecture Overview
   2.2 Site Considerations
   2.3 Management Cluster Design
   2.4 Resource Group Design
3. vCloud Design - Provider Constructs
   3.1 Provider Virtual Datacenters
   3.2 External Networks
   3.3 Network Pools
   3.4 Users/Roles
4. vCloud Design - Consumer Constructs
   4.1 Organizations
   4.2 Organization Virtual Datacenters
   4.3 Organization Networks
   4.4 Catalogs
   4.5 Users/Roles
5. vCloud Security
   5.1 Host Security
   5.2 Network Security
   5.3 vCenter Security
   5.4 VMware vCloud Director Security
   5.5 Additional Security Considerations
6. vCloud Management
   6.1 vSphere Host Setup Standardization
   6.2 vCloud Center of Excellence
   6.3 vCloud Logging
   6.4 vCloud Monitoring
7. Extending vCloud
   7.1 Hybrid vCloud
   7.2 vCloud Connector
   7.3 vCloud API
   7.4 VMware vCenter Orchestrator
8. vCloud Metering
   8.1 Cost Models
   8.2 Reporting
   8.3 Metering Internet Traffic
   8.4 Aggregator Reporting

List of Figures

Figure 1. VMware vCloud Director Abstraction Layer
Figure 2. vSphere Logical Architecture Overview
Figure 3. vCloud Physical Design Overview
Figure 4. vSphere Logical Network Design - Management Cluster
Figure 5. vSphere Logical Network Design
Figure 6. vCloud Logging Organization
Figure 7. vCloud Connector
Figure 8. vCloud API Logical Representation
Figure 9. vCloud Orchestration
Figure 10. One-Time Router Cost Example (vCenter Chargeback UI)


List of Tables

Table 1. Initial vCloud Capacity
Table 2. Document Sections
Table 3. vCloud Components
Table 4. vCloud Components
Table 5. vCenter Servers
Table 6. Management Virtual Machines
Table 7. Management Component Resiliency
Table 8. vSphere Clusters - Management Cluster
Table 9. vSphere Management Cluster DRS Rules
Table 10. Host Logical Design Specifications - Management Cluster
Table 11. Virtual Switch Configuration - Management Cluster
Table 12. Virtual Switch Configuration Settings - Management Cluster
Table 13. Shared Storage Logical Design Specifications - Management Cluster
Table 14. Resource Group Clusters
Table 15. vSphere Cluster Configuration - vCloud Resources
Table 16. Host Logical Design Specifications
Table 17. Virtual Switch Configuration - vCloud Resources
Table 18. dvResSwitch01 Teaming and Failover Policies
Table 19. dvResSwitch01 Security Policies
Table 20. dvResSwitch01 General Policies
Table 21. Storage Logical Design Specifications - vCloud Compute Cluster
Table 22. vSphere Clusters - vCloud Compute Datastores
Table 23. Datastore Size Estimation Factors - res-pod1 Cluster
Table 24. Provider Virtual Datacenter Specifications
Table 25. Virtual Machine Sizing and Distribution
Table 26. Provider External Network Specifications
Table 27. System Administrator User and Group
Table 28. NewCo Fixed-Cost Cost Model
Table 29. Virtual Switch Security Settings
Table 30. vCloud Director Log Locations
Table 31. VMware vCloud Director Monitoring Items
Table 32. vCenter Orchestrator Monitored MBeans
Table 33. vCloud Connector Components
Table 34. Allocation Units for vCloud Hierarchies Based on Allocation Model
Table 35. Management Cluster Inventory
Table 36. vCloud Resources Inventory


1. Overview

This public VMware vCloud implementation example uses a fictitious corporation, New Company (NewCo), to illustrate in detail the implementation of a public VMware vCloud. It is intended to give architects and engineers who are interested in implementing a public vCloud a reference implementation that conforms to VMware best practices, and it describes the logical and physical design and implementation of the components of a VMware vCloud. Each document section elaborates on different aspects and key design decisions of this vCloud solution. This implementation example provides a baseline that is extensible for future usage patterns.

1.1 Business Requirements

The NewCo vCloud implementation provides the following:

- A basic service offering with an instance-based Pay-As-You-Go resource consumption model, in which each provisioned virtual machine is charged separately and a separate billing record is produced for each virtual machine.
- Secure multitenancy that controls workload access and network isolation, permitting multiple organizations within an implementation to share public vCloud resources.
- A self-service portal where Infrastructure as a Service (IaaS) can be consumed from a catalog of predefined applications and services.
- Use of a public vCloud to rapidly provision complex multitier applications or entire environments in response to the dynamic business requirements of an organization.
- Conformance to the vCloud Center of Excellence principles described in Operating a VMware vCloud.

See the Public VMware vCloud Service Definition for additional details.


1.1.1 vCloud Capacity and Considerations

The NewCo vCloud design is configured to initially support 600 organizations and 1,500 virtual machines, with the ability to scale to 5,000 virtual machines under one resource group. Additional details regarding the initial targets are provided in Table 1.

Table 1. Initial vCloud Capacity

Initial Target

- 600 customers/organizations, each with:
  - 30 virtual machines maximum
  - 1 public routed network
  - 1 internal network
- 1,500 virtual machines:
  - 5% 16GB RAM / 4 vCPU
  - 10% 8GB RAM / 2 vCPU
  - 25% 4GB RAM / 1 vCPU
  - 20% 2GB RAM / 1 vCPU
  - 40% 1GB RAM / 1 vCPU
  - Average 40GB storage
- Parallel operations:
  - 30 virtual machine provision/clone operations
  - 10 OVF uploads
- SLA:
  - Infrastructure uptime 99.9%
  - Portal/API uptime 99.9%
  - Virtual machine provisioning < 5 minutes

Use of virtual hardware version 8 allows for larger virtual machines than are defined in this section. NewCo determined that the use cases for the vCloud do not warrant larger virtual machines at this time.

1.2 Use Cases

The target use case for this vCloud environment includes, but is not limited to, transient workloads normally observed with:

- Software development
- Quality assurance and software testing


1.3 Document Purpose and Assumptions

This document is intended to serve as a reference for service providers and assumes familiarity with VMware products, including VMware vSphere, VMware vCenter, VMware vCloud Director, VMware vShield, VMware vFabric, and VMware vCenter Chargeback. It covers both logical and physical design considerations for all VMware vCloud infrastructure components, with each section elaborating on different aspects and key design decisions of a public vCloud implementation.

Public vCloud architecture topics are covered in the document sections listed in Table 2.
Table 2. Document Sections

1. Overview: Overview, business requirements, capacity considerations, and inventory of vCloud components.
2. vSphere Design: Management cluster - management components that support the operation of the resource groups. Resource group - compute, storage, and network resources available for public consumption.
3. vCloud Design - Provider Constructs: VMware vCloud Director provider objects and configuration. Relationship of vCloud Director provider objects to vSphere objects.
4. vCloud Design - Consumer Constructs: VMware vCloud Director organization objects and configuration. Relationship of consumer objects to underlying provider objects.
5. vCloud Security: Considerations as they apply to all management and resource components.
6. vCloud Management: Considerations that apply to vCloud Director management components.
7. Extending vCloud: Available options for increasing the functionality, automation, and orchestration of the vCloud.
8. vCloud Metering: vCenter Chargeback design and configuration as well as vRAM metering of virtual machines.

This document is not intended as a substitute for VMware product documentation. See the
installation and administration guides as well as published best practices for the appropriate
product for further information.


1.4 vCloud Components

Table 3 lists the components that comprise the vCloud.

Table 3. vCloud Components

- VMware vCloud Director - Abstracts and provides secure resource and workload isolation of underlying vSphere resources. Includes:
  - VMware vCloud Director Server (two or more instances, each installed on a Linux virtual machine and referred to as a cell).
  - VMware vCloud Director Database (one instance per clustered set of VMware vCloud Director cells).
- VMware vSphere - vSphere compute, network, and storage resources; the foundation of the underlying vCloud resources. Includes:
  - VMware ESXi hosts (three or more instances for the management cluster and three or more instances for the resource group, also referred to as a compute cell).
  - VMware vCenter Server (one instance managing a management cluster of hosts, and one or more instances managing one or more clusters of hosts reserved for vCloud consumption).
  - vCenter Server Database (one instance per vCenter Server).
- VMware vShield - Provides network security services including Layer 2 isolation, NAT, firewall, DHCP, and VPN. Includes:
  - vShield Manager (one instance per vCenter Server in the resource groups).
  - vShield Edge (deployed automatically by VMware vCloud Director as virtual appliances on hosts within resource groups).
- VMware vCenter Chargeback - Provides resource metering and cost models. Includes:
  - vCenter Chargeback Server (one or more instances).
  - vCenter Chargeback database (one or more instances).
  - vCloud Director data collector (one or more instances).
  - vShield Manager data collector (one per resource group).

See Architecting a VMware vCloud for additional information about the vCloud components and options for planning, deployment, and configuration.

1.5 Abstractions and VMware vCloud Constructs

Key features of the vCloud architecture are resource pooling, abstraction, and isolation. VMware vCloud Director further abstracts the virtualized resources presented by vSphere by providing the following logical constructs that map to vSphere logical resources:

- Organization - A logical object that provides a security and policy boundary. Organizations are the main method of establishing multitenancy and typically represent a customer in a public vCloud, or a business unit or project in a private vCloud.
- Virtual datacenter - A deployment environment in which virtual machines run.
- Organization virtual datacenter - An organization's allocated portion of provider virtual datacenter resources, including CPU, RAM, and storage.
- Provider virtual datacenter - vSphere resource groupings of compute, storage, and network that power organization virtual datacenters.

Figure 1. VMware vCloud Director Abstraction Layer


2. vSphere Design

2.1 Architecture Overview

vSphere resources are organized and separated into:

- A management cluster containing all core components and services needed to run the vCloud.
- One resource group representing dedicated resources for vCloud consumption. Each compute cluster of ESXi hosts is managed by a vCenter Server and is under the control of VMware vCloud Director. Multiple compute clusters can be managed by the same VMware vCloud Director instance as additional capacity or service offerings are added.

Reasons for organizing and separating vSphere resources along these lines are:

- It facilitates quicker troubleshooting and problem resolution, because management components are strictly contained in a relatively small and manageable management cluster.
- It provides resource isolation between workloads running in the vCloud and the systems used to manage the vCloud.
- It separates the management components from the resources they are managing.
- Resources allocated for vCloud use have little reserved overhead. For example, vCloud resources do not host vCenter virtual machines.
- vCloud resources can be consistently and transparently managed, carved up, and scaled horizontally.

The components that comprise the vCloud are listed in Table 4.

Table 4. vCloud Components

- vCenter Server version 5.0
- vSphere ESXi version 5.0 hosts
- vCenter Chargeback Server version 1.6.2
- vCenter Chargeback Collectors version 1.6.2
- vShield Manager version 5.0
- Microsoft SQL Server 2008 R2 Standard 64-bit, hosting:
  - vCenter Server databases
  - vCloud Director database
  - vCenter Update Manager database
  - vCenter Chargeback database
- vCloud Director 1.5
- vFabric Hyperic 4.5
- Active Directory 2008
- Syslog-ng
- vCenter Update Manager 5.0
- vCenter Configuration Manager
- vCenter Orchestrator 5.0

These components map to the management cluster as noted in Section 2.3, Management Cluster
Design. For a complete bill of materials, see Appendix A: Bill of Materials.

The high-level logical architecture is illustrated in Figure 2.
Figure 2. vSphere Logical Architecture Overview

Figure 3 shows the physical design that corresponds to the logical architecture.
Figure 3. vCloud Physical Design Overview

2.2 Site Considerations

There is enough floor space, power, and cooling capacity for the management group and the resource groups to both reside within a single physical datacenter and scale to support 5,000 virtual machines, as defined in the requirements.
Table 5. vCenter Servers

vCenter         Datacenter   Purpose
mgmt-vc1        mgmt-newco   Provides compute resource clusters for vCloud management components.
res1-newco-vc1  res1-newco   Provides compute resource clusters for tenant workloads.


2.3 Management Cluster Design

The vSphere management cluster design encompasses the ESXi hosts contained in the management group. The scope is limited to only the infrastructure components used to operate the vCloud resource group workloads. The virtual machines that run in the management group are listed in Table 6.
Table 6. Management Virtual Machines

Virtual Machine   Purpose
mgmt-vcd1         vCloud Director cell.
mgmt-vcd2         vCloud Director cell.
res1-newco-vc1    vCenter Server dedicated to vCloud Director and managing vCloud resources.
mgmt-vc1          vCenter Server dedicated to administering the management group.
res-newco-vsm1    vShield Manager server paired with res1-newco-vc1.
mgmt-newco-vsm1   vShield Manager server paired with mgmt-vc1.
mgmt-lb1          Virtual load balancer appliance.
mgmt-lb2          Virtual load balancer appliance.
mgmt-vsm1         vShield Manager server paired with the management vCenter Server.
mgmt-vco1         vCenter Orchestrator server.
mgmt-vco2         vCenter Orchestrator server.
mgmt-ad1          Active Directory 2008 server.
mgmt-ad2          Active Directory 2008 server.
mgmt-dns1         DNS, SMTP, and NTP node.
mgmt-dns2         DNS, SMTP, and NTP node.
mgmt-ipam         IPAM server for the management cluster.
mgmt-hyperic      VMware vFabric Hyperic server.
mgmt-syslog1      Syslog appliance for the management and resource clusters.
mgmt-syslog2      Syslog appliance for the management and resource clusters.
mgmt-vma          vSphere Management Assistant.
mgmt-cb1          vCenter Chargeback Server.
mgmt-um1          vCenter Update Manager.
mgmt-mssql1       Microsoft SQL Server 2008 R2.
mgmt-mssql2       Microsoft SQL Server 2008 R2.
res-mssql1        Microsoft SQL Server 2008 R2 attached to the resource vCenter Servers.
res-mssql2        Microsoft SQL Server 2008 R2 attached to the resource vCenter Servers.

2.3.1 Management Component Resiliency Considerations

The following management components rely on vSphere HA, FT, and third-party clustering for redundancy.

Table 7. Management Component Resiliency

Management Component               HA Enabled   VM Monitoring   FT    vCenter Heartbeat   Clustered
vCenter Server                     Yes          Yes             No    Yes                 No
VMware vCloud Director             Yes          Yes             No    N/A                 No
vCenter Chargeback Server          Yes          Yes             No    N/A                 No
vShield Manager                    Yes          Yes             Yes   N/A                 No
SQL Server 2008 R2 Standard (x64)  Yes          Yes             No    N/A                 Yes
VMware vCenter Orchestrator        Yes          Yes             No    N/A                 Yes
Active Directory                   Yes          Yes             No    N/A                 No
VMware Data Recovery               Yes          Yes             No    N/A                 No


2.3.2 vCloud Cell Load Balancing

VMware vCloud Director supports multiple cells in a management cluster. All of the cells act as front ends for the same database, but adding cells and providing front-end load balancing can:

- Increase the number of concurrent operations.
- Increase the number of simultaneous consoles accessible through the console proxy service.
- Increase the number of vCenter Server operations carried out (assuming that the number of vCenter Servers scales along with the number of vCloud Director cells).
- Allow for upgrade and maintenance of the vCloud Director cells without having to disable the public vCloud service.

A simple cell health probe, suitable for the load balancer, is sketched below.
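The following is a minimal, illustrative Python sketch of such a probe. It assumes the /cloud/server_status health URL commonly used for vCloud Director cell checks (verify the exact URL against your version); the cell FQDNs are hypothetical names derived from Table 6.

```python
#!/usr/bin/env python
"""Poll vCloud Director cells behind the load balancer (illustrative sketch).

Assumes the /cloud/server_status health URL; verify it against your vCloud
Director version. Cell FQDNs are placeholders based on Table 6.
"""
import ssl
import urllib.request

CELLS = ["mgmt-vcd1.newco.local", "mgmt-vcd2.newco.local"]  # placeholder FQDNs

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # cells often run self-signed certificates

def cell_is_up(host: str) -> bool:
    """Return True if the cell answers its health probe with HTTP 200."""
    url = "https://%s/cloud/server_status" % host
    try:
        with urllib.request.urlopen(url, timeout=5, context=ctx) as resp:
            return resp.status == 200
    except OSError:
        return False

for cell in CELLS:
    print("%s: %s" % (cell, "UP" if cell_is_up(cell) else "DOWN"))
```

A hardware or virtual load balancer (mgmt-lb1/mgmt-lb2 in this design) would run an equivalent probe to remove a failed cell from rotation during upgrades or outages.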

2.3.3 vSphere Clusters

The management cluster is configured with the following vSphere HA/DRS settings.

Table 8. vSphere Clusters - Management Cluster

Attribute                               Specification
Cluster Name                            Mgmt-NewCo1
Number of ESXi Hosts                    3
VMware DRS Configuration                Fully automated
VMware DRS Migration Threshold          3 (of 5)
VMware HA Enable Host Monitoring        Yes
VMware HA Admission Control Policy      Enabled (percentage based)
VMware HA Percentage                    33% CPU, 33% memory (N+1 for a 3-host cluster)
VMware HA Admission Control Response    Disallow virtual machine power-on operations that violate availability constraints
VMware HA Default VM Restart Priority   N/A
VMware HA Host Isolation Response       Leave virtual machine powered on
VMware HA Enable VM Monitoring          Yes
VMware HA VM Monitoring Sensitivity     Medium

VMware DRS dynamically balances computing capacity across a collection of hardware resources aggregated into logical resource pools, continuously monitoring utilization across resource pools and intelligently allocating available resources among virtual machines based on predefined rules that reflect business or application needs. DRS rules allow for the separation of virtual machines so that there is limited impact in the event of a host failure or isolation event. The DRS rules detailed in Table 9 are configured so that the operation of management resources is minimally impacted by a host or chassis failure; a programmatic sketch for creating one of these rules follows the table.

Table 9. vSphere Management Cluster DRS Rules

Rule                 Type                             Description
Mgmt-LDAP            Separate virtual machines        Keep Active Directory 2008 servers separated.
Resource-vCenter     Separate virtual machines        Keep resource group vCenter Servers separated.
Mgmt-DNS             Separate virtual machines        Keep DNS, SMTP, and NTP services from running on the same host.
Mgmt-SYSLOG          Separate virtual machines        Keep syslog appliances separated.
Mgmt-VCD             Separate virtual machines        Keep vCloud Director servers separated.
Mgmt-Chargeback      Keep virtual machines together   Keep the vCenter Chargeback Server and vCenter Chargeback database together.
Resource-vCenterDB   Keep virtual machines together   Keep the vCenter Server and vCenter Server database paired together.
Mgmt-LB              Separate virtual machines        Separate the virtual load balancing appliances.
Mgmt-VCO             Separate virtual machines        Separate the vCenter Orchestrator servers.
Resource-VSM         Keep virtual machines together   Keep the associated resource group vCenter Server and its paired vShield Manager appliance together.
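These rules are typically created in the vSphere Client. For completeness, the following is a minimal sketch using the open-source pyVmomi bindings, which are not part of this design's bill of materials; the cluster and VM names come from Tables 6 and 8, and the hostname and credentials are placeholders.

```python
#!/usr/bin/env python
"""Create the Mgmt-VCD anti-affinity rule (illustrative pyVmomi sketch).

pyVmomi is used here only for illustration; it is not part of this design's
bill of materials. Hostname and credentials are placeholders.
"""
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # lab-style connection; tighten in production

si = SmartConnect(host="mgmt-vc1.newco.local", user="administrator",
                  pwd="<password>", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Locate a managed object by name via a container view."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

cluster = find(vim.ClusterComputeResource, "Mgmt-NewCo1")
cells = [find(vim.VirtualMachine, n) for n in ("mgmt-vcd1", "mgmt-vcd2")]

# Anti-affinity keeps the two vCloud Director cells on separate hosts,
# mirroring the Mgmt-VCD rule in Table 9.
rule = vim.cluster.AntiAffinityRuleSpec(name="Mgmt-VCD", enabled=True, vm=cells)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```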


2.3.4 Host Logical Design

Each ESXi host in the management cluster has the following specifications.

Table 10. Host Logical Design Specifications - Management Cluster

Attribute               Specification
Host Type and Version   Blade, VMware ESXi 5
Processors              2 x Intel Xeon X5650 2.66 GHz (6 core, Westmere)
Storage                 NFS storage
Networking              802.1Q trunk port connectivity participating in the following VLANs:
                        - VLAN 600 management network (console)
                        - VLAN 610 VMkernel (vMotion, non-routable)
                        - VLAN 620 VMkernel (iSCSI)
                        - VLAN 630 VMkernel (NFS)
                        - VLAN 640 Fault Tolerance (non-routable)
                        - VLAN 650 management network (console)
Memory                  96GB

2.3.5 Network Logical Design

The network logical design defines how the vSphere virtual networking is configured. Following best practices, the network architecture must meet the following requirements:

- Separate networks for vSphere management, virtual machine connectivity, VMware vSphere vMotion traffic, IP storage, and VMware Fault Tolerance.
- Redundant dvSwitch ports with at least two active physical NIC adapters each.
- Redundancy across different physical adapters to protect against NIC or PCI slot failure.
- Redundancy at the physical switch level.
- A mandatory dvSwitch in the management cluster.

Table 11. Virtual Switch Configuration - Management Cluster

Switch Name     Switch Type   Function              # of Physical NIC Ports
dvMgmtSwitch0   Distributed   Management console,   2 x 10GigE (teamed for failover)
                              VMkernel vMotion,
                              VMkernel NFS,
                              Fault Tolerance,
                              virtual machine

Figure 4 shows the virtual network infrastructure design for the vSphere management cluster.

Figure 4. vSphere Logical Network Design - Management Cluster

Table 12. Virtual Switch Configuration Settings - Management Cluster

Parameter           Port Group                    Setting
Load Balancing      All                           Route based on originating port ID
Failover Detection  All                           Link status
Notify Switches     All                           Enabled
Failback            All                           No
Failover Order      Management                    vmnic0 active, vmnic1 standby
                    vMotion                       vmnic1 active, vmnic0 standby
                    Production virtual machines   vmnic0 active, vmnic1 standby
                    NFS                           vmnic1 active, vmnic0 standby
                    Fault Tolerance               vmnic1 active, vmnic0 standby

2.3.6 Shared Storage Logical Design

The shared storage logical design defines how vSphere storage is configured. Different volumes from the same storage system are used for the management cluster and the vCloud resources.

Following best practices, the shared storage architecture must meet the following requirements:

- Storage paths are redundant at the host (connector), switch, and storage array levels.
- All hosts in the management cluster have access to the same volumes, but are isolated from datastores in the resource cluster.

Table 13. Shared Storage Logical Design Specifications - Management Cluster

Attribute                    Specification
Number of Initial Volumes    5 dedicated
Volume Size                  1TB
Datastores per Volume        1
Virtual Machines per Volume  No greater than 15, while distributing redundant virtual machines


2.3.7 vCloud Director Transfer Storage

To provide temporary storage for uploads and downloads, a 500GB network share must be presented to all cells in the cluster. The transfer server storage volume must have write permission for root. Each cell must mount this storage at $VCLOUD_HOME/data/transfer (by default, /opt/vmware/cloud-director/data/transfer).
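A quick way to confirm the mount on each cell is a small check script. The following sketch is illustrative, uses only the Python standard library, and assumes the default transfer path from above.

```python
#!/usr/bin/env python
"""Verify the vCloud Director transfer share on a cell (illustrative sketch).

Checks that the default transfer path exists, is a mount point rather than
the cell's local disk, and is writable by the current user.
"""
import os

TRANSFER = "/opt/vmware/cloud-director/data/transfer"  # $VCLOUD_HOME/data/transfer

def check_transfer(path: str = TRANSFER) -> None:
    if not os.path.isdir(path):
        raise SystemExit("missing transfer directory: %s" % path)
    if not os.path.ismount(path):
        # A local (unmounted) directory would break multi-cell transfers.
        raise SystemExit("%s is not a mount point" % path)
    if not os.access(path, os.W_OK):
        raise SystemExit("%s is not writable by the current user" % path)
    print("transfer storage OK: %s" % path)

if __name__ == "__main__":
    check_transfer()
```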

2.4 Resource Group Design

The resource group design represents the ESXi host clusters and infrastructure used to run the vApps that are provisioned and managed by vCloud Director. In this section, the scope is further limited to only the infrastructure dedicated to the vCloud workloads.

2.4.1 Virtual Datacenters

Each resource cluster is linked to a single virtual datacenter associated with a single vCenter instance.

Table 14. Resource Group Clusters

Datacenter   Purpose
res-newco1   Provides compute resource clusters for one of the NewCo resource groups.

2.4.2 vSphere Clusters

All vSphere clusters are configured similarly, with the following specifications.

Table 15. vSphere Cluster Configuration - vCloud Resources

Attribute                               Specification
Number of ESXi Hosts                    16
VMware DRS Configuration                Fully automated
VMware DRS Migration Threshold          3 (of 5)
VMware HA Enable Host Monitoring        Yes
VMware HA Admission Control Policy      Enabled (percentage based)
VMware HA Percentage                    7% CPU, 7% memory (N+1 for a 16-host cluster)
VMware HA Admission Control Response    Prevent virtual machines from being powered on if they violate availability constraints
VMware HA Default VM Restart Priority   N/A
VMware HA Host Isolation Response       Leave virtual machine powered on


2.4.3 Host Logical Design

Each ESXi host in the vCloud resources has the following specifications.

Table 16. Host Logical Design Specifications

Attribute               Specification
Host Type and Version   Blade, VMware ESXi 5
Processors              2 x Intel Xeon X5650 2.66 GHz (6 core, Westmere)
Storage                 NFS
Networking              802.1Q trunk port connectivity participating in the following VLANs:
                        - VLAN 700 management network (console)
                        - VLAN 710 VMkernel (vMotion, non-routable)
                        - VLAN 720 VMkernel (iSCSI)
                        - VLAN 730 VMkernel (NFS)
                        - VLAN 750 management network (console)
Memory                  96GB

2.4.4 Network Logical Design

The network logical design defines how vSphere virtual networking is configured. Following best practices, the network architecture must meet the following requirements:

- Separate networks for virtual machine connectivity and specific VMkernel port groups.
- Isolation of organizations from other VLANs maintained across the physical and virtual networking infrastructure.
- dvSwitch with, at minimum, two active physical adapter ports.
- Redundancy across different physical adapters to protect against NIC or PCI slot failure.
- Redundancy at the physical switch level.

Table 17. Virtual Switch Configuration - vCloud Resources

Switch Name     Switch Type   Function            # of NIC Ports
dvResSwitch01   Distributed   Network pools,      2 x 10GigE (teamed for failover)
                              external networks

When using the distributed virtual switch, the number of dvUplink ports equals the number of physical NIC ports on each host. The physical NIC ports are connected to redundant physical switches.

Figure 5 depicts the vSphere virtual network infrastructure design.

Figure 5. vSphere Logical Network Design

Table 18. dvResSwitch01 Teaming and Failover Policies

Parameter           Port Group         Setting
Load Balancing      All                Route based on NIC load (for vDS)
Failover Detection  All                Link status
Notify Switches     All                Enabled
Failback            All                No
Failover Order      Management         vmnic0 active, vmnic1 standby
                    vMotion            vmnic1 active, vmnic0 standby
                    External Network   vmnic0 active, vmnic1 standby
                    Virtual Machine    vmnic0 active, vmnic1 standby
                    NFS                vmnic1 active, vmnic0 standby
                    Fault Tolerance    vmnic1 active, vmnic0 standby

Table 19. dvResSwitch01 Security Policies

Parameter            Port Group   Setting
Promiscuous Mode     All          Reject
MAC Address Change   All          Reject
Forged Transmits     All          Reject

Table 20. dvResSwitch01 General Policies

Parameter      Port Group                 Setting
Port binding   Production external net    Ephemeral - no binding
Port binding   Development external net   Ephemeral - no binding

2.4.5 Shared Storage Logical Design

The shared storage design defines how the vSphere volumes are configured. Following best practices, the shared storage architecture must meet the following requirements:

- Storage paths are redundant at the host, switch, and storage array levels.
- All hosts in a cluster have access to the same volumes.

Table 21. Storage Logical Design Specifications - vCloud Compute Cluster

Attribute                    Specification
Cluster                      res-pod1
Number of Initial LUNs       100
LUN Size                     500GB
Virtual Machines per Volume  15 (simultaneously active virtual machines)


2.4.6 vCloud Resources Datastore Considerations

When sizing datastores, determine a limit for the number of virtual machines per datastore. The reason for limiting this number is to minimize the potential for SCSI locking due to metadata updates and to spread I/O across as many storage processors as possible. Most mainstream storage vendors provide VMware-specific guidelines for this limit, and VMware recommends an upper limit of 15 active virtual machines per VMFS datastore, regardless of storage platform. The number of virtual machines per LUN is also influenced by the size and I/O requirements of the virtual machines and, perhaps more importantly, by the selected storage solution and disk types.

When VMware vCloud Director provisions virtual machines, it automatically places them on datastores based on the free disk space of each of the datastores associated with an organization virtual datacenter. Because of this mechanism, keep the size of the LUNs and the number of virtual machines per LUN relatively low to avoid possible I/O contention.

When considering the number of virtual machines to place on a single datastore, weigh the following factors in conjunction with any recommended VMs-per-LUN ratio:

- Average virtual machine workload/profile (in particular, the amount of I/O).
- Typical virtual machine size (including configuration files, logs, swap files, and snapshot files).
- VMFS metadata.
- Maximum requirement for IOPS and throughput per LUN, which depends on the storage array and design.
- Maximum RTO if a LUN is lost, that is, your backup and restore design.

Approaching this from an average I/O profile, it would be tempting to create all LUNs the same, say as RAID 5, and let the law of averages take care of I/O distribution across the LUNs and the virtual machines on them. Another approach is to create LUNs with different RAID profiles based on anticipated workloads to provide differentiated levels of service. These levels of service are represented at the vSphere level by an HA/DRS cluster and its associated mapped storage and network objects. The vCloud logical design maps provider virtual datacenters to these clusters. To achieve the desired levels of service, NewCo will start with one underlying vSphere vCloud compute cluster with dedicated storage. Additional vCloud compute clusters will be attached to their own dedicated storage as they are added.

Table 22. vSphere Clusters - vCloud Compute Datastores

Cluster Name   Datastores    Quantity   RAID   Size
res-pod1       res-pod1-xx   100        5      500GB

Where xx = the LUN ID or volume for that device.

As a starting point, VMware recommends RAID 5 storage profiles, creating storage tier-specific provider virtual datacenters only as one-offs to address specific organization or business unit requirements.


2.4.7 Datastore Sizing Estimation

An estimate of the typical datastore size can be approximated by considering the following factors.

Table 23. Datastore Size Estimation Factors - res-pod1 Cluster

Variable                                Value
Maximum Number of VMs per Volume        15
Average Size of Virtual Disk(s) per VM  40GB
Average Memory Size per VM              2GB
Safety Margin                           20% (to avoid warning alerts)

For example:

((15 * 40GB) + (15 * 2GB)) + 20% = (600GB + 30GB) * 1.2 = 756GB
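The same estimate can be expressed as a short calculation; the sketch below simply mirrors the numbers in Table 23 (disk plus swap space equal to VM memory, padded by the safety margin).

```python
def datastore_size_gb(vms_per_volume: int = 15, avg_disk_gb: float = 40.0,
                      avg_mem_gb: float = 2.0, safety_margin: float = 0.20) -> float:
    """Estimate datastore size: virtual disks plus swap (one memory-sized
    file per VM), padded by a safety margin to stay clear of capacity alerts."""
    base = vms_per_volume * (avg_disk_gb + avg_mem_gb)
    return base * (1 + safety_margin)

print(datastore_size_gb())  # 756.0 GB for the res-pod1 factors above
```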

2.4.8 Storage vMotion

Storage vMotion in ESXi 5.0 has been improved to support migration of linked clones, the technology used to implement fast provisioning in vCloud Director. In a vCloud Director environment, migration of linked clones can be invoked only at the vCloud Director layer, through the REST API Relocate_VM method. In vCloud Director 1.5, this API call is the only way to migrate vApps provisioned through fast provisioning; invoking Storage vMotion migration of linked clone virtual machines at the vSphere layer is not supported. When invoking the Relocate_VM API to migrate linked clones, make sure that the target organization virtual datacenter is part of the same provider virtual datacenter as the source organization virtual datacenter, or is backed by a provider virtual datacenter that has the same datastore where the source vApp resides. If this condition is not met, the API call fails.

Be aware of the following when leveraging Storage vMotion in a vCloud environment (a request sketch follows this list):

- Source and destination volumes for Storage vMotion should both reside within the same provider virtual datacenter or vSphere cluster.
- For provider virtual datacenters that leverage fast provisioning, linked clones become full clones when virtual machines are migrated using Storage vMotion.
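The following is a minimal sketch of the Relocate_VM call. The endpoint and media type shown reflect our reading of the vCloud API 1.5 relocate operation and should be verified against the API reference for your version; the hrefs and session token are placeholders.

```python
#!/usr/bin/env python
"""Relocate a linked-clone VM via the vCloud REST API (illustrative sketch).

Endpoint and media type are assumptions based on the vCloud API 1.5 relocate
operation; verify against the API reference. All hrefs and the token below
are placeholders, and the request needs a valid login session.
"""
import urllib.request

VM_HREF = "https://vcd.newco.local/api/vApp/vm-<id>"                 # placeholder
DATASTORE_HREF = "https://vcd.newco.local/api/admin/extension/datastore/<id>"  # placeholder
TOKEN = "<x-vcloud-authorization token from the login call>"          # placeholder

body = (
    '<RelocateParams xmlns="http://www.vmware.com/vcloud/v1.5">'
    '<Datastore href="%s"/></RelocateParams>' % DATASTORE_HREF
).encode()

req = urllib.request.Request(
    VM_HREF + "/action/relocate", data=body, method="POST",
    headers={
        "x-vcloud-authorization": TOKEN,
        "Accept": "application/*+xml;version=1.5",
        "Content-Type": "application/vnd.vmware.vcloud.relocateVmParams+xml",
    })
# The call returns a Task element; poll it until the relocation completes.
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:200])
```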

2.4.9 Storage I/O Control

Storage I/O Control (SIOC) provides QoS for VMFS and, as of vSphere 5, NFS datastores, and allows intelligent performance management across all nodes within an HA/DRS cluster. Enabling SIOC on all datastores in a cluster prevents virtual machines from monopolizing storage I/O and provides a weighting mechanism that supplies adequate performance using relative shares. SIOC does not support datastores with multiple extents.


2.4.10 Storage DRS

Storage DRS provides initial placement and ongoing balancing recommendations for datastores in a Storage DRS-enabled datastore cluster. Currently, datastore clusters cannot serve as a storage source for a provider virtual datacenter. As a result, Storage DRS is not used with the resource groups, but it is enabled for the management group.

2.4.11 vSphere Storage APIs - Array Integration

vSphere Storage APIs - Array Integration (VAAI) enables storage-based hardware acceleration by allowing vSphere to pass storage primitives to supported arrays, offloading functions such as full copy, block zeroing, and locking. VAAI improves storage task execution times, network traffic utilization, and host CPU utilization during heavy storage operations. VAAI is now supported with NAS primitives and, as a result, is used with the vCloud resource groups.


3. vCloud Design - Provider Constructs

This section covers the vCloud Director provider constructs and how they map to the underlying vSphere resources described in Section 2.

3.1 Provider Virtual Datacenters

A vSphere cluster can scale to 32 hosts (typically 8-12 hosts is a good starting point, allowing for future growth), allowing for up to 14 clusters per vCenter Server (bounded by the maximum possible number of hosts per datacenter) and an upper limit of 10,000 virtual machines (a vCenter limit).

The recommendation provided in Architecting a VMware vCloud is to start with 50% of the maximum cluster size and add hosts to the cluster as dictated by tenant consumption. When utilization of the total compute resources across the resource group for the cluster reaches 60%, VMware recommends deploying a new provider virtual datacenter. This provides for growth within the provider virtual datacenter for the existing organizations/business units without necessitating their migration as utilization nears the limits of a cluster's resources.

As an example, a fully loaded resource group contains 14 provider virtual datacenters and up to 350 ESXi hosts, giving an average virtual machine consolidation ratio of 26:1, assuming a 5:1 vCPU:pCPU ratio. To increase this ratio, NewCo would need to increase the vCPU:pCPU ratio it is willing to support. The risk associated with increased CPU over-commitment is mainly degraded overall performance, which can result in higher than acceptable vCPU ready times. The vCPU:pCPU ratio is based on the amount of CPU over-commitment for the available cores with which NewCo is comfortable. For virtual machines that are not busy, this ratio can be increased without any undesirable effect on virtual machine performance. Monitoring vCPU ready times helps identify whether the ratio needs to be increased or decreased on a per-cluster basis. A 5:1 ratio is a good starting point for a multi-core system.
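A quick sanity check of these figures follows as a sketch. The host core count and over-commitment ratio are quoted above; the 1.25 average vCPUs per VM is derived from the distribution in Table 25 and is an assumption of this illustration.

```python
# Back-of-the-envelope check of the fully loaded resource group figures.
HOSTS = 350             # 14 provider virtual datacenters, fully loaded
CORES_PER_HOST = 12     # 2 sockets x 6 cores (Table 16)
VCPU_PER_PCPU = 5       # over-commitment ratio NewCo is comfortable with
CONSOLIDATION = 26      # average VMs per host cited above
AVG_VCPU_PER_VM = 1.25  # weighted average of the Table 25 VM sizes

vcpu_capacity = HOSTS * CORES_PER_HOST * VCPU_PER_PCPU  # 21,000 vCPUs
vms = HOSTS * CONSOLIDATION                             # 9,100 VMs
vcpu_demand = vms * AVG_VCPU_PER_VM                     # 11,375 vCPUs

# 9,100 VMs stays under the 10,000 powered-on VM vCenter limit, and the
# estimated vCPU demand fits well within the over-committed vCPU capacity.
print(vcpu_capacity, vms, vcpu_demand)
```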
A provider virtual datacenter can map to only one vSphere cluster, but can map to multiple datastores and networks.

Multiple provider virtual datacenters are used to map to different types/tiers of resources:

- Compute - a function of the mapped vSphere clusters and the resources that back them.
- Storage - a function of the underlying storage types of the mapped datastores.
- Networking - a function of the mapped vSphere networking in terms of speed and connectivity.

Multiple provider virtual datacenters are created for the following reasons:

- The vCloud requires more compute capacity than a single vSphere cluster provides (a vSphere resource pool cannot span vSphere clusters).
- Tiered storage is required; each provider virtual datacenter maps to datastores on storage with different characteristics.
- Workloads are required to run on physically separate infrastructure.

Table 24. Provider Virtual Datacenter Specifications

Attribute                             Specification
Number of Provider Virtual
Datacenters                           2
Number of Default External Networks   1

VMware recommends assessing workloads to assist in sizing. The following is a sample sizing table that can be used as a reference for future design activities. Virtual machine distribution is based on the percentages outlined in the service offering initial target, with a maximum of 5,000 virtual machines.

Table 25. Virtual Machine Sizing and Distribution

Virtual Machine Size   Distribution   Number of Virtual Machines
1 vCPU/1GB RAM         40%            2000
1 vCPU/2GB RAM         20%            1000
1 vCPU/4GB RAM         25%            1250
2 vCPU/8GB RAM         10%            500
4 vCPU/16GB RAM        5%             250
Total                  100%           5000


3.1.1 Provider Virtual Datacenter Sizing

Each NewCo provider virtual datacenter corresponds to one and only one vSphere HA/DRS cluster. Though a vSphere 5 cluster can scale to 32 hosts, each new cluster starts with 16 hosts, allowing for future growth until utilization reaches 60%. Because the infrastructure is pooled and abstracted, capacity can be added to the vCloud this way, expanding provider virtual datacenters and the corresponding clusters without impacting running vApps. If expanding an existing cluster is not an option, VMware recommends deploying a new provider virtual datacenter and corresponding cluster.

The design calls for two clusters, initially sized at sixteen hosts each. A single vCenter Server is limited to 1,000 ESXi hosts and 10,000 powered-on virtual machines if spread across more than one VMware datacenter. In this configuration, each vCenter hierarchy acts as a large resource pool that can scale up through the addition of hosts to existing vSphere clusters or by adding vSphere clusters and associated provider virtual datacenters. Multiple clusters can be managed by the same VMware vCloud Director instance and can represent different levels of service.

Based on analysis of the existing vSphere environment, NewCo averages a 4:1 vCPU to physical CPU core ratio for its virtual machines. Each of the vSphere clusters provides approximately 192 usable cores, based on the host hardware configuration and HA availability. At the estimated 4:1 vCPU:pCPU ratio, this provides the ability to run 768 virtual machines of similar size and performance characteristics in the vCloud. As previously noted, the risk associated with increased CPU over-commitment is mainly degraded overall performance, which can result in higher than acceptable vCPU ready times. The vCPU:pCPU ratio is based on the amount of CPU over-commitment for the available cores with which NewCo is comfortable. For virtual machines that are not busy, this ratio can be increased without any undesirable effect on virtual machine performance. Monitoring vCPU ready times helps identify whether the ratio needs to be increased or decreased on a per-cluster basis.

3.1.2 Provider Virtual Datacenter Expansion

vCloud Director 1.5 introduced the concept of elastic virtual datacenters, allowing a provider virtual datacenter to recognize compute, network, and storage resources from multiple resource pools or vSphere clusters. In vCloud Director 1.5, only Pay-As-You-Go organization virtual datacenters can be backed by multiple resource pools or vSphere clusters. Organization virtual datacenters that use the Reservation Pool or Allocation Pool allocation models (the Committed and Dedicated service offerings, respectively) cannot be backed by elastic virtual datacenters. To maintain consistency, all datastores that are mapped to the underlying vSphere clusters beneath an elastic provider virtual datacenter are added to the provider virtual datacenter.

3.1.3 Provider Virtual Datacenter Storage

When creating a provider virtual datacenter, the vCloud administrator adds all of the shared storage LUNs available to the HA/DRS cluster to which the provider virtual datacenter is mapped. Storage LUNs should usually be mapped only to the hosts within an HA/DRS cluster, to facilitate vSphere vMotion and DRS. vCloud Director 1.5 does not understand the datastore clusters introduced in vSphere 5, so datastores should be added individually to provider virtual datacenters.

For the Gold provider virtual datacenter, this means adding the 10 shared storage LUNs from the SAN with the naming standard Fc02-vcdgold01-xx, where xx is the LUN ID. For the Silver provider virtual datacenter, add the eight shared storage LUNs from the SAN with the naming convention Fc02-vcdsilv01-xx, where xx is the LUN ID. Keep all of the LUNs within a provider virtual datacenter on storage with the same performance and RAID characteristics to provide a consistent level of service to consumers. Use only shared storage, so that vSphere vMotion and DRS can function.

3.2 External Networks

A vCloud external network is a logical construct that maps directly to a vSphere port group that has multiple vmnic uplinks to a physical network. This construct represents an external connection for communication into and out of the vCloud. NewCo provides each organization with a guarantee of one routable organization external network, with the ability to request additional organization external networks as needed.

Table 26. Provider External Network Specifications

Attribute                                     Specification
Number of Default External Networks           1
Maximum # of Organization External Networks   600
Default Network Pool Types Used               vCloud Director Network Isolation (VCD-NI)
Routable Public IP Addresses                  4096 (a /20 block)

More than one vCenter Server is required to manage 600 networks under a VCD-NI pool. Additional vCenter Servers will be added as VCD-NI network pools are exhausted.
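As a quick check of the public address budget against the 600-organization target, Python's ipaddress module confirms the /20 arithmetic. The prefix shown is a documentation range, not NewCo's actual allocation, and the per-organization figure is purely illustrative.

```python
import ipaddress

# The /20 public block from Table 26 (prefix is a documentation range).
block = ipaddress.ip_network("203.0.113.0/20", strict=False)
print(block.num_addresses)        # 4096 routable addresses

# Spread across the 600 organizations, each guaranteed one routed external
# network, this leaves several public addresses per organization for
# NAT/VPN endpoints.
print(block.num_addresses // 600)  # 6
```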

3.3 Network Pools

Network pools are a vCloud Director construct representing a preconfigured, vCloud-controlled pool of Layer 2 isolated networks that are automatically used to provide isolation between organizations, or even between vApps within an organization. Aside from the Layer 2 isolation function, they also enable self-service by abstracting the complicated underlying networking configuration from the application owner.

NewCo will provide the following sets of network pools based on need:

- VMware vCloud Director Network Isolation-backed (VCD-NI).
- VLAN-backed.

For the VCD-NI pool, VMware recommends that the transport VLAN (VLAN ID 1254) be a VLAN that is not otherwise in use within the infrastructure, for increased security and isolation.

3.4 Users/Roles

For security purposes, system administrators have a separate role and log into a different context than the vCloud consumers, who exist within an organization. As a provider construct, the system administrator role can modify all organizations within the vCloud and can create and configure objects that vCloud consumers cannot. The system administrator role should be reserved for a limited group of administrators: because this role can create and destroy objects, and can make configuration changes that negatively impact multiple organizations, users who hold it should be knowledgeable about storage, networking, virtualization, and vCloud. The design calls for a single local account (cloudadmin) to be used as a backup for accessing vCloud Director. The primary access method is managed by adding members to the cloudadmins LDAP group.

Table 27. System Administrator User and Group

Account             User/Group   Type    Role
cloudadmin          User         Local   System Administrator
NewCo\cloudadmins   Group        LDAP    System Administrator


4. vCloud Design - Consumer Constructs

4.1 Organizations

Except for the service provider's default organization, whose primary function is publishing vApps and media for consumption by tenants, new organizations are created on demand and are not defined in this section.

4.2 Organization Virtual Datacenters

An organization virtual datacenter is a subset of a provider virtual datacenter, backed by a pool of compute, memory, storage, and network resources. An organization virtual datacenter can be expanded by a vCloud system administrator to provide additional capacity by reserving compute, network, and storage up to the existing capacity of the underlying provider virtual datacenter. At NewCo, this expansion must be requested by an organization; the corresponding vCenter Chargeback costs then increase automatically through regular synchronization by the Chargeback vCloud data collector.

4.2.1 Thin Provisioning


Thin provisioning will be leveraged by NewCo for templates and shadow virtual machine disk files to conserve storage. It is managed and configured from within vCloud Director at the organization virtual datacenter level and is used only for certain levels of service. See Section 3.1.3, Provider Virtual Datacenter Storage, for details.

4.2.2 Fast Provisioning


Fast provisioning is a feature in vCloud Director 1.5 that enables faster provisioning of vApps
through the use of vSphere linked clones. A linked clone uses the same base disk as the original,
with a chain of delta disks to keep track of the differences between the original and the clone.
Fast provisioning is enabled by default when allocating storage to an organization virtual
datacenter. If an organization administrator disables fast provisioning, all provisioning operations
result in full clones.
VMware recommends enabling or disabling fast provisioning on all organization virtual
datacenters (and in turn, all datastores) allocated to a provider virtual datacenter for both
manageability and chargeback purposes. For the same reasons, keep datastores separate for
fast provisioning and full clone vApp workloads. All organization virtual datacenters created from
the same dedicated provider virtual datacenter will have Enable Fast Provisioning selected.
Placement of virtual machine disk files in a vCloud environment is based on available free capacity across the datastores that are mapped to a provider virtual datacenter. For organization virtual datacenters that leverage fast provisioning, placement first considers the location of the base or shadow virtual machines, until the datastore reaches a preset disk space threshold, which is set for each datastore and enforces the amount of free space kept in the datastore. After this threshold is reached, the datastore is no longer considered a valid target for clone operations, regardless of where the new virtual machine's base or shadow disk is located.


4.3 Organization Networks

Organization networks are not defined in advance. Instead, they are created on demand during the formation of the organization virtual datacenter. During the creation of the organization and its first organization virtual datacenter, the service provider will allocate four internal or vApp networks and one Internet-routable address backed by VCD-NI.

4.4 Catalogs

The service provider catalog contains NewCo-specific templates that are made available to all
organizations/business units. NewCo will make a set of catalog entries available to cover the
classes of virtual machines, templates, and media, as specified in the Public VMware vCloud
Service Definition.
For the initial implementation, a single cost model will be created using the following fixed cost
pricing and chargeback model.
Table 28. NewCo Fixed-Cost Cost Model

Virtual Machine Configuration    Price
1 vCPU and 512MB RAM             $248.00
1 vCPU and 1GB RAM               $272.00
1 vCPU and 2GB RAM               $289.00
2 vCPUs and 2GB RAM              $308.00
1 vCPU and 3GB RAM               $315.00
2 vCPUs and 3GB RAM              $331.00
1 vCPU and 4GB RAM               $341.00
2 vCPUs and 4GB RAM              $354.00
4 vCPUs and 4GB RAM              $386.00
1 vCPU and 8GB RAM               $461.00
2 vCPUs and 8GB RAM              $477.00
4 vCPUs and 8GB RAM              $509.00
4 vCPUs and 16GB RAM             $681.00

4.5 Users/Roles

By default, only one user is created during the onboarding of an organization: the system administrator. All other roles, including additional system administrators, are managed by the primary system administrator by importing users into the public vCloud via LDAP synchronization.

5. vCloud Security
Security is critical for any company. The following sections address host, network, vCenter, and
vCloud Director security considerations.

5.1 Host Security

ESXi will be configured with a strong root password stored following corporate password procedures. ESXi lockdown mode will be enabled to prevent root access to the hosts over the network, and appropriate security policies and procedures will be created and enforced to govern the systems. Because the hosts cannot be accessed directly over the network while in lockdown mode, sophisticated host-based firewall configurations are not required.

5.2 Network Security

Virtual switch security settings will be set as follows.


Table 29. Virtual Switch Security Settings

Function               Management Cluster    Resource Groups
Promiscuous Mode       Reject                Reject
MAC Address Changes    Reject                Reject
Forged Transmits       Reject                Reject
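These settings can also be applied consistently from the command line on each host. The following is a minimal sketch using the ESXi 5.0 esxcli namespace for standard vSwitches; the vSwitch name is a placeholder, and distributed switch policies are configured from vCenter Server instead:

# Reject promiscuous mode, MAC address changes, and forged transmits
# on a standard vSwitch (repeat for each vSwitch on each host).
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
    --allow-promiscuous=false --allow-mac-change=false --allow-forged-transmits=false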

5.3 vCenter Security

vCenter Server is installed using a local administrator account. When vCenter Server is joined to a domain, any domain administrator gains administrative privileges to vCenter. VMware recommends removing this potential security risk by creating a new vCenter Administrators group in Active Directory, assigning it the vCenter Server Administrator role, and then removing the Local Administrators group from that role. By default, members of the vCloud system administrators group are not associated with the vCenter Administrators group.


5.4 VMware vCloud Director Security

Standard Linux hardening guidelines need to be applied to the VMware vCloud Director virtual machine. There is no need for local users, and the root password is only needed during installation and upgrades of the VMware vCloud Director binaries. Additionally, certain network ports must be open for vCloud Director use. For additional information, see the vCloud Director Administrator's Guide (https://www.vmware.com/support/pubs/vcd_pubs.html).
vCloud Director 1.5 introduces a configurable account lockout feature: at the system level, accounts can be configured to lock for a specified number of minutes after a specified number of failed login attempts. By default, the lockout feature is not enabled, but NewCo has chosen to enable it so that system administrators are locked out for 10 minutes after five failed login attempts. This feature is also available for organization accounts and can be requested during organization onboarding.

5.5 Additional Security Considerations

The following are examples of use cases that require special security considerations:

• End-to-end encryption from a guest virtual machine to its communication endpoint, including encrypted storage via encryption in the guest OS and/or the storage infrastructure.
• Provisioning of user accounts and/or access control from a single console.
• Need to control access to each layer of a hosting environment (rules and role-based security requirements for an organization).
• vApp requirements for secure traffic and/or VPN tunneling from a vShield Edge device at any network layer.


6. vCloud Management
6.1 vSphere Host Setup Standardization

Host profiles can be used to automatically configure network, storage, security, and other features. This capability, along with automated installation of ESXi hosts, is used to standardize all host configurations.
VM Monitoring is enabled at the cluster level within HA and uses the VMware Tools heartbeat to verify that a virtual machine is alive. When a virtual machine fails and the VMware Tools heartbeat is no longer updated, VM Monitoring checks whether any storage or network I/O has occurred over the last 120 seconds; if not, the virtual machine is restarted.
VMware recommends enabling both VMware HA and VM Monitoring on the management cluster and the resource groups.

6.2 vCloud Center of Excellence

The vCloud Center of Excellence (vCOE) model is an extension of the VMware Center of
Excellence model that has been used by many organizations of various sizes to facilitate the
adoption of VMware technology and to reduce the complexity of managing a VMware virtual
infrastructure. The vCloud Center of Excellence model defines cross-domain vCloud
Infrastructure Management accountability and responsibility within team roles across an
organization. These team roles enable an organization to consistently measure, account for, and
improve the effectiveness of its vCloud infrastructure management even if its IT Service
Management roles and responsibilities are distributed across multiple IT functional areas. See
Operating a VMware vCloud for more information about the vCOE.

6.3 vCloud Logging

Logging is one of the key components of any infrastructure. It provides audit trails for user logins and logouts, among other important functions. Logging records events on servers, helps diagnose problems, and detects unauthorized access. In some cases, regular log analysis and scrubbing will proactively stave off problems that could become critical to vCloud operations.
NewCo utilizes a centralized, redundant syslog system for all management virtual machines and applications for error analysis and compliance. Logs captured in syslog are readily available for analysis for 60 days and available via archive for a minimum of 12 months.

Figure 6. vCloud Logging Organization


6.3.1 VMware vCloud Director Logging


Two external syslog servers are configured and used as a duplicate destination for the events that are logged locally by vCloud Director. Each vCloud Director cell has been configured to redirect syslog messages to the syslog servers with the following changes to %VCLOUD%/etc/log4j.properties. This is in addition to importing the copy of the responses.properties and global.properties files during the initial installation of subsequent cells.

log4j.appender.vcloud.system.syslog=org.apache.log4j.net.SyslogAppender
log4j.appender.vcloud.system.syslog.syslogHost=127.0.0.1
log4j.appender.vcloud.system.syslog.facility=LOCAL3
log4j.appender.vcloud.system.syslog.layout=com.vmware.vcloud.logging.CustomPatternLayout
log4j.appender.vcloud.system.syslog.layout.ConversionPattern=%d{ISO8601} | %8.8p | %-25.50t | %-30.50c{1} | %m | %x%n
log4j.appender.vcloud.system.syslog.threshold=INFO

Save the file and restart the vCloud Director cell using service vmware-vcd restart.
To enable centralized logging in all the vCloud Director cells, repeat the procedure for each cell.
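Because every cell needs the identical change, the edit-and-restart cycle can be scripted. A minimal sketch, assuming passwordless SSH to each cell and the default install path (cell names and path are placeholders):

# Push the modified log4j.properties to each cell and restart it, one at a time.
VCLOUD=/opt/vmware/vcloud-director   # install path; adjust to the environment
for cell in vcd-cell1 vcd-cell2; do
    scp log4j.properties "$cell:$VCLOUD/etc/log4j.properties"
    ssh "$cell" 'service vmware-vcd restart'
done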
6.3.1.1. Syslog Configuration
Depending on your network architecture, it may be valuable to transmit logs to multiple hosts for redundancy. Syslog is UDP-based and stateless, so wherever network failure is possible, log transmission is not guaranteed. Some redundancy can be achieved by setting the syslog targets in %VCLOUD%/etc/global.properties and %VCLOUD%/etc/responses.properties to 127.0.0.1 (localhost), and then modifying /etc/syslog.conf to retransmit those syslog messages elsewhere, allowing logs to be sent to two targets. For example, the following line could be placed at the top of the syslog.conf file:

*.*    @ip.address.syslog.host

This assumes all logs are wanted. VCD event logs are logged at the user.notice facility and level. If you redirect things such as the debug and info logs, those facilities are specified in the log4j.properties file.
Such a configuration will also transmit all other logs received by syslog, regardless of facility. Because the logs received from VCD are considered remote (they are sent via a network socket to localhost), the file /etc/sysconfig/syslog must also be modified to give syslogd the correct startup parameters. This line:

SYSLOGD_OPTIONS="-m 0"

can be modified to:

SYSLOGD_OPTIONS="-r -h -x -m 0"

which instructs syslogd to accept logs remotely and to re-forward logs received from remote sources. The -x flag disables name lookups, which can prevent syslogd from consuming extra resources on name resolution.
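Putting the pieces together, the relay configuration on each cell might look like the following sketch; the two hostnames match the central syslog servers named in Section 6.3.2 and are assumptions:

# /etc/syslog.conf: forward everything received, including the VCD logs
# sent to localhost, to both central syslog hosts.
*.*    @mgmt-syslog1.example.com
*.*    @mgmt-syslog2.example.com

# /etc/sysconfig/syslog: accept remote logs (-r), re-forward them (-h),
# and skip name lookups (-x).
SYSLOGD_OPTIONS="-r -h -x -m 0"

Restarting the syslog service (service syslog restart) applies the change.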

Table 30. vCloud Director Log Locations

VCD Debug Logs
  Location:   %VCLOUD%/logs/*
  Host(s):    VCD cells
  Collection: %VCLOUD%/logs/vcloud-container-debug.log and vcloud-container-info.log are redirected to syslog by modifying %VCLOUD%/etc/log4j.properties

VCD Syslog Events
  Location:   Sent via syslog
  Host(s):    VCD cells
  Collection: Syslog targets are configured in %VCLOUD%/etc/global.properties and %VCLOUD%/etc/responses.properties

VCD System Logs
  Location:   Standard Linux log locations (/var/log/messages and /var/log/secure)
  Host(s):    VCD cells
  Collection: Syslog targets via syslog.conf, or agent retrieval

API Web Access Logs
  Location:   %VCLOUD%/logs/*.request.log, organized by date (for example, 2010_08_09.request.log)
  Host(s):    VCD cells
  Collection: Periodic retrieval

6.3.2 vSphere Host Logging


Remote logging to a central host provides a way to greatly increase administration capabilities.
Gathering log files on a central server facilitates monitoring of all hosts with a single tool as well
as enabling aggregate analysis and the ability to search for evidence of coordinated attacks on
multiple hosts.
Within each ESXi host, Syslog behavior is managed by leveraging esxcli. These settings
determine the central logging host that will receive the Syslog messages. The hostname must be
resolvable using DNS.
For this initial implementation, all of the NewCo management and resource hosts are configured to send log files to two central syslog servers residing in the management cluster. Requirements for this configuration are:

• Syslog.Local.DatastorePath: A location on a local or remote datastore, and the path where logs are saved. Has the format [datastorename] directory/filename, which maps to /vmfs/volumes/<datastorename>/<directory>/<filename>. The default location is []/scratch/log/messages. For more information on scratch, see Creating a persistent scratch location for ESXi (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033696).

• Syslog.Remote.Hostname: A remote server's DNS name or IP address where logs are sent using the syslog protocol. DNS name for syslog: mgmt-syslog.example.com.

• Syslog.Remote.Port: A remote server's UDP port where logs are sent using the syslog protocol. The default is port 514.

#Configure NewCo syslog servers.
esxcli system syslog config set --default-rotate 20 --loghost udp://mgmt-syslog1.example.com:514,udp://mgmt-syslog2.example.com:1514

#Configure ESXi logs to send to syslog.
esxcli system syslog config logger set --id=hostd --rotate=20 --size=2048
esxcli system syslog config logger set --id=vmkernel --rotate=20 --size=2048
esxcli system syslog config logger set --id=fdm --rotate=20
esxcli system syslog config logger set --id=vpxa --rotate=20
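After the loggers are configured, the running syslog service can pick up the changes and the ESXi firewall can be opened for outbound syslog traffic. A minimal sketch, again using the ESXi 5.0 esxcli namespaces:

#Reload the syslog configuration and allow outbound syslog traffic.
esxcli system syslog reload
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true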

6.3.3 vCenter Orchestrator Logging


vCenter Orchestrator uses Apache log4j, which allows for granular logging at runtime without
modifying the application. The target of the log output is configured by default to a set of files, but
has been routed instead to the NewCo Syslog servers.
The log configuration file is named log4j.xml and is located in install_directory\app-server\server\vmo\conf. The following section was added to enable logs to be routed to syslog, with X.X.X.X changed to the syslog server and LOCAL2 replaced with the correct log facility.
<!-- ============================== -->
<!-- Append messages to the syslog-->
<!-- ============================== -->
<appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">
<param name="SyslogHost" value="X.X.X.X"/>
<param name="Facility" value="LOCAL2"/>
<param name="FacilityPrinting" value="true"/>
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern"
value="%t %5r %-5p %-21d{yyyyMMdd HH:mm:ss,SSS} %c{2} [%x] %m %n"/>
</layout>
</appender>

The <root></root> section must include:


<appender-ref ref="SYSLOG"/>

Each of the vCenter Orchestrator application server components and plug-in adapters provides logs at different levels, including fatal, error, warning, info, and debug.
During normal operations, it is recommended to use the default info level to maximize server performance while still keeping fairly detailed information. Using the debug level for a single component, or for all server components, is recommended for troubleshooting.
When troubleshooting vCenter Orchestrator with vCloud Director, setting the vCloud Director plug-in to debug mode logs the REST calls and responses. This can be changed in the following section:
<category additivity="true" name="com.vmware.vmo.plugin.vcloud">
<priority value="DEBUG"/>
</category>

During overall server troubleshooting, it is recommended to set the server and all of its components to debug mode. This can be done by changing the log level in the following three sections.
In the section:
<appender class="org.jboss.logging.appender.RollingFileAppender" name="FILE">

Update this line as follows:


<param name="Threshold" value="DEBUG"/>

In the section:
<appender class="org.apache.log4j.ConsoleAppender" name="CONSOLE">

Update this line as follows:


<param name="Threshold" value="DEBUG"/>

In the section:
<!-- VMware vCO -->

Update this section as follows:


<category additivity="true" name="ch.dunes">
<priority value="DEBUG"/>
</category>

6.4 vCloud Monitoring

Monitoring a vCloud instance gives the service provider insight into the health of its vCloud services, helping to meet SLAs and to provide proactive notice of any potential capacity shortfalls. vCloud management systems can be monitored using vFabric Hyperic as a single dashboard, integrated with agents installed on the management virtual machines.
Table 31. VMware vCloud Director Monitoring Items

Scope                     Item
System                    Leases, quotas, limits, CPU, memory, network IP address pool, storage free space, vSphere resources
Virtual Machines/vApps    Not in scope


6.4.1 vCloud Director


As of VMware vCloud Director 1.0, monitoring is performed via custom queries to VMware vCloud Director using the Admin API to capture summary consumption data on organization virtual datacenters, through MBeans, and through standard Linux and JMX monitoring services running on the vCloud Director guest. Some of the components in VMware vCloud Director can also be monitored by aggregating the syslog-generated logs from the different vCloud Director cells on the centralized log server.
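As an illustration of the Admin API approach, the queries below fetch summary consumption data for one organization virtual datacenter. This is a sketch against vCloud API version 1.5; the hostname, credentials, and vDC identifier are placeholders discovered from earlier API responses:

# Authenticate as a system administrator; the x-vcloud-authorization
# response header carries the session token used on subsequent calls.
curl -ki -X POST -u 'administrator@System:password' \
    -H 'Accept: application/*+xml;version=1.5' \
    https://vcd.example.com/api/sessions

# Retrieve the organization vDC, including its allocation and usage elements.
curl -k -H 'Accept: application/*+xml;version=1.5' \
    -H 'x-vcloud-authorization: <token from the sessions call>' \
    https://vcd.example.com/api/admin/vdc/<vdc-id>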

6.4.2 vCenter Server


Multiple vCenter Servers are present within a vCloud instance to manage the virtual resources
within the management and resource groups. Use of vCenter alerts and health of the vpxd
service provides notice when vCenter Servers are resource constrained or have suffered a fault.
Use of the Hyperic plug-in for vSphere enables detailed monitoring for the vCenter Servers, ESXi
hosts, and virtual machines running within them. See the Monitoring vSphere Components guide
(http://support.hyperic.com/display/DOC/Monitoring+vSphere+Components) for more information
about available objects and alerts.

6.4.3 vCenter Orchestrator


The vCenter Orchestrator application server is monitored using Java Management Extensions (JMX). To enable JMX for a vCenter Orchestrator server started as a service:

1. Create a jmxremote.password file in install_directory\app-server\server\vmo\conf. The file takes the form of a username and a password separated by a space on each line, for example: monitor password.

2. Create a jmxremote.access file in install_directory\app-server\server\vmo\conf. The file takes the form of a username and a permission separated by a space on each line, for example: monitor readonly.

3. Secure each of the jmxremote files. On Windows, follow the Oracle Java documentation: How to Secure a Password File on Microsoft Windows Systems (http://download.oracle.com/javase/6/docs/technotes/guides/management/security-windows.html). On non-Windows systems, simply use the following command:

chmod 600 jmxremote.password jmxremote.access

Note: Each of the jmxremote files must be accessible only to its owner. Final security properties on a Windows server will show Full Access for the owner (by default the Administrators group) and no other users, groups, or SYSTEM listed for access. Additionally, Windows Explorer adds a lock icon next to the filename.

4. Edit install_directory\app-server\bin\wrapper.conf by adding the lines below after the line "wrapper.java.additional.9 ...":

wrapper.java.additional.10=-Dcom.sun.management.jmxremote.authenticate=true
wrapper.java.additional.11=-Dcom.sun.management.jmxremote.password.file=../server/vmo/conf/jmxremote.password
wrapper.java.additional.12=-Dcom.sun.management.jmxremote.access.file=../server/vmo/conf/jmxremote.access
wrapper.java.additional.13=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional.14=-Dcom.sun.management.jmxremote.port=1099
wrapper.java.additional.15=-Djavax.management.builder.initial=org.jboss.system.server.jmx.MBeanServerBuilderImpl
wrapper.java.additional.16=-Djboss.platform.mbeanserver

Note: The numbers in wrapper.java.additional.## must be in order, without any gap between them; otherwise the wrapper ignores them. If the last existing property is wrapper.java.additional.10, shift all of the above properties by 1, starting at wrapper.java.additional.11.

JMX monitoring will be available after restarting the vCenter Orchestrator server and is set up
during the initial vCloud deployment.
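On non-Windows installations, steps 1 through 3 above can be scripted. A minimal sketch, assuming the server is installed under /opt/vco (a hypothetical path) and the read-only "monitor" user from the examples:

# Create the JMX credential and access files used in steps 1-3.
cd /opt/vco/app-server/server/vmo/conf
printf 'monitor password\n' > jmxremote.password
printf 'monitor readonly\n' > jmxremote.access
chmod 600 jmxremote.password jmxremote.access   # owner-only access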
6.4.3.1. Testing of the Monitoring
JConsole is a GUI application included with the Java Development Kit (JDK) that is designed for monitoring Java applications.
The jconsole executable is in JDK_HOME/bin, where JDK_HOME is the directory where the JDK is installed. If this directory is in the system path, the tool can be started by typing jconsole at a command (shell) prompt.
JConsole lists local processes and also provides the option to connect to a remote process using the hostname:port syntax.
After connecting, you can monitor the memory, threads, and managed beans.
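For example, to attach to the vCenter Orchestrator server on the JMX port configured earlier (the hostname is a placeholder; the credentials come from jmxremote.password):

# Connect JConsole to the remote vCO JMX endpoint.
jconsole vco01.example.com:1099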
Table 32 lists a subset of the MBeans that can be used for monitoring the performance of a vCenter Orchestrator instance.
Table 32. vCenter Orchestrator Monitored MBeans

Workflow Execution
  MBean:       ch.dunes.workflow.engine.mbean.WorkflowEngine
  Description: Statistics about active workflows.
  Attributes:
    ExecutorsActiveCount    Number of currently active workflows.
    ExecutorsQueueSize      Number of workflows queued.

Web Views
  MBean:       jboss.web:type=Cache,host=[hostname],path=/vmo
  Description: Web view statistics.
  Attributes:
    accessCount               Number of accesses to the cache.
    cacheMaxSize              Maximum size of resources that will have their content cached.
    cacheSize                 Current size of the cache.
    desiredEntryAccessRatio   Entry hit ratio at which an entry will never be removed from the cache.
    hitsCount                 Number of cache hits.

Apache Tomcat Global Request Processor Metrics
  MBean:       jboss.web:type=GlobalRequestProcessor,name=http-[hostname]-[port]
  Description: Apache Tomcat global request processor metrics.
  Attributes:
    bytesSent        Bytes sent by all the request processors running on the Apache Tomcat container.
    bytesReceived    Bytes received by all the request processors running on the Apache Tomcat container.
    processingTime   Total processing time (in milliseconds) since startup.
    errorCount       Error count on all the request processors running on the Apache Tomcat container.
    maxTime          Maximum time it took to process a request.
    requestCount     Request count on all the request processors running on the Apache Tomcat container.

vCO Web Service
  MBean:       jboss.web:type=Manager,path=/vmware-vmo-webcontrol,host=[hostname]
  Description: vCO web service session statistics.
  Attributes:
    activeSessions            Number of currently active sessions.
    expiredSessions           Number of sessions that have expired.
    maxActive                 Maximum number of sessions that have been active at the same time.
    processingTime            Total processing time (in milliseconds) since startup.
    sessionAverageAliveTime   Average time (in seconds) that expired sessions had been alive.
    sessionCounter            Total number of sessions created by this manager.
    sessionMaxAliveTime       Longest time (in seconds) that an expired session had been alive.

WebViewEngine
  MBean:       jboss.web:j2eeType=Servlet,name=VSOWebViewEngine,WebModule=//localhost/vmo,J2EEApplication=none,J2EEServer=none
  Description: Web view engine servlet statistics.
  Attributes:
    maxTime               Maximum time taken (in milliseconds) for processing a request.
    processingTime        Total processing time (in milliseconds) since startup.
    sessionMaxAliveTime   Longest time (in seconds) that an expired session had been alive.
    requestCount          Total number of requests served since startup.

Web Thread Pool
  MBean:       jboss.web:type=ThreadPool,name=http-[hostname]-[port]
  Description: Apache Tomcat thread pool statistics.
  Attributes:
    currentThreadCount   Number of threads created on the Apache Tomcat container.
    currentThreadsBusy   Number of busy threads on the Apache Tomcat container.


7. Extending vCloud
7.1 Hybrid vCloud

A hybrid vCloud is a vCloud infrastructure composed of two or more vCloud instances (private or public) that remain unique entities but are bound together by standardized technology that enables data and application portability (for example, cloudbursting for load balancing resources between vCloud instances). NewCo allows organizations to extend their existing private virtual environments into the vCloud through cloudbursting and IPsec VPNs between organizations.
See Hybrid VMware vCloud Use Case for details on how private and public vCloud instances can
be associated with each other.

7.2 vCloud Connector

VMware vCloud Connector (vCC) is an appliance that allows vSphere administrators to move virtual machines from vSphere environments, or vApps from one vCloud to a remote vCloud. The origination and destination vClouds can each be private or public. Figure 7 provides an overview of the communication protocols between vCloud Connector and vCloud Director.
Figure 7. vCloud Connector

[Figure: the vCC appliance is registered with the on-premise vCenter Server and surfaced through the vCC plug-in for the vSphere Client; it communicates via REST APIs with vCloud Director (and its CB server) in the on-premise private vCloud, in an off-premise private vCloud, and in a public vCloud.]

7.2.1 vCloud Connector Design Considerations

• vCloud Connector requires vSphere 4.0 or later and administrative privileges.
• The appliance requires a dedicated static IP address.
• It is recommended that the appliance reside on the same subnet as vCenter Server.
• Ports 80, 443, and 8443 must be open on any firewall to allow communication between vCenter and the vCloud Connector appliance (a quick reachability check is sketched below).
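One way to validate the firewall path before deploying is to probe the required ports from the subnet where the appliance will live. A minimal sketch, assuming the netcat utility is available and vcenter.example.com is a placeholder:

# Check TCP reachability of the ports vCloud Connector depends on.
for port in 80 443 8443; do
    nc -zv vcenter.example.com "$port"
done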

Table 33. vCloud Connector Components

VMware Component                           Description
VMware vSphere 5.0                         VMware virtualization software.
vCloud Connector (vCC) appliance           vCC appliance deployed on vSphere.
vSphere Client                             The vCC plug-in installs in the vSphere Client.
Microsoft Internet Explorer 7 or higher    Required for the vSphere plug-in.

7.3 vCloud API

There are two ways to interact with a vCloud Director cell: via the browser-based UI or via the vCloud API. The browser-based UI has limited customization capability, so to enhance the user experience a service provider or enterprise may want to write its own portal that integrates with vCloud Director. To enable such integration, the vCloud API provides a rich set of calls into VMware vCloud Director.
The vCloud API is REST-like (which allows for loose coupling of services between the server and the consumer), is highly scalable, and uses the HTTP/S protocol for communication. The API calls are grouped into three sections based upon the functionality they provide and the type of operation.
Several options are available for implementing a custom portal using the vCloud API: VMware vCloud Request Manager, VMware vCenter Orchestrator, or third-party integrators. Some of these may require customization to design workflows that satisfy customer requirements.
Figure 8 shows a use case where a service provider has exposed a custom portal to end users on the Internet.
Figure 8. vCloud API Logical Representation

End users log into the portal with a valid username and password and can select a predefined workload (from a catalog list) to deploy. The user's selection, in turn, initiates a custom workflow that deploys the requested catalog item (vApp or media) in the vCloud.
Currently, the vCloud API is available in the form of a vCloud SDK with Java, C#, and PHP language bindings.
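The portal flow described above maps onto a small number of REST calls. A minimal sketch against vCloud API version 1.5, where the organization, user, and hostname are placeholders:

# Log in as an organization user; the x-vcloud-authorization response
# header carries the session token for later calls.
curl -ki -X POST -u 'portaluser@NewCoOrg1:password' \
    -H 'Accept: application/*+xml;version=1.5' \
    https://vcd.example.com/api/sessions

# Reuse the token to list the organizations visible to this user; the
# returned links lead to catalogs and vApp templates for deployment.
curl -k -H 'Accept: application/*+xml;version=1.5' \
    -H 'x-vcloud-authorization: <token from the sessions call>' \
    https://vcd.example.com/api/org/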

7.4 VMware vCenter Orchestrator

Because vCloud Director leverages core vSphere infrastructure, automation is possible through
vCenter Orchestrator. vCenter Orchestrator provides out-of-the-box workflows that can be
customized to automate existing manual tasks. Administrators can use sample workflows from a
standard workflow library that provides blueprints for creating additional workflows, or create their
own custom workflows.
vCenter Orchestrator integrates with vCloud Director through a vCloud Director plug-in that
communicates via the vCloud API. vCenter Orchestrator can also orchestrate workflows at the
vSphere level through a vSphere plug-in, if necessary.
Figure 9. vCloud Orchestration

7.4.1 Managing vCloud Resources


vCenter Orchestrator provides numerous plug-ins for managing a vCloud environment, from the compute and storage layer through vSphere and the vCloud Director instance. NewCo utilizes workflows contained in the plug-ins to assist with adding vCloud compute capacity and with customer management (including onboarding, offboarding, and user management) to reduce the operational burden on the vCloud system administrator.


7.4.2 vCenter Orchestrator Active-Passive Node Configuration


High availability for vCenter Orchestrator can be used to reduce downtime during a planned service maintenance event or an unplanned service outage on one or all of the following components:

• vCenter Orchestrator application server.
• vCenter Orchestrator database.
• vCenter Orchestrator host operating system.
• vCenter Orchestrator host virtual machine.
• Cluster hosting the virtual machine.
• Datacenter/site failure.

The vCenter Orchestrator server application is a Windows service that can be controlled with scripts using the command-line interface. The server application is stateless; the workflows and their state are stored in a database. The server application implements checkpointing, so it can resume running workflows from their saved state. Only one vCenter Orchestrator server application node can run per database. The application server has a local file-based configuration required to start the service and connect to the orchestrated systems.
When making vCenter Orchestrator highly available, the first thing to implement is multi-master or master-slave database replication. The node arrangement, also called cold standby, provides a fully redundant instance of each node that is brought online only when its associated primary node fails. As long as a copy of the database is available, a vCenter Orchestrator application server with the appropriate configuration can resume workflow operations. The database vendor's best practices must be followed to implement database high availability.
This is the configuration that best suits vCenter Orchestrator. A third-party clustering application can be set up to check server availability (for example, by monitoring the web service) and, upon failure, stop the primary node and start one of the secondary nodes.
This requires that all of the vCenter Orchestrator application servers have the same plug-ins installed and, except for the IP address in use, the same configuration.
This can be done by having each node maintain its own copy of the cluster configuration data. The configuration on the nodes can be initially set using the vCenter Orchestrator web configuration application and exported manually to the other nodes, and then, upon configuration change, updated using file replication scripts on the <vCO Installation Folder>\app-server\server\vmo\conf and <vCO Installation Folder>\app-server\server\vmo\plugins directories.
Alternatively, installing vCenter Orchestrator on the quorum drive is possible, but requires scripting the update of the IP address configuration of the Orchestrator application server (in <vCO Installation Folder>\app-server\bin\boot.properties) for the new host as part of the automated failover and failback operations.

The first approach is recommended because, compared to the alternative, it:

• Allows recovery when the application server file structure integrity is compromised.
• Guarantees optimum performance for geographically dispersed clusters.
• Improves traceability by separating the file-based logs.
• Permits resumption of availability in thirty seconds to two minutes (the time required for a vCenter Orchestrator server to start).


8. vCloud Metering
To track resource metrics for vCloud entities, vCenter Chargeback sets allocation units on the
imported vCloud Director hierarchies based on the allocation model configured in vCloud
Director. Table 34 shows which allocation units are set.
Table 34. Allocation Units for vCloud Hierarchies Based on Allocation Model

Entity                             Pay-As-You-Go            Allocation Pool          Reservation Pool
Organization virtual datacenter    None                     CPU, Memory, Storage     CPU, Memory, Storage
vApp                               None                     None                     None
Virtual machine                    vCPU, Memory, Storage    vCPU, Memory, Storage    vCPU, Memory, Storage
Template                           Storage                  Storage                  Storage
Media file                         Storage                  Storage                  Storage
Network                            DHCP, NAT, Firewall,     DHCP, NAT, Firewall,     DHCP, NAT, Firewall,
                                   Count of Networks        Count of Networks        Count of Networks

8.1 Cost Models

Installing the vCloud Director and vShield Manager data collectors also creates default cost models and billing policies that integrate with vCloud Director and vShield Manager. Billing policies control the costs assessed for the resources used. Default vCloud Director billing policies charge based on allocation for vCPU, memory, and storage. Costs can be charged on an hourly, daily, weekly, monthly, quarterly, biannual, or yearly basis.
Instead of modifying the default billing policies and cost models, make copies and modify the duplicates. For more information, see the vCenter Chargeback User's Guide (http://www.vmware.com/support/pubs/vcbm_pubs.html) for vCenter Chargeback version 1.6.2.

Rate factors allow the scaling of base costs for a specific chargeable entity. Example use cases include:

• Promotional rate: A service provider offers new clients a 10% discount. Instead of modifying base rates in the cost model, apply a 0.9 rate factor to reduce the base costs for the client by 10% (for example, a $100.00 monthly base cost bills as $90.00).
• Rates for unique configurations: A service provider decides to charge clients for special infrastructure configurations, using a rate factor to scale costs.

VM instance costing assigns a fixed cost to a hard bundle of vCPU and memory. This option is only available with the Pay-As-You-Go allocation model. Use VM instance costing to create a fixed-cost matrix for different virtual machine bundles.

8.2 Reporting

vCenter Chargeback generates cost, usage, and comparison reports for hierarchies and entities. The vCenter Chargeback API provides the capability to export reports to XML, and developers use XSLT to transform the raw XML into a format supported by the customer's billing system. Reports run from the vCenter Chargeback user interface are available in PDF and XLS formats. Service accounts with read-only privileges have been created to run reports from the vCenter Chargeback UI or API.
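A transform step in that pipeline could be as simple as the following sketch; the stylesheet and file names are hypothetical, and xsltproc is one common XSLT processor:

# Convert a report exported via the Chargeback API into the
# billing system's import format.
xsltproc billing-format.xsl exported-report.xml > billing-import.csv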

8.3 Metering Internet Traffic

Internet traffic is network traffic that extends beyond the vCloud environment to the Internet. For routed external organization networks, Internet traffic is the traffic sent and received through the vShield appliance. vCenter Chargeback pulls network metrics sent through vShield Edge devices (send and receive) from vShield Manager.
A usage model bills for network bandwidth use by applying base rates to the Network Received and Network Transmit metrics. This is the default billing policy type.
A fixed cost-based cost model allows billing for different types of Internet services and usage based on an agreed-upon fixed price. Example fixed costs include:

• Monthly fixed rate for a specified bandwidth cap: Instead of charging for actual usage, the client is billed a fixed fee for Internet usage through the creation of a fixed cost in vCenter Chargeback.
• Basic monthly fixed costs on top of Internet usage (application monitoring tools and reports supplied by a solution provider).
• Additional fixed costs incurred due to upfront infrastructure (for example, a new router for the client). Figure 10 provides an example of a one-off router cost of $150.

Figure 10. One Time Router Cost Example (vCenter Chargeback UI)

Additional chargeable vCloud resources include:

• Count of networks that belong to an organization or a vApp.
• NAT service.
• DHCP service.
• Firewall service.
• VPN service.
• External network bandwidth, measured in Mbps per hour.

8.4 Aggregator Reporting

Public vCloud providers under the VMware Service Provider Program (VSPP) are required to
report on the hourly virtual machine vRAM usage within the resource groups. NewCo has
deployed the vCloud Usage Meter to meter the resource groups and report back vRAM usage to
the aggregator on the fifth of every month. vRAM data collected from the resource groups is kept
on file or within the vCloud Usage Meter database for a minimum of 12 months in the event of an
audit by the aggregator or VMware.


Appendix A: Bill of Materials


The inventory and specifications of the components comprising the vCloud are provided in the following tables.
Table 35. Management Cluster Inventory

ESXi hosts
  Quantity:   Chassis: 3; blades per chassis: 1
  Processors: 2-socket Intel Xeon X5650 (6 core, 2.6 GHz Westmere)
  Memory:     96GB
  Version:    vSphere 5.0 (ESXi)

vCenter Server (Management)
  Type: virtual machine; Guest OS: Windows 2008 R2 x86_64
  2 vCPUs, 4GB memory, 1 vNIC, primary disk (C:): 40GB
  Version: 5.0

vCenter Server (Resource)
  Type: virtual machine; Guest OS: Windows 2008 R2 x86_64
  4 vCPUs, 8GB memory, 1 vNIC, primary disk (C:): 40GB
  Version: 5.0

Database Server (vCenter Server, vCenter Update Manager, vCloud Director, and vCenter Chargeback databases)
  Type: virtual machine; Guest OS: Windows 2008 R2 x86_64
  4 vCPUs, 16GB memory, 1 vNIC, primary disk (C:): 40GB, data disk: 20GB per database
  Database: SQL Server 2008 R2

VMware vCloud Director
  Type: virtual machine; Guest OS: RHEL 5
  2 vCPUs, 4GB memory, 1 vNIC
  Version: vCloud Director 1.5

vShield Manager
  Type: virtual appliance
  1 vCPU, 4GB memory, 1 vNIC
  Version: 5.0

vCenter Chargeback Server
  Type: virtual machine; Guest OS: Windows Server 2008 x64
  2 vCPUs, 4GB memory, 1 vNIC
  Version: 1.6.2

Domain Controllers (AD)
  Type: virtual machines; Guest OS: Windows Server 2008 R2 x64
  1 vCPU, 4GB memory, 1 NIC
  Version: Active Directory 2008

Table 36. vCloud Resources Inventory

ESXi hosts (Vendor X compute resource)
  Quantity:   Chassis: 6; blades per chassis: 1
  Blade type: Vendor X Blade Type Y
  Processors: 2-socket Intel Xeon X5670 (6 core, 2.9 GHz Westmere)
  Memory:     96GB
  Version:    vSphere 4.1 (ESXi)

vCenter Server
  Same as management cluster.

Storage
  FC SAN array, VMFS
  LUN sizing: 500GB
  RAID level: 5
