© 2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international
copyright and intellectual property laws. This product is covered by one or more patents listed at
http://www.vmware.com/download/patents.html.
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other
jurisdictions. All other marks and names mentioned herein may be trademarks of their respective
companies.
VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304
www.vmware.com
Contents
1. Overview .......................................................................................... 7
1.1 Business Requirements................................................................................................. 7
1.2 Use Cases ..................................................................................................................... 8
1.3 Document Purpose and Assumptions ........................................................................... 9
1.4 vCloud Components .................................................................................................... 10
1.5 Abstractions and VMware vCloud Constructs ............................................................. 11
2. vSphere Design
3.
4.
5. vCloud Security
6. vCloud Management
7. Extending vCloud
8. vCloud Metering
List of Figures
Figure 1. VMware vCloud Director Abstraction Layer ................................................................... 11
Figure 2. vSphere Logical Architecture Overview ......................................................................... 14
Figure 3. vCloud Physical Design Overview ................................................................................. 15
Figure 4. vSphere Logical Network Design - Management Cluster ................................................ 21
Figure 5. vSphere Logical Network Design ................................................................................... 25
Figure 6. vCloud Logging Organization ......................................................................................... 40
Figure 7. vCloud Connector ........................................................................................................... 49
Figure 8. vCloud API Logical Representation ............................................................................... 50
Figure 9. vCloud Orchestration ...................................................................................................... 51
Figure 10. One Time Router Cost Example (vCenter Chargeback UI) ......................................... 56
List of Tables
Table 1. Initial vCloud Capacity ....................................................................................................... 8
Table 2. Document Sections ........................................................................................................... 9
Table 3. vCloud Components ........................................................................................................ 10
Table 4. vCloud Components ........................................................................................................ 13
Table 5. vCenter Servers ............................................................................................................... 15
Table 6. Management Virtual Machines ........................................................................................ 16
Table 7. Management Component Resiliency .............................................................................. 17
Table 8. vSphere Clusters - Management Cluster ......................................................................... 18
Table 9. vSphere Management Cluster DRS Rules ...................................................................... 19
Table 10. Host Logical Design Specifications - Management Cluster .......................................... 20
Table 11. Virtual Switch Configuration - Management Cluster ..................................................... 21
Table 12. Virtual Switch Configuration Settings - Management Cluster ....................................... 22
Table 13. Shared Storage Logical Design Specifications - Management Cluster ........................ 22
Table 14. Resource Group Clusters .............................................................................................. 23
Table 15. vSphere Cluster Configuration - vCloud Resources ..................................................... 23
Table 16. Host Logical Design Specifications ............................................................................... 24
Table 17. Virtual Switch Configuration - vCloud Resources ......................................................... 24
Table 18. dvResSwitch01 Teaming and Failover Policies ............................................................ 25
Table 19. dvResSwitch01 Security Policies .................................................................................. 26
Table 20. dvResSwitch01 General Policies .................................................................................. 26
Table 21. Storage Logical Design Specifications - vCloud Compute Cluster .............................. 26
Table 22. vSphere Clusters - vCloud Compute Datastores .......................................................... 27
Table 23. Datastore Size Estimation Factors - res-pod1 Cluster ................................................. 28
Table 24. Provider Virtual Datacenter Specifications .................................................................... 31
Table 25. Virtual Machine Sizing and Distribution ......................................................................... 31
Table 26. Provider External Network Specifications ..................................................................... 33
Table 27. System Administrator User and Group.......................................................................... 34
Table 28. NewCo Fixed-cost Cost Model ...................................................................................... 36
Table 29. Virtual Switch Security Settings ..................................................................................... 37
Table 30. vCloud Director Log Locations ...................................................................................... 42
Table 31. VMware vCloud Director Monitoring Items .................................................................... 44
Table 32. vCenter Orchestrator Monitored MBeans ...................................................................... 46
Table 33. vCloud Connector Components .................................................................................... 50
Table 34. Allocation Units for vCloud Hierarchies Based on Allocation Model ............................. 54
Table 35. Management Cluster Inventory ..................................................................................... 57
Table 36. vCloud Resources Inventory ......................................................................................... 59
1. Overview
This public VMware vCloud implementation example uses a fictitious corporation, New
Company (NewCo), as a vehicle for a detailed implementation of a public VMware vCloud. It is
intended to give architects and engineers who are interested in implementing a public vCloud a
reference implementation that conforms to VMware best practices, and it describes the logical
and physical design and implementation of the components of a VMware vCloud. Each document
section elaborates on different aspects and key design decisions of this vCloud solution. This
implementation example provides a baseline that can be extended for future usage patterns.
1.1 Business Requirements
The NewCo vCloud implementation has the following characteristics (see the Public VMware
vCloud Service Definition for additional details):
- 600 customers/organizations, each with:
  - 1 public-routed network.
  - 1 internal network.
- 5% of virtual machines sized at 16GB/4 vCPU.
- Parallel operations:
  - 10 OVF uploads.
- SLA:
  - Use of virtual hardware version 8 allows for larger virtual machines than are
    defined in this section. NewCo determined that the use cases for the vCloud do
    not warrant larger virtual machines at this time.
1.2 Use Cases
The target use case for this vCloud environment includes, but is not limited to, transient
workloads normally observed with:
- Software development.
1.3 Document Purpose and Assumptions
This document is intended to serve as a reference for service providers and assumes familiarity
with VMware products, including VMware vSphere, VMware vCenter, VMware vCloud Director,
VMware vShield, VMware vFabric, and VMware vCenter Chargeback. It covers both logical and
physical design considerations for all VMware vCloud infrastructure components, with each
section elaborating on different aspects and key design decisions of a public vCloud
implementation.
Public vCloud architecture topics are covered in the document sections listed in Table 2.
Table 2. Document Sections
Section
Description
1. Overview
2. vSphere Design
5. vCloud Security
6. vCloud Management
7. Extending vCloud
8. vCloud Metering
This document is not intended as a substitute for VMware product documentation. See the
installation and administration guides as well as published best practices for the appropriate
product for further information.
1.4 vCloud Components
Description
VMware vSphere
VMware vShield
See Architecting a VMware vCloud for additional information about the vCloud components and
options for planning, deployment, and configuration.
2011 VMware, Inc. All rights reserved.
Page 10 of 59
1.5 Abstractions and VMware vCloud Constructs
Key features of the vCloud architecture are resource pooling, abstraction, and isolation. VMware
vCloud Director further abstracts the virtualized resources presented by vSphere by providing the
following logical constructs that map to vSphere logical resources:
- Organization - A logical object that provides a security and policy boundary. Organizations
  are the main method of establishing multitenancy and typically represent a business unit,
  project, or customer in a private vCloud environment.
- Provider virtual datacenter - vSphere resource groupings of compute, storage, and network
  that power organization virtual datacenters.
2. vSphere Design
2.1 Architecture Overview
The vSphere design separates resources into the following:
- A management cluster containing all core components and services needed to run the
  vCloud.
- One resource group that represents dedicated resources for vCloud consumption. Each
  compute cluster of ESXi hosts is managed by a vCenter Server and is under the control of
  VMware vCloud Director. Multiple compute clusters can be managed by the same VMware
  vCloud Director instance as additional capacity or service offerings are added.
Reasons for organizing and separating vSphere resources along these lines are:
- Provides resource isolation between workloads running in the vCloud and the actual systems
  used to manage the vCloud.
- Separates the management components from the resources they are managing.
- Resources allocated for vCloud use have little reserved overhead. For example, vCloud
  resources would not host vCenter virtual machines.
- vCloud resources can be consistently and transparently managed, carved up, and scaled
  horizontally.
Syslog-ng.
These components map to the management cluster as noted in Section 2.3, Management Cluster
Design. For a complete bill of materials, see Appendix A: Bill of Materials.
2.2 Site Considerations
There is enough floor space, power, and cooling capacity for the management group and the
resource groups to both reside within a single physical datacenter, and scale to support 5,000
virtual machines, as defined in the requirements.
Table 5. vCenter Servers

vCenter         Datacenter   Purpose
mgmt-vc1        mgmt-newco
res1-newco-vc1  res1-newco
2.3 Management Cluster Design
The vSphere management cluster design encompasses the ESXi hosts contained in the
management group. The scope is limited to only the infrastructure components used to operate
vCloud resource group workloads. The virtual machines that run in the management group are
listed in Table 6.
Table 6. Management Virtual Machines
Virtual Machine
Purpose
mgmt-vcd1
mgmt-vcd2
res1-newco-vc1
mgmt-vc1
res-newco-vsm1
mgmt-newco-vsm1
mgmt-lb1
mgmt-lb2
mgmt-vsm1
mgmt-vco1
mgmt-vco2
mgmt-ad1
mgmt-ad2
mgmt-dns1
mgmt-dns2
mgmt-ipam
mgmt-hyperic
mgmt-syslog1
mgmt-syslog2
mgmt-vma
mgmt-cb1
mgmt-um1
mgmt-mssql1
mgmt-mssql2
res-mssql1
res-mssql2
Component                    HA Enabled  VM Monitoring  FT   vCenter Heartbeat  Clustered
vCenter Server               Yes         Yes            No   Yes                No
VMware vCloud Director       Yes         Yes            No   NA                 No
vCenter Chargeback Server    Yes         Yes            No   NA                 No
vShield Manager              Yes         Yes            Yes  NA                 No
                             Yes         Yes            No   NA                 Yes
VMware vCenter Orchestrator  Yes         Yes            No   NA                 Yes
Active Directory             Yes         Yes            No   NA                 No
VMware Data Recovery         Yes         Yes            No   NA                 No
Multiple vCloud Director cells are deployed to:
- Increase the number of simultaneous consoles accessible via the console proxy service.
- Increase the number of vCenter Server operations carried out (assuming that the number of
  vCenter Servers scales along with the number of VCD cells).
- Allow for upgrade and maintenance of the vCloud Director cells without having to disable the
  public vCloud service.
Specification
Cluster Name
Mgmt-NewCo1
Fully automated
3 (of 5)
Yes
VMware HA Percentage
33% CPU
33% Memory
N/A
Yes
Medium
Type
Description
Mgmt-LDAP
Resource vCenter
Mgmt-DNS
Mgmt-SYSLOG
Mgmt-VCD
Mgmt-Chargeback
Resource-vCenterDB
Mgmt-LB
Mgmt-VCO
Resource-VSM
Specification
Processors
Storage
NFS Storage
Networking
Memory
96GB
Separate networks for vSphere management, virtual machine connectivity, VMware vSphere
Redundant dvSwitch ports with at least two active physical NIC adapters each.
Redundancy across different physical adapters to protect against NIC or PCI slot failure.
Switch Type
dvMgmtSwitch0
Distributed
Function
# of Physical NIC
Ports
Management
console
VMKernel vMotion
VMKernel NFS
Fault Tolerance
Virtual machine
Figure 4 shows the virtual network infrastructure design for the vSphere management cluster.
Figure 4. vSphere Logical Network Design - Management Cluster
Setting             Port Group       Value
Load Balancing      All
Failover Detection  All              Link status
Notify Switches     All              Enabled
Failback            All              No
Failover Order      Management
                    vMotion
                    NFS
                    Fault Tolerance
Storage paths will be redundant at the host (connector), switch, and storage array levels.
All hosts in the management cluster have access to the same volumes, but are isolated from
datastores in the resource cluster.
Specification
5 dedicated
Volume Size
1TB
2.4 Resource Group Design
The resource group design represents the ESXi host clusters and infrastructure used to run the
vApps that are provisioned and managed by vCloud Director. In this section the scope is further
limited to only the infrastructure dedicated to the vCloud workloads.
Purpose
res-newco1
Specification
16
Fully automated.
3 (of 5)
Yes
VMware HA Percentage
7% CPU
7% memory
N/A
Specification
Processors
Storage
NFS
Networking
Memory
96GB
Separate networks for virtual machine connectivity and specific VMKernel port groups.
Maintain isolation of organizations from other VLANs across physical and virtual networking
infrastructure.
Redundancy across different physical adapters to protect against NIC or PCI slot failure.
Switch Type
Function
# of NIC Ports
vdResSwitch01
Distributed
Network pools
External networks
Setting             Port Group       Value
Load Balancing      All
Failover Detection  All              Link status
Notify Switches     All              Enabled
Failback            All              No
Failover Order      Management
                    vMotion
                    External Network
                    Virtual Machine
                    NFS
                    Fault Tolerance
Setting              Port Group  Value
Promiscuous Mode     All         Reject
MAC Address Changes  All         Reject
Forged Transmits     All         Reject
Setting       Value
Port binding  Ephemeral - no binding
Port binding  Ephemeral - no binding
Storage paths will be redundant at the host, switch, and storage array levels.
Specification
Cluster
res-pod1
100
LUN Size
500GB
Datastore size estimation is based on the following factors:
- Typical virtual machine size (including configuration files, logs, swap files, and snapshot
  files).
- VMFS metadata.
- Maximum requirement for IOPS and throughput per LUN, which depends on the storage
  array and design.
- Maximum RTO if a LUN is lost, that is, the backup and restore design.
If we approach this from an average I/O profile, it would be tempting to create all LUNs the same,
say as RAID 5, and let the law of averages take care of I/O distribution across all the LUNs and
the virtual machines on those LUNs. Another approach is to create LUNs with different RAID
profiles based on anticipated workloads to provide differentiated levels of service. These levels of
service are represented at the vSphere level by an HA/DRS cluster and its associated mapped
storage and network objects. The vCloud logical design maps provider virtual datacenters to
these clusters. To achieve the desired levels of service, NewCo will start with one underlying
vSphere vCloud compute cluster with dedicated storage. Additional vCloud compute clusters will
be attached to their own dedicated storage as they are added.
Table 22. vSphere Clusters - vCloud Compute Datastores

Cluster Name  Datastores   Quantity  RAID  Size
res-pod1      res-pod1-xx  100             500GB
As a starting point, VMware recommends RAID 5 storage profiles, creating storage tier-specific
provider virtual datacenters only as one-offs to address specific organization or business unit
requirements.
Value
15
40GB
2GB
Safety Margin
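The surviving sizing values above (15, 40GB, 2GB, and a safety margin whose value did not survive in this copy) can be combined into a rough datastore size estimate. The mapping of values to factors and the 20% margin used below are assumptions for illustration, not figures from this design:

```shell
# Assumed mapping of the surviving sizing values (labels were lost):
vms_per_datastore=15   # virtual machines per datastore
vm_size_gb=40          # typical VM size (config files, logs, swap, snapshots)
overhead_gb=2          # per-VM VMFS metadata and overhead
margin_pct=20          # safety margin; actual value not recoverable, 20% assumed

raw_gb=$((vms_per_datastore * (vm_size_gb + overhead_gb)))
est_gb=$((raw_gb * (100 + margin_pct) / 100))
echo "estimated datastore size: ${est_gb}GB"   # 756GB under these assumptions
```

The estimate is sensitive to the assumed margin and per-VM overhead; the actual LUN size chosen for this design remains the 500GB stated above.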
For example:
- Source and destination volumes for Storage vMotion should both reside within the same
  provider virtual datacenter or vSphere cluster.
- For provider virtual datacenters that leverage fast provisioning, linked clones become full
  clones when virtual machines are migrated using Storage vMotion.
3.1
A vSphere cluster will scale to 32 hosts (typically 8-12 hosts is a good starting point, allowing for
future growth), allowing for up to 14 clusters per vCenter Server (the limit is bound by the
maximum number of hosts per datacenter) and an upper limit of 10,000 virtual machines (a
vCenter limit).
The recommendation provided in Architecting a VMware vCloud is to start with 50% of the
maximum cluster size and add hosts to the cluster as dictated by tenant consumption. When
utilization of the total compute resources across the resource group for the cluster reaches 60%,
VMware recommends deploying a new provider virtual datacenter. This provides for growth within
the provider virtual datacenter for the existing organizations/business units without necessitating
their migration as utilization nears a cluster's maximum resources.
As an example, a fully loaded resource group will contain 14 provider virtual datacenters and up
to 350 ESXi hosts, giving an average virtual machine consolidation ratio of 26:1, assuming a 5:1
ratio of vCPU:pCPU. To increase this ratio, NewCo would need to increase the vCPU:pCPU ratio
that it is willing to support. The risk associated with an increase in CPU over-commitment is
mainly degraded overall performance, which can result in higher than acceptable vCPU ready
times. The vCPU:pCPU ratio is based on the amount of CPU over-commitment for the available
cores with which NewCo is comfortable. For virtual machines that are not busy, this ratio can be
increased without any undesirable effect on virtual machine performance. Monitoring vCPU ready
times helps identify whether the ratio needs to be increased or decreased on a per-cluster basis.
A 5:1 ratio is a good starting point for a multi-core system.
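The capacity arithmetic above can be sanity-checked against the document's own figures; this sketch only restates numbers already given (14 provider virtual datacenters, up to 350 hosts, 26:1 consolidation, a 5,000-VM target, and the 10,000-VM vCenter limit):

```shell
# Figures from the paragraph above.
clusters=14   # provider virtual datacenters, one vSphere cluster each
hosts=350     # ESXi hosts in a fully loaded resource group
ratio=26      # average VM consolidation ratio per host

echo "hosts per cluster: $((hosts / clusters))"   # 25, within the 32-host cluster maximum
echo "total VM capacity: $((hosts * ratio))"      # 9100, above the 5000-VM design target
```

The resulting 9,100 virtual machines also stay below the 10,000-VM vCenter Server limit cited earlier.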
A provider virtual datacenter can map to only one vSphere cluster, but can map to multiple
datastores and networks.
Multiple provider virtual datacenters are used to map to different types/tiers of resources:
- Compute - This is a function of the mapped vSphere clusters and the resources that back
  them.
- Storage - This is a function of the underlying storage types of the mapped datastores.
- Networking - This is a function of the mapped vSphere networking in terms of speed and
  connectivity.
Multiple provider virtual datacenters are created for the following reasons:
- The vCloud requires more compute capacity than a single vSphere cluster provides (a
  vSphere resource pool cannot span vSphere clusters).
- Tiered storage is required; each provider virtual datacenter maps to datastores on storage
  with different characteristics.
VMware recommends assessing workloads to assist in sizing. The following is a sample sizing
table that can be used as a reference for future design activities. Virtual machine distribution is
based on the percentages outlined in the service offering initial target with a maximum of 5000
virtual machines.
Table 25. Virtual Machine Sizing and Distribution

Virtual Machine Size  Distribution  Number of Virtual Machines
1 vCPU/1GB RAM        40%           2000
1 vCPU/2GB RAM        20%           1000
1 vCPU/4GB RAM        25%           1250
2 vCPU/8GB RAM        10%           500
4 vCPU/16GB RAM       5%            250
Total                 100%          5000
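The distribution above can be cross-checked with simple arithmetic. The vCPU and RAM totals are derived here for illustration only; the source table states just the VM counts and percentages:

```shell
# Per-size-class totals from Table 25: count, count * vCPUs, count * GB RAM.
total_vms=$((2000 + 1000 + 1250 + 500 + 250))
total_vcpu=$((2000*1 + 1000*1 + 1250*1 + 500*2 + 250*4))
total_ram_gb=$((2000*1 + 1000*2 + 1250*4 + 500*8 + 250*16))

echo "VMs:   $total_vms"         # 5000, matching the stated maximum
echo "vCPUs: $total_vcpu"        # 6250 vCPUs at full build-out
echo "RAM:   ${total_ram_gb}GB"  # 17000GB of configured guest memory
```

These aggregate figures are the inputs a capacity planner would compare against the host counts and the 5:1 vCPU:pCPU ratio discussed in Section 3.1.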
3.2 External Networks
A vCloud external network is a logical construct that maps directly to a vSphere port group that
has multiple vmnic uplinks to a physical network. This construct represents an external
connection for communication in and out of the vCloud. NewCo provides each organization with a
guarantee of one routable organization external network, with the ability to request additional
organization external networks as needed.
Table 26. Provider External Network Specifications
Attribute
Specification
600
4096 or /20
More than one vCenter Server is required to manage 600 networks under a VCD-NI pool.
Additional vCenter Servers will be added as VCD-NI network pools are exhausted.
3.3 Network Pools
Network pools are a vCloud Director construct representing a preconfigured, vCloud-controlled
pool of Layer 2 isolated networks that are automatically used to provide isolation between
different organizations, or even between vApps within an organization. Aside from the Layer 2
isolation function, network pools also enable self-service by abstracting the complicated
underlying networking configuration from the application owner.
NewCo will provide the following sets of network pools based on need:
VLAN-backed.
For the VCD-NI pool VMware recommends the transport VLAN (VLAN ID: 1254) be a VLAN that
is not in use within the infrastructure. This is for increased security and isolation.
3.4 Users/Roles
For security purposes, the system administrators are a separate role and log in to a different
context than the vCloud consumers who exist within an organization. As a provider construct, the
system administrator role has the ability to modify all organizations within the vCloud, as well as
create and configure objects that vCloud consumers cannot. The System Administrator role
should be reserved for a limited group of system administrators. Because this role has the ability
to create and destroy objects, as well as make configuration changes that can negatively impact
multiple organizations, users who possess this role should be knowledgeable about storage,
networking, virtualization, and vCloud. The design calls for a single local account (cloudadmin) to
be used as a backup for accessing VCD. The primary access method will be managed by adding
members to the cloudadmins LDAP group.
User/Group         Type        Role
cloudadmin         Local user  System Administrator
NewCo\cloudadmins  LDAP group  System Administrator
4.1 Organizations
Except for the service provider default organization, whose primary function is publishing vApps
and media for consumption by tenants, new organizations are created on demand and are not
defined in this section.
4.2
4.3 Organization Networks
Organization networks are not defined in advance. Instead, they are created on demand during
the formation of the organization virtual datacenter. During the creation of the organization and its
first organization virtual datacenter, the service provider will allocate four internal or vApp
networks and one internet-routable address backed by VCD-NI.
4.4 Catalogs
The service provider catalog contains NewCo-specific templates that are made available to all
organizations/business units. NewCo will make a set of catalog entries available to cover the
classes of virtual machines, templates, and media, as specified in the Public VMware vCloud
Service Definition.
For the initial implementation, a single cost model will be created using the following fixed-cost
pricing and chargeback model.
Table 28. NewCo Fixed-cost Cost Model
Virtual Machine
Configuration
Price
$248.00
$272.00
$289.00
$308.00
$315.00
$331.00
$341.00
$354.00
$386.00
$461.00
$477.00
$509.00
$681.00
4.5 Users/Roles
By default, only one user is created during onboarding of an organization: the system
administrator. All other roles, including additional system administrators, are managed by the
primary system administrator by importing users into the public vCloud via LDAP synchronization.
5. vCloud Security
Security is critical for any company. The following sections address host, network, vCenter, and
vCloud Director security considerations.
5.1 Host Security
ESXi will be configured with a strong root password stored following corporate password
procedures. ESXi lockdown mode will be enabled to prevent root access to the hosts over the
network, and appropriate security policies and procedures will be created and enforced to govern
the systems. Because ESXi cannot be accessed over the network, sophisticated host-based
firewall configurations are not required.
5.2 Network Security
Setting
Promiscuous Mode
Forged Transmits
5.3 vCenter Security
vCenter Server is installed using a local administrator account. When vCenter Server is joined to
a domain, any domain administrator gains administrative privileges to vCenter. VMware
recommends removing this potential security risk by creating a new vCenter Administrators group
in Active Directory and assigning it to the vCenter Server Administrator role, making it possible to
remove the Local Administrators group from this role. By default, members of the vCloud System
Administrator group are not associated with the vCenter Administrators group.
5.4 vCloud Director Security
Standard Linux hardening guidelines need to be applied to the VMware vCloud Director virtual
machine. There is no need for local users, and the root password is only needed during
installation and upgrades of the VMware vCloud Director binaries. Additionally, certain network
ports must be open for vCloud Director use. For additional information, see the vCloud Director
Administrator's Guide (https://www.vmware.com/support/pubs/vcd_pubs.html).
vCloud Director 1.5 implements a new configurable account lockout feature: at the system level,
accounts can be configured to lock out for a specified number of minutes after a specified
number of failed login attempts. By default, the lockout feature is not enabled, but NewCo has
chosen to enable it so that system administrators are locked out for 10 minutes after five failed
login attempts. This feature is also available for organization accounts and can be requested
during organization onboarding.
5.5
The following are examples of use cases that require special security considerations:
End-to-end encryption from a guest virtual machine to its communication endpoint, including
encrypted storage via encryption in the guest OS and/or storage infrastructure.
Need to control access to each layer of a hosting environment (rules and role-based security
requirements for an organization).
vApp requirements for secure traffic and/or VPN tunneling from a vShield Edge device at any
network layer.
6. vCloud Management
6.1
Host profiles can be used to automatically configure network, storage, security, and other
features. This feature, along with automated installation of ESXi hosts, is used to standardize all
host configurations.
VM Monitoring is enabled at the cluster level within HA and uses the VMware Tools heartbeat to
verify that a virtual machine is alive. When a virtual machine fails and the VMware Tools
heartbeat stops updating, VM Monitoring checks whether any storage or networking I/O has
occurred over the last 120 seconds; if not, the virtual machine is restarted.
VMware recommends enabling both VMware HA and VM Monitoring on the management cluster
and the resource groups.
6.2
The vCloud Center of Excellence (vCOE) model is an extension of the VMware Center of
Excellence model that has been used by many organizations of various sizes to facilitate the
adoption of VMware technology and to reduce the complexity of managing a VMware virtual
infrastructure. The vCloud Center of Excellence model defines cross-domain vCloud
Infrastructure Management accountability and responsibility within team roles across an
organization. These team roles enable an organization to consistently measure, account for, and
improve the effectiveness of its vCloud infrastructure management even if its IT Service
Management roles and responsibilities are distributed across multiple IT functional areas. See
Operating a VMware vCloud for more information about the vCOE.
6.3 vCloud Logging
Logging is one of the key components in any infrastructure. Among other important functions, it
provides audit trails for user logins and logouts. Logging records various events on servers, helps
diagnose problems, and detects unauthorized access. In some cases, regular log analysis and
scrubbing will proactively stave off problems that could become critical to vCloud operations.
NewCo utilizes a centralized, redundant Syslog system for all management virtual machines and
applications for error analysis and compliance. Logs captured in Syslog are readily available for
analysis for 60 days and available via archive for a minimum of 12 months.
Save the file and restart the vCloud Director cell using service vmware-vcd restart.
To enable centralized logging in all the vCloud Director cells, repeat the procedure for each cell.
6.3.1.1. Syslog Configuration
Depending on your network architecture, it may be valuable to transmit logs to multiple hosts for
redundancy. Syslog is UDP-based and stateless, so wherever network failure can occur, log
transmission is not guaranteed. Some redundancy can be achieved by setting the syslog targets
in %VCLOUD%/etc/global.properties and %VCLOUD%/etc/responses.properties to
127.0.0.1 (localhost), and then modifying /etc/syslog.conf to retransmit those syslogs
elsewhere, allowing syslogs to be sent to two targets. For example, the following line could be
placed at the top of the syslog.conf file:
*.*     @ip.address.syslog.host
This assumes all logs are wanted. VCD event logs are logged at the user.notice facility and
level. If you redirect logs such as the Debug and Info logs, those facilities are specified in the
log4j.properties file.
Such a configuration will also transmit all other logs received by syslog, regardless of facility.
Because the logs received from VCD are considered remote (they are sent via a network socket
to localhost), the file /etc/sysconfig/syslog must also be modified to give syslogd the
correct startup parameters. This line:
SYSLOGD_OPTIONS="-m 0"
can be modified to:
SYSLOGD_OPTIONS="-r -h -x -m 0"
which instructs syslogd to accept logs remotely and to re-forward logs received from remote
sources. The -x flag disables name lookups, which can prevent syslogd from consuming lots of
extra resources on name resolution.
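Putting the pieces above together, a sketch of the resulting configuration on each cell's local syslogd, forwarding to the two redundant collectors (mgmt-syslog1 and mgmt-syslog2 from Table 6; the exact host names to use are an assumption, resolve them per your DNS design):

```
# /etc/syslog.conf on each vCloud Director cell:
# forward everything received locally (VCD events arrive at user.notice)
# to both remote collectors.
*.*     @mgmt-syslog1
*.*     @mgmt-syslog2

# /etc/sysconfig/syslog:
# -r accept remote logs, -h re-forward relayed logs, -x skip name lookups.
SYSLOGD_OPTIONS="-r -h -x -m 0"
```

With both forwarding lines in place, losing either collector does not interrupt log capture on the surviving one.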
Log                  Location                             Host(s)    Collection Method
VCD Debug Logs       %VCLOUD%/logs/*                      VCD cells  %VCLOUD%/logs/vcloud-container-debug.log and
                                                                     %VCLOUD%/logs/vcloud-container-info.log are
                                                                     redirected to syslog by modification of
                                                                     %VCLOUD%/etc/log4j.properties
VCD Syslog Events                                         VCD cells  %VCLOUD%/etc/global.properties and
                                                                     %VCLOUD%/etc/responses.properties
VCD System Logs                                           VCD cells
API Web Access Logs  %VCLOUD%/logs/*.request.log by date  VCD cells
Syslog.Remote.Port - A remote server's UDP port where logs are sent using the syslog
protocol. The default is port 514.
Each of the vCenter Orchestrator application server components and plug-in adapters provides
logs at different levels including fatal, error, warning, info, and debug.
During normal operations, use the default info level to maximize server performance while still
recording reasonably detailed information. Use the debug level for a single component, or for all
server components, when troubleshooting.
When troubleshooting vCenter Orchestrator with vCloud Director, setting the vCloud Director
plug-in to debug mode logs the REST calls and responses. In the log4j configuration file, in the
section marked:
<!-- VMware vCO -->
set the priority of the plug-in category to DEBUG:
<category additivity="true" name="com.vmware.vmo.plugin.vcloud">
<priority value="DEBUG"/>
</category>
The console appender is defined in the section beginning with
<appender class="org.apache.log4j.ConsoleAppender" name="CONSOLE">.
6.4
vCloud Monitoring
Monitoring a vCloud instance gives the service provider insight into the health of vCloud
services, helping to meet SLAs and providing proactive notification of potential capacity shortfalls.
vCloud management systems can be monitored using vFabric Hyperic as a single dashboard
integrated with agents installed on the management virtual machines.
Table 31. VMware vCloud Director Monitoring Items
Scope                     Item
System                    Leases, Quotas, Limits
vSphere Resources         CPU, Memory
Virtual Machines/vApps    Not in scope
Note
Each of the jmxremote files is accessible only to its owner. The final security properties on a
Windows server show Full Access for the owner (by default, the Administrators group), with no
other users, groups, or SYSTEM listed for access. Additionally, Windows Explorer adds a lock
icon next to the filename.
Note
JMX monitoring will be available after restarting the vCenter Orchestrator server and is set up
during the initial vCloud deployment.
6.4.3.1. Testing of the Monitoring
JConsole is a GUI application from the Java Development Kit that is designed for monitoring Java
applications.
The jconsole executable is in JDK_HOME/bin, where JDK_HOME is the directory where the
JDK is installed. If this directory is in the system path, the tool can be started by typing jconsole
at a command (shell) prompt.
JConsole lists local processes and provides the option to connect to a remote one using the
hostname:port syntax.
After connecting you can monitor the memory, threads, and managed beans.
Table 32 lists a subset of the MBeans that can be used for monitoring the performance of
a vCenter Orchestrator instance.
Table 32. vCenter Orchestrator Monitored MBeans

Workflow Execution
MBean: ch.dunes.workflow.engine.mbean.WorkflowEngine
Attributes:
ExecutorsActiveCount
ExecutorsQueueSize

Web Views
MBean: jboss.web:type=Cache,host=[hostname],path=/vmo
Description: Web view statistics
Attributes:
accessCount
cacheMaxSize
cacheSize
desiredEntryAccessRatio (entry hit ratio at which an entry will never be removed from the cache)
hitsCount

MBean: jboss.web:type=GlobalRequestProcessor,name=http-[hostname]-[port]
Attributes:
bytesSent (bytes sent by all the request processors running on the Apache Tomcat container)
bytesReceived
processingTime
errorCount
maxTime
requestCount

MBean: jboss.web:type=Manager,path=/vmware-vmo-webcontrol,host=[hostname]
Attributes:
activeSessions
expiredSessions
maxActive (maximum number of sessions that have been active at the same time)
processingTime
sessionAverageAliveTime (average time, in seconds, that expired sessions had been alive)
sessionCounter
sessionMaxAliveTime (longest time, in seconds, that an expired session had been alive)

WebViewEngine
MBean: jboss.web:j2eeType=Servlet,name=VSOWebViewEngine,WebModule=//localhost/vmo,J2EEApplication=none,J2EEServer=none
Attributes:
maxTime
processingTime
sessionMaxAliveTime (longest time, in seconds, that an expired session had been alive)
requestCount

MBean: jboss.web:type=ThreadPool,name=http-[hostname]-[port]
Attributes:
currentThreadCount
currentThreadsBusy
7. Extending vCloud
7.1
Hybrid vCloud
A hybrid vCloud is a vCloud infrastructure composed of two or more vCloud instances (private or
public) that remain unique entities but are bound together by standardized technology that
enables data and application portability (for example, cloudbursting for load balancing resources
between vCloud instances). NewCo allows organizations to extend their existing private virtual
environments into the datacenter through cloudbursting and IPsec VPN connections between
organizations.
See Hybrid VMware vCloud Use Case for details on how private and public vCloud instances can
be associated with each other.
7.2
vCloud Connector
VMware vCloud Connector (vCC) is an appliance that allows vSphere administrators to move
virtual machines from vSphere environments, or vApps from a vCloud, to a remote vCloud. The
origination and destination vCloud can each be a private or public vCloud. Figure 7 provides an
overview of the communication protocols between vCloud Connector and vCloud Director:
Figure 7. vCloud Connector
(The figure shows the vCloud Connector (VCC) appliance, registered with vCenter Server in the
on-premise vSphere environment, communicating over the REST APIs with vCloud Director cells
and their CB servers in the on-premise private vCloud, the off-premise private vCloud, and the
public vCloud.)
It is recommended that the appliance reside on the same subnet as vCenter Server.
Ports 80, 443, and 8443 must be open on any firewall to allow communication between
vCenter Server and the vCloud Connector appliance.
7.3
vCloud API
There are two ways to interact with a vCloud Director cell: the browser-based UI or the
vCloud API. The browser-based UI has limited customization capability. To enhance the user
experience, a service provider or enterprise may want to write its own portal that integrates with
vCloud Director. To enable such integration, the vCloud API provides a rich set of calls into
VMware vCloud Director.
vCloud APIs are REST-like (which allows for loose coupling of services between the server and
consumer), are highly scalable, and use the HTTP/S protocol for communication. The APIs are
grouped into three sections based on the functionality they provide and the type of operation.
There are several options available for implementing a custom portal with the vCloud API:
VMware vCloud Request Manager, VMware vCenter Orchestrator, or third-party
integrators. Some of these may require customization to design workflows that satisfy customer
requirements.
Figure 8 shows a use case where a service provider has exposed a custom portal to end users
on the Internet.
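As a sketch of how such a portal authenticates against the vCloud API: a REST client logs in by POSTing to the /api/sessions endpoint with HTTP Basic credentials in user@organization form and a version-qualified Accept header, then reuses the returned x-vcloud-authorization token on subsequent calls. The helper below only builds the request; the host, organization, and credential values are placeholders:

```python
import base64

API_VERSION = "1.5"  # vCloud API version targeted by this sketch

def build_login_request(host, user, org, password):
    """Return the URL and headers for a vCloud API login request.

    Credentials use the user@organization convention; the Accept
    header selects the API version. The caller POSTs this request
    and reads the x-vcloud-authorization token from the response
    for use on later calls.
    """
    creds = ("%s@%s:%s" % (user, org, password)).encode("ascii")
    headers = {
        "Authorization": "Basic " + base64.b64encode(creds).decode("ascii"),
        "Accept": "application/*+xml;version=" + API_VERSION,
    }
    return "https://%s/api/sessions" % host, headers

# Placeholder values for illustration only.
url, headers = build_login_request("vcd.example.com", "admin", "MyOrg", "secret")
print(url)
print(headers["Accept"])
```

A custom portal would wrap calls like this behind its own UI, mapping portal actions onto vCloud API requests.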
Figure 8. vCloud API Logical Representation
7.4
vCloud Orchestration
Because vCloud Director leverages core vSphere infrastructure, automation is possible through
vCenter Orchestrator. vCenter Orchestrator provides out-of-the-box workflows that can be
customized to automate existing manual tasks. Administrators can use sample workflows from a
standard workflow library that provides blueprints for creating additional workflows, or create their
own custom workflows.
vCenter Orchestrator integrates with vCloud Director through a vCloud Director plug-in that
communicates via the vCloud API. vCenter Orchestrator can also orchestrate workflows at the
vSphere level through a vSphere plug-in, if necessary.
Figure 9. vCloud Orchestration
The vCenter Orchestrator server application is a Windows service that can be controlled with
scripts using the command line interface. The vCenter Orchestrator server application is
stateless. The workflows and their state are stored in a database. The vCenter Orchestrator
server application implements checkpointing. It can resume running workflows from their saved
state. Only one vCenter Orchestrator server application node can run per database. The
application server has a local file-based configuration that is required to start the service and to
connect to the orchestrated systems.
When making vCenter Orchestrator highly available, the first thing to implement is multi-master or
master-slave database replication. A cold-standby configuration provides a fully redundant
instance of each node that is brought online only when its associated primary node fails. As
long as a copy of the database is available, a vCenter Orchestrator application server with the
appropriate configuration can resume workflow operations. Follow the specific database vendor's
best practices to implement database high availability.
This is the configuration that best suits vCenter Orchestrator. A third-party clustering application
can be set up to check server availability (for example, by monitoring the Web service) and,
upon failure, stop the primary node and start one of the secondary nodes.
This requires that all of the vCenter Orchestrator application servers have the same plug-ins
installed and, except for the IP address in use, the same configuration.
This can be done by having each node maintain its own copy of the cluster configuration data. The
configuration on the nodes can be set initially using the vCenter Orchestrator web configuration
application and exported manually to the other nodes, then, upon each configuration change,
updated using file replication scripts on the <vCO Installation Folder>\appserver\server\vmo\conf and <vCO Installation Folder>\appserver\server\vmo\plugins directories.
Alternatively, installing vCenter Orchestrator on the quorum drive is possible, but requires
scripting the update of the IP address configuration of the Orchestrator application server (in
<vCO Installation Folder>\app-server\bin\boot.properties) for the new host as
part of the automated failover and failback operations.
This approach:
Allows recovery when the application server file structure integrity is compromised.
Permits resumption of availability in thirty seconds to two minutes (the time required for a
vCenter Orchestrator server to start).
8. vCloud Metering
To track resource metrics for vCloud entities, vCenter Chargeback sets allocation units on the
imported vCloud Director hierarchies based on the allocation model configured in vCloud
Director. Table 34 shows which allocation units are set.
Table 34. Allocation Units for vCloud Hierarchies Based on Allocation Model
Entity                           Pay-As-You-Go          Allocation Pool        Reservation Pool
Organization virtual datacenter  None                   CPU, Memory, Storage   CPU, Memory, Storage
vApp                             None                   None                   None
Virtual machine                  vCPU, Memory, Storage  vCPU, Memory, Storage  vCPU, Memory, Storage
Template                         Storage                Storage                Storage
Media file                       Storage                Storage                Storage
Network                          DHCP, NAT, Firewall,   DHCP, NAT, Firewall,   DHCP, NAT, Firewall,
                                 Count of Networks      Count of Networks      Count of Networks
8.1
Cost Models
Installing vCloud Director and vShield Manager data collectors also creates default cost models
and billing policies that integrate with vCloud Director and vShield Manager. Billing policies
control costs assessed to resources used. Default vCloud Director billing policies charge based
on allocation for vCPU, memory, and storage. Costs can be assessed on an hourly, daily, weekly,
monthly, quarterly, biannual, or yearly basis.
Instead of modifying the default billing policies and cost models, make copies and modify the
duplicates. For more information, see the vCenter Chargeback User's Guide
(http://www.vmware.com/support/pubs/vcbm_pubs.html) for vCenter Chargeback version 1.6.2.
Promotional rate: A service provider offers new clients a 10% discount. Instead of modifying
base rates in the cost model, apply a 0.9 rate factor to reduce the base costs for the client by
10%.
Rates for unique configurations: A service provider decides to charge clients for special
infrastructure configurations using a rate factor to scale costs.
VM instance costing assigns a fixed cost to a hard bundle of vCPU and memory. This option is
only available with the Pay-As-You-Go allocation model. Use VM instance costing to create a
fixed cost matrix for different virtual machine bundles.
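The two mechanisms above reduce to simple arithmetic. In the sketch below, the base rates and bundle prices are hypothetical illustration values; only the 0.9 promotional rate factor comes from the example in the text:

```python
# Hypothetical hourly base rates for allocation-based charging.
BASE_RATES = {"vcpu": 0.05, "memory_gb": 0.02, "storage_gb": 0.001}

def allocation_cost(vcpus, memory_gb, storage_gb, rate_factor=1.0):
    """Hourly cost from base rates, scaled by a client's rate factor."""
    cost = (vcpus * BASE_RATES["vcpu"]
            + memory_gb * BASE_RATES["memory_gb"]
            + storage_gb * BASE_RATES["storage_gb"])
    return cost * rate_factor

# VM instance costing: a fixed hourly price per hard (vCPU, memory GB)
# bundle, available only with the Pay-As-You-Go allocation model.
INSTANCE_MATRIX = {(1, 4): 0.10, (2, 4): 0.15, (4, 8): 0.30}

standard = allocation_cost(2, 4, 100)                 # full base rates
promo = allocation_cost(2, 4, 100, rate_factor=0.9)   # 10% new-client discount
print(round(standard, 4), round(promo, 4))
print(INSTANCE_MATRIX[(2, 4)])
```

The rate factor scales the result of the cost model, so the base rates themselves never need to be edited per client.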
8.2
Reporting
vCenter Chargeback generates cost, usage, and comparison reports for hierarchies and entities.
The vCenter Chargeback API provides the capability to export reports to XML. Developers use
XSLT to transform the raw XML into a format supported by the customer's billing system. Reports
run from the vCenter Chargeback user interface are available in PDF and XLS format. Service
accounts with read-only privileges have been created to run reports from the vCenter Chargeback
UI or API.
8.3
Internet traffic is network traffic that extends beyond the vCloud environment to the Internet. For
routed external organization networks, Internet traffic is the traffic sent and received through the
vShield appliance. vCenter Chargeback pulls network metrics sent through vShield Edge devices
(send and receive) from vShield Manager.
A usage model bills for network bandwidth usage by applying base rates to the Network Received
and Network Transmit metrics. This is the default billing policy type.
A fixed cost-based cost model allows billing for different types of Internet services and usage
based on an agreed upon fixed price. Example fixed costs include:
Monthly fixed rate for a specified bandwidth cap: Instead of charging for actual usage, the
client is billed a fixed fee for Internet usage through the creation of a fixed cost in vCenter
Chargeback.
Basic monthly fixed costs on top of Internet usage (for example, application monitoring tools
and reports supplied by a solution provider).
Additional fixed costs incurred due to upfront infrastructure needs (for example, a new
router for the client). Figure 10 provides an example of a one-off router cost of $150.
NAT service.
DHCP service.
Firewall service.
VPN service.
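A fixed-plus-usage bill of the kind described above is straightforward to compute. In this sketch the per-GB rates and the recurring monthly fee are hypothetical; the $150 one-off router charge comes from the Figure 10 example:

```python
# Hypothetical per-GB rates applied to the vShield Edge network metrics.
RATE_PER_GB = {"network_received": 0.08, "network_transmit": 0.12}

def monthly_bill(received_gb, transmit_gb, fixed_monthly=50.0, one_off=0.0):
    """Usage charges plus recurring and one-time fixed costs."""
    usage = (received_gb * RATE_PER_GB["network_received"]
             + transmit_gb * RATE_PER_GB["network_transmit"])
    return usage + fixed_monthly + one_off

# First month for a client with the one-off $150 router cost.
total = monthly_bill(received_gb=200, transmit_gb=100, one_off=150.0)
print(round(total, 2))
```

In later months the one_off argument is dropped and only the recurring fee and metered usage remain.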
8.4
Aggregator Reporting
Public vCloud providers under the VMware Service Provider Program (VSPP) are required to
report on the hourly virtual machine vRAM usage within the resource groups. NewCo has
deployed the vCloud Usage Meter to meter the resource groups and report back vRAM usage to
the aggregator on the fifth of every month. vRAM data collected from the resource groups is kept
on file or within the vCloud Usage Meter database for a minimum of 12 months in the event of an
audit by the aggregator or VMware.
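The number reported to the aggregator is an aggregation of hourly vRAM samples. The sketch below is an illustration only, not the VSPP billing formula; it assumes usage is summarized as total vRAM GB-hours plus the average vRAM per reporting hour:

```python
def vram_report(hourly_vram_gb):
    """Aggregate hourly vRAM samples (GB allocated to powered-on VMs)."""
    gb_hours = sum(hourly_vram_gb)
    avg_gb = gb_hours / len(hourly_vram_gb) if hourly_vram_gb else 0.0
    return {"gb_hours": gb_hours, "average_gb": avg_gb}

# One day of samples: 24 GB of vRAM for 10 hours, then 16 GB for 14 hours.
samples = [24.0] * 10 + [16.0] * 14
report = vram_report(samples)
print(report["gb_hours"], round(report["average_gb"], 2))
```

Retaining the raw hourly samples for at least 12 months, as described above, allows figures like these to be recomputed during an audit.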
Quantity   Name/Description

ESXi host
  Chassis: 3, Memory: 96GB

vCenter Server (Management)
  2 vCPUs, 4GB memory, 1 vNIC, Version: 5.0

vCenter Server (Resource)
  4 vCPUs, 8GB memory, 1 vNIC, Version: 5.0

Database (vCenter Server, vCenter Update Manager, vCloud Director, vCenter Chargeback)
  4 vCPUs, 16GB memory, 1 vNIC

VMware vCloud Director
  2 vCPUs, 4GB memory, 1 vNIC, Version: 5.0

vShield Manager
  1 vCPU, 4GB memory, 1 vNIC

vCenter Chargeback Server
  2 vCPUs, 4GB memory, 1 vNIC, Version: 1.6.2

Domain Controllers (AD)
  1 vCPU, 4GB memory, 1 NIC
Quantity   Name/Description

ESXi host
  Chassis: 6, Memory: 96GB

vCenter Server

Storage
  FC SAN array, VMFS, RAID Level: 5