White Paper
Table of Contents
Executive Summary
Cluster Architecture Best Practices
High Availability Architecture for the OpenStack Control Plane
Use Host Aggregates to Abstract AZ Capabilities
Networking Considerations for HA Environments
Load Balancing Considerations
Considerations in Deploying Stateful Applications
Production-Grade Openstack LBaaS with Citrix NetScaler
General considerations for performance and availability
Deploying NetScaler ADC Services Across Multiple Availability Zones in OpenStack
General Best Practices
Deployment Considerations for Load Sharing Between Multiple NetScalers Across
Multiple AZs and Data Center
Mirantis/Citrix Integration in MOS
Conclusion
Next Steps
Resources
citrix.com
mirantis.com
Storage for VM boot disks, images, snapshots, and volumes is backed by a distributed storage
platform of choice (Ceph, NetApp, EMC VNX, etc.).
-- This enables Live Migration and Evacuation of VMs (triggered from the UI or API).
If Ceph -- the resilient, unified distributed storage system -- is used to back Cinder (volume storage),
Glance (image storage), and/or Swift (object storage API), it employs data replication to protect data held
by its Object Storage Daemon (OSD) components, and provides its own quorum-based HA for Ceph Monitor
components, which requires time synchronization among all three (or more) monitor nodes.
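Quorum-based HA here means that a majority of Ceph Monitors must remain mutually reachable for the cluster to keep operating; with three monitors, one failure is tolerated. The majority arithmetic, as a minimal Python sketch (illustrative only, not Ceph code):

```python
def quorum_size(monitor_count: int) -> int:
    """Smallest majority of a monitor set: floor(n/2) + 1."""
    return monitor_count // 2 + 1

def has_quorum(total_monitors: int, reachable_monitors: int) -> bool:
    """The cluster keeps serving only while a majority of monitors is reachable."""
    return reachable_monitors >= quorum_size(total_monitors)

# With three monitors, losing one still leaves quorum (2 of 3);
# losing two does not. This is why odd monitor counts are preferred:
# going from 3 to 4 monitors raises the quorum to 3 without
# tolerating any additional failures.
```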
Mapping Fault Domains to OpenStack Availability Zones
Once OpenStack has been deployed, you can demarcate fault domains by assigning the resources in
each domain to an OpenStack Availability Zone -- a user-visible name assigned to a Host Aggregate.
Host Aggregates/Availability Zones are defined via the Nova client CLI or equivalent REST calls, as
documented for OpenStack Juno. To summarize, you can create a host aggregate my_host_aggregate
exposed as an availability zone my_availability_zone by issuing the CLI command:
$ nova aggregate-create my_host_aggregate my_availability_zone
Then list aggregates and their IDs via:
$ nova aggregate-list
And add hosts (e.g., my_hostname) to the availability zone by referencing the ID of the
corresponding host aggregate:
$ nova aggregate-add-host <ID> my_hostname
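Under the hood, Nova's scheduler (via its AvailabilityZoneFilter) restricts instance placement to hosts belonging to an aggregate whose metadata carries the requested availability_zone. A simplified Python sketch of that mapping, reusing the names from the commands above (compute-02 is a hypothetical second host added for illustration; this is not Nova source code):

```python
# Aggregate metadata of the shape created by the nova CLI commands above.
# "compute-02" is an invented second host for illustration.
aggregates = {
    "my_host_aggregate": {
        "availability_zone": "my_availability_zone",
        "hosts": {"my_hostname", "compute-02"},
    },
}

def hosts_in_az(aggregates: dict, az: str) -> set:
    """Union of hosts from every aggregate tagged with the requested AZ."""
    hosts = set()
    for agg in aggregates.values():
        if agg.get("availability_zone") == az:
            hosts |= agg["hosts"]
    return hosts

# Booting a VM into my_availability_zone considers exactly these hosts,
# which is how an AZ demarcates a fault domain for scheduling.
```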
Figure 2. Citrix NetScaler integrates with OpenStack via a driver for OpenStack's LBaaS plugin, providing seamless control of
application-level load balancing through the Horizon web UI or the OpenStack Neutron client and REST APIs.
NetScaler Control Center (NCC) provides the following key benefits, which enable a cloud
consumption model for value-added NetScaler ADC features and make it easy for cloud
providers to offer any NetScaler ADC or security function as a cloud service:
-- Capacity pooling across all NetScaler infrastructure.
-- End-to-end automation across all NetScaler appliances.
-- Guaranteed SLAs through service-aware resource allocation.
-- Integration with OpenStack Keystone for single sign-on authentication.
-- Flexible placement algorithms for ADC policies.
-- Centralized visibility and reporting for operational statistics.
General considerations for performance and availability
Choice of physical vs. virtual appliances - NetScaler provides a wide choice of platforms that
can power Neutron LBaaS, ranging from physical and virtual to purpose-built multi-tenant
appliances. Customers are free to select among these platforms purely on the basis of
performance, scalability, and cost, without having to make any changes to the LBaaS offering
presented to their tenants. Customers that prefer the resiliency and reliability of purpose-built
hardware may opt for physical appliances, whereas those that prefer a purely software-defined
data center model will tend to adopt the virtual form factors.
Multi-tenancy isolation best practices - The use of shared infrastructure underpins the
economies of scale in a cloud offering. However, the question of how multiple tenants can be
hosted on the same shared infrastructure without compromising isolation and security needs
to be adequately addressed. NetScaler's Neutron LBaaS solution offers a wide choice of
multi-tenancy isolation mechanisms for the provider to choose from:
-- Fully dedicated instances for maximum isolation and independence - Designed for
mission-critical workloads, this isolation model allows fine-grained hard-walling of CPU, memory,
throughput, SSL capacity, and other critical resources for each tenant, and constitutes the
highest form of isolation.
-- Shared instances - Ideal for test/dev workloads, shared instances can host the ADC workloads
of multiple tenants and offer a cost-effective, best-effort solution for multi-tenancy.
-- Partition-based high-density multi-tenancy - Striking a balance between the isolation of
dedicated instances and the capacity efficiency of shared instances, NetScaler's
admin-partition-based multi-tenancy enables high density while still hard-walling certain
critical parameters such as throughput, connections, and memory.
Scalability and capacity on demand - NetScaler's industry-leading TriScale technology lets
OpenStack providers choose among several scalability options to increase capacity on demand:
-- Scale up - NetScaler supports a pay-as-you-grow licensing model on all its appliances, where
additional capacity can be unlocked on any device simply by applying the corresponding license.
-- Scale out - NetScaler's TriScale clustering technology allows as many as 32
nodes to be clustered together into a single logical NetScaler unit, with seamless
synchronization of operational and configuration data.
Multi-datacenter Architecture Best Practices
As noted previously, Mirantis uses native OpenStack segregation mechanisms to assemble
multi-site clouds consisting of independent OpenStack installations:
-- Regions to provide a shared UI and authentication for geo-dispersed datacenters.
-- Availability Zones to isolate fault domains within a datacenter (a rack, a power source, a
server room).
-- Host Aggregates to group compute nodes by arbitrary user-defined characteristics
(availability of directly attached SSD storage, server model, etc.).
Figure 3. Multi-region cloud in conventional configuration, without ADC. Controllers within each region are in HA configuration.
Compute elements are segregated in AZs circumscribing fault domains. Storage is highly available.
Figure 4. Multi-region OpenStack cloud with NetScaler ADC installed in each region, in highly-available configuration.
Deployment Considerations for Load Sharing Between Multiple NetScalers Across Multiple
AZs and Datacenters
As mentioned above, typical best practice entails deploying a NetScaler instance or HA pair per AZ,
with that instance load balancing traffic to local compute resources within the same AZ. The obvious
next question is how to distribute load across multiple AZs, on both the control and data planes.
Distributing the NetScaler data plane across multiple AZs: Distributing traffic on the data plane
always involves an external entity (such as an upstream router or switch) that distributes the traffic.
-- Using ECMP and RHI:
Equal Cost Multi-Path (ECMP) is a common Layer 3 traffic distribution mechanism supported
by all routers. A typical deployment has an upstream routing layer (such as a data center
core router) use ECMP to distribute traffic to NetScalers across multiple availability zones.
ECMP is based on a stateless hash mechanism that is flow-safe, ensuring that traffic from
the same flow is always processed by the same NetScaler.
-- Route Health Injection (RHI) is a mechanism that NetScaler supports to advertise the
availability of services running on an instance through dynamic routing protocols like OSPF. In
essence, RHI works by NetScaler injecting routes into OSPF for all the healthy services running
on the instance. When a service becomes unhealthy or goes down, the route is automatically
removed from the advertisements, and the upstream router will no longer direct traffic meant
for that service to that NetScaler instance.
ECMP and RHI together are a very popular choice for scale out architectures of NetScaler across
multiple availability zones.
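The interplay of ECMP and RHI can be sketched in a few lines of Python (illustrative only, not NetScaler or router code; the instance names and addresses are invented): RHI keeps the set of advertised next hops in step with service health, and ECMP's stateless flow hash picks one of the advertised hops, so every packet of a flow lands on the same healthy NetScaler.

```python
import zlib

def advertised_hops(health: dict) -> list:
    """RHI in miniature: only NetScalers whose service is healthy keep
    their route advertised; unhealthy ones are withdrawn upstream."""
    return sorted(ns for ns, healthy in health.items() if healthy)

def pick_next_hop(flow: tuple, hops: list) -> str:
    """ECMP in miniature: a stateless hash of the 5-tuple indexes into
    the advertised hop list, so packets of one flow take one path and
    the router keeps no per-flow state."""
    key = "|".join(str(field) for field in flow).encode()
    return hops[zlib.crc32(key) % len(hops)]

# Hypothetical instances, one per AZ; ns-az3's service is down:
health = {"ns-az1": True, "ns-az2": True, "ns-az3": False}
flow = ("10.0.0.7", 51512, "203.0.113.10", 443, "tcp")  # src ip/port, dst ip/port, proto
hop = pick_next_hop(flow, advertised_hops(health))  # deterministic per flow
```

Note that when a route is withdrawn the hop list changes and existing flows may rehash to a different instance, which is one reason the text recommends TriScale clustering for applications whose session state must survive such moves.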
-- Using GSLB for multi-DC deployments: For deployments that span multiple
datacenters (and perhaps multiple geographical regions), NetScaler's Global Server Load
Balancing (GSLB) solution is best suited for distributing traffic across data centers.
Each data center can in turn have multiple availability zones, and again, ECMP + RHI is an
effective solution for balancing load across those availability zones.
-- TriScale clustering as a scale out solution for stateful applications: Some applications need
persistence and user session state to be preserved and synced across nodes in different
availability zones. For this type of application, TriScale clustering offers a fully stateful and
operationally streamlined scale-out solution that guarantees even distribution of load across
multiple AZs.
Distributing the NetScaler control plane across multiple AZs: In an OpenStack environment,
NetScaler Control Center (NCC) constitutes the centralized control plane for NetScaler
appliances. NCC's scalability and availability model needs to be carefully thought through
when designing your OpenStack cloud architecture.
A general best practice is to have a one-to-one correspondence between an NCC instance and an
OpenStack controller node.
-- Deploying NCC for HA within a region: Customers can deploy multiple NCC
instances in a high-availability configuration to form a single logical control plane per region.
This HA deployment of the control plane has reachability to all NetScaler instances
running across multiple AZs within a region, similar to the way the OpenStack controller HA
cluster manages resources across multiple AZs.
-- Multi-region considerations: The typical best practice for multi-region architectures is to have
completely separate OpenStack deployments in each region or geographically dispersed data
center. For NetScaler ADC services, this corresponds to completely separate NCC deployments
managing and controlling the NetScaler appliances within each region.
Mirantis/Citrix Integration in MOS
Deploying a highly available OpenStack cluster (with optional Ceph support) is simplified by
Mirantis OpenStack, whose wizard-driven Fuel installer can auto-deploy OpenStack with HA and/or
Ceph. For PoCs or testing, Fuel can deploy HA and Ceph components on a single Controller node
(HA is not fully functional in this case) and then deploy additional Controllers into the cluster
(HA becomes active when three or more Controllers are deployed). For more information about
Mirantis OpenStack and Fuel, see the Mirantis OpenStack Planning Guide and User Guide.
NetScaler ADC (all platforms, physical and virtual) is certified by Mirantis to interoperate with
Mirantis OpenStack. A runbook for integrating NetScaler with MOS has been produced and verified
by Mirantis engineers. Mirantis provides L1 and L2 support to Mirantis OpenStack users with
NetScaler, and will escalate L3 issues to Citrix engineers for support.
Conclusion
Citrix NetScaler ADC brings significant benefits to OpenStack cloud operations, administration,
performance and resilience, particularly as clouds grow larger, extend to multiple regions, and as
multi-tenant demands on each cloud data center become more diverse. As we hope this white
paper demonstrates, NetScaler's overall architecture and HA strategies dovetail well with proven
OpenStack best practices for building resilient scale-out clouds.
Next Steps
Readers interested in learning more about Citrix NetScaler ADC solutions and Mirantis OpenStack are
encouraged to begin by visiting the Citrix partner page on mirantis.com. The latest Mirantis OpenStack
distribution can be downloaded free of charge at http://software.mirantis.com and comes with
30 days of complimentary support -- ideal for evaluation and PoC implementations.
Mirantis is happy to discuss your plans to evaluate or implement OpenStack clouds with Citrix
NetScaler ADC, and can support your efforts with a range of engineering services, including
Architectural Design Assessments that put the knowledge and expertise of Mirantis cloud architects
at your disposal quickly for concrete input and direction. To schedule an Architecture Design
Assessment (ADA), please contact us at https://online.mirantis.com/contact-us.
Resources
Mirantis OpenStack Documentation (6.0)
Mirantis Reference Architectures (for HA, Neutron-network, Ceph)
Mirantis Bill of Materials Calculator
OpenStack community documentation
OpenStack Architecture Design Guide
OpenStack Scaling
Corporate Headquarters
Fort Lauderdale, FL, USA
UK Development Center
Chalfont, United Kingdom
EMEA Headquarters
Schaffhausen, Switzerland
Pacific Headquarters
Hong Kong, China
About Citrix
Citrix (NASDAQ:CTXS) is leading the transition to software-defining the workplace, uniting virtualization, mobility management,
networking and SaaS solutions to enable new ways for businesses and people to work better. Citrix solutions power business mobility
through secure, mobile workspaces that provide people with instant access to apps, desktops, data and communications on any device,
over any network and cloud. With annual revenue in 2014 of $3.14 billion, Citrix solutions are in use at more than 330,000 organizations
and by over 100 million users globally. Learn more at www.citrix.com.
Copyright © 2015 Citrix Systems, Inc. All rights reserved. Citrix, NetScaler, NetScaler App Delivery Controller and TriScale are trademarks of
Citrix Systems, Inc. and/or one of its subsidiaries, and may be registered in the U.S. and other countries. Other product and company names
mentioned herein may be trademarks of their respective companies.
About Mirantis
Mirantis is the leading pure-play OpenStack company, creator of the highly-praised Mirantis OpenStack distribution. Mirantis is currently
the #3 contributor to OpenStack core and has built more large-scale enterprise and service-provider OpenStack clouds than any other
entity. Mirantis OpenStack incorporates a sophisticated pre-configuration and deployment tool (Fuel) that substantially automates creation
of robust OpenStack clouds in High Availability (HA) configurations. Mirantis is a founding member of the OpenStack Foundation.
© 2015 Mirantis. Mirantis and the Mirantis logo are registered trademarks of Mirantis in the U.S. and other countries. Third party trademarks
mentioned are the property of their respective owners.