
OpenNebula 4.6 Design and Installation Guide
Release 4.6
OpenNebula Project
April 28, 2014
CONTENTS
1 Building your Cloud
1.1 An Overview of OpenNebula
1.2 Understanding OpenNebula
1.3 Planning the Installation
1.4 Installing the Software
1.5 Glossary
2 Quick Starts
2.1 Quickstart: OpenNebula on CentOS 6 and KVM
2.2 Quickstart: OpenNebula on CentOS 6 and Xen
2.3 Quickstart: OpenNebula on CentOS 6 and ESX 5.x
2.4 Quickstart: OpenNebula on Ubuntu 12.04 and KVM
2.5 Quickstart: Create Your First vDC
CHAPTER ONE
BUILDING YOUR CLOUD
1.1 An Overview of OpenNebula
OpenNebula is the open-source industry standard for data center virtualization, offering a simple but feature-rich and flexible solution to build and manage enterprise clouds and virtualized data centers. OpenNebula is designed to be simple: simple to install, update and operate by the admins, and simple to use by end users. Being focused on simplicity, we integrate with existing technologies whenever possible. You'll see that OpenNebula works with MySQL, Ceph, LVM, GlusterFS, Open vSwitch, LDAP... This allows us to deliver a light, flexible and robust cloud manager. This introductory guide gives an overview of OpenNebula and summarizes its main benefits for the different stakeholders involved in a cloud computing infrastructure.
1.1.1 What Are the Key Features Provided by OpenNebula?
You can refer to our summarized table of Key Features or to the Detailed Features and Functionality Guide included in the documentation of each version.
1.1.2 What Are the Interfaces Provided by OpenNebula?
OpenNebula provides many different interfaces that can be used to interact with the functionality offered to manage physical and virtual resources. There are four main perspectives to interact with OpenNebula:
- Cloud interfaces for Cloud Consumers, like the OCCI and EC2 Query and EBS interfaces, and a simple Sunstone cloud user view that can be used as a self-service portal.
- Administration interfaces for Cloud Advanced Users and Operators, like a Unix-like command line interface and the powerful Sunstone GUI.
- Extensible low-level APIs for Cloud Integrators, with Ruby, Java and XML-RPC bindings.
- A Marketplace for Appliance Builders, with a catalog of virtual appliances ready to run in OpenNebula environments.
1.1.3 What Does OpenNebula Offer to Cloud Consumers?
OpenNebula provides a powerful, scalable and secure multi-tenant cloud platform for fast delivery and elasticity of virtual resources. Multi-tier applications can be deployed and consumed as pre-configured virtual appliances from catalogs.
- Image Catalogs: OpenNebula allows you to store disk images in catalogs (termed datastores), which can then be used to define VMs or shared with other users. The images can be OS installations, persistent data sets or empty data blocks that are created within the datastore.
- Network Catalogs: Virtual networks can also be organised in network catalogs, and provide means to interconnect virtual machines. These resources can be defined as fixed or ranged networks, and can be used to achieve full isolation between virtual networks.
- VM Template Catalog: The template catalog system allows you to register virtual machine definitions in the system, to be instantiated later as virtual machine instances.
- Virtual Resource Control and Monitoring: Once a template is instantiated to a virtual machine, there are a number of operations that can be performed to control the lifecycle of the virtual machine instances, such as migration (live and cold), stop, resume, cancel, poweroff, etc. (a hedged sketch of these operations is shown after this list).
- Multi-tier Cloud Application Control and Monitoring: OpenNebula allows you to define, execute and manage multi-tiered elastic applications, or services composed of interconnected Virtual Machines with deployment dependencies between them and auto-scaling rules.
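As a quick, hedged illustration of these lifecycle operations (the VM ID 23 and host name host02 are made-up examples), a cloud consumer could run:

$ onevm migrate --live 23 host02    # move the running VM to host02 with no downtime
$ onevm poweroff 23                 # power the VM off, keeping the instance
$ onevm resume 23                   # power it back on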
1.1.4 What Does OpenNebula Offer to Cloud Operators?
OpenNebula is composed of the following subsystems:
- Users and Groups: OpenNebula features advanced multi-tenancy with powerful users and groups management, fine-grained ACLs for resource allocation, and resource quota management to track and limit computing, storage and networking utilization.
- Virtualization: Various hypervisors are supported in the virtualization manager, with the ability to control the complete lifecycle of Virtual Machines and multiple hypervisors in the same cloud infrastructure.
- Hosts: The host manager provides complete functionality for the management of the physical hosts in the cloud.
- Monitoring: Virtual resources as well as hosts are periodically monitored for key performance indicators. The information can then be used by a powerful and flexible scheduler for the definition of workload and resource-aware allocation policies. You can also gain insight into application status and performance.
- Accounting: A configurable accounting system to visualize and report resource usage data, to allow its integration with chargeback and billing platforms, or to guarantee fair share of resources among users.
- Networking: An easily adaptable and customizable network subsystem is present in OpenNebula in order to better integrate with the specific network requirements of existing data centers and to allow full isolation between the virtual machines that compose a virtualised service.
- Storage: The support for multiple datastores in the storage subsystem provides extreme flexibility in planning the storage backend and important performance benefits.
- Security: This feature is spread across several subsystems: authentication and authorization mechanisms allowing for various possible mechanisms to identify and authorize users, a powerful Access Control List mechanism allowing different role management with fine-grained permission granting over any resource managed by OpenNebula, support for isolation at different levels...
- High Availability: Support for HA architectures and configurable behavior in the event of host or VM failure to provide easy to use and cost-effective failover solutions.
- Clusters: Clusters are pools of hosts that share datastores and virtual networks. Clusters are used for load balancing, high availability, and high performance computing.
- Multiple Zones: The Data Center Federation functionality allows for the centralized management of multiple instances of OpenNebula for scalability, isolation and multiple-site support.
- VDCs: An OpenNebula instance (or Zone) can be further compartmentalized into Virtual Data Centers (VDCs), which offer fully-isolated virtual infrastructure environments where a group of users, under the control of the VDC administrator, can create and manage compute, storage and networking capacity.
- Cloud Bursting: OpenNebula gives support to build a hybrid cloud, an extension of a private cloud that combines local resources with resources from remote cloud providers. A whole public cloud provider can be encapsulated as a local resource, to be able to use extra computational capacity to satisfy peak demands.
- App Market: OpenNebula allows the deployment of a private centralized catalog of cloud applications to share and distribute virtual appliances across OpenNebula instances.
1.1.5 What Does OpenNebula Offer to Cloud Builders?
OpenNebula offers broad support for commodity and enterprise-grade hypervisor, monitoring, storage, networking
and user management services:
- User Management: OpenNebula can validate users using its own internal user database based on passwords, or external mechanisms, like SSH, x509, LDAP or Active Directory.
- Virtualization: Several hypervisor technologies are fully supported, like Xen, KVM and VMware.
- Monitoring: OpenNebula provides its own customizable and highly scalable monitoring system, and can also be integrated with external data center monitoring tools.
- Networking: Virtual networks can be backed by 802.1Q VLANs, ebtables, Open vSwitch or VMware networking.
- Storage: Multiple backends are supported, like the regular (shared or not) filesystem datastore supporting popular distributed file systems like NFS, Lustre, GlusterFS, ZFS, GPFS, MooseFS...; the VMware datastore (either regular filesystem or VMFS based) specialized for the VMware hypervisor, which handles the vmdk format; the LVM datastore to store disk images in block device form; and Ceph for distributed block devices.
- Databases: Aside from the original SQLite backend, MySQL is also supported.
- Cloud Bursting: Out-of-the-box connectors are shipped to support Amazon EC2 cloud bursting.
1.1.6 What Does OpenNebula Offer to Cloud Integrators?
OpenNebula is fully platform independent and offers many tools for cloud integrators:
- Modular and extensible architecture with customizable plug-ins for integration with any third-party data center service.
- API for integration with higher level tools such as billing, self-service portals... that offers all the rich functionality of the OpenNebula core, with bindings for Ruby and Java.
- Sunstone Server custom routes to extend the Sunstone server.
- OneFlow API to create, control and monitor multi-tier applications or services composed of interconnected Virtual Machines.
- Hook Manager to trigger administration scripts upon VM state change.
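As a minimal sketch of the Hook Manager, a hypothetical entry like the following in /etc/one/oned.conf (the hook name and script path are assumptions, not shipped defaults) would run an administration script whenever a VM reaches the DONE state:

VM_HOOK = [
    name      = "log_done",                    # arbitrary hook name
    on        = "DONE",                        # VM state that triggers the hook
    command   = "/usr/local/bin/vm_done.sh",   # hypothetical admin script
    arguments = "$ID" ]                        # pass the VM ID to the script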
1.2 Understanding OpenNebula
This guide is meant for the cloud architect and administrator, to help him to understand the way OpenNebula catego-
rizes the infrastructure resources, and how they are consumed by the users.
In a tiny installation with a few hosts, you can use OpenNebula with the two default groups for the administrator and
the users, without giving much though to the infrastructure partitioning and user organization. But for medium and
big deployments you will probably want to provide some level of isolation and structure.
Although OpenNebula has been designed and developed to be easy to adapt to each individual company use case and
processes, and perform ne-tuning of multiple aspects, OpenNebula brings a pre-dened model for cloud provisioning
and consumption.
The OpenNebula model is a result of our collaboration with our user community during the last years.
1.2.1 The Infrastructure Perspective
Common large IT shops have multiple Data Centers (DCs), each one of them consisting of several physical Clusters of infrastructure resources (hosts, networks and storage). These Clusters could present different architectures and software/hardware execution environments to fulfill the needs of different workload profiles. Moreover, many organizations have access to external public clouds to build hybrid cloud scenarios, where the private capacity of the Data Centers is supplemented with resources from external clouds to address peaks of demand. Sysadmins need a single comprehensive framework to dynamically allocate all these available resources to the multiple groups of users.
For example, you could have two Data Centers in different geographic locations, Europe and USA West Coast, and an agreement for cloudbursting with a public cloud provider, such as Amazon. Each Data Center runs its own full OpenNebula deployment. Multiple OpenNebula installations can be configured as a federation, and in this case they will share the same user accounts, groups, and permissions across Data Centers.
1.2.2 The Organizational Perspective
Users are organized into Groups (also called Projects, Domains, Tenants...). A Group is an authorization boundary that can be seen as a business unit if you are considering a private cloud, or as a complete new company if it is a public cloud.
A Group is simply a boundary; you need to populate resources into the Group, which can then be consumed by the users of that Group. A vDC (Virtual Data Center) is a Group plus assigned Resource Providers. A Resource Provider is a Cluster of infrastructure resources (physical hosts, networks, storage and external clouds) from one of the Data Centers.
Different authorization scenarios can be enabled with the powerful and configurable ACL system provided, from the definition of vDC Admins to the privileges of the users that can deploy virtual machines. Each vDC can execute different types of workload profiles with different performance and security requirements.
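As a brief, hedged illustration of the ACL system (the group ID 105 is a made-up example), a Cloud Admin could allow every user in group 105 to create virtual resources with a single rule:

$ oneacl create "@105 VM+NET+IMAGE+TEMPLATE/* CREATE"

See the managing ACL rules guide for the full rule grammar.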
The following are common enterprise use cases in large cloud computing deployments:
- On-premise Private Clouds Serving Multiple Projects, Departments, Units or Organizations. On-premise private clouds in large organizations require powerful and flexible mechanisms to manage the access privileges to the virtual and physical infrastructure and to dynamically allocate the available resources. In these scenarios, the Cloud Administrator would define a vDC for each Department, dynamically allocating resources according to their needs, and delegating the internal administration of the vDC to the Department IT Administrator.
- Cloud Providers Offering Virtual Private Cloud Computing. Cloud providers providing customers with a fully-configurable and isolated environment where they have full control and capacity to administer their users and resources. This combines a public cloud with the control usually seen in a personal private cloud system.
For example, you can think of Web Development, Human Resources, and Big Data Analysis as business units represented by vDCs in a private OpenNebula cloud:
- BLUE: Allocation of (ClusterA-DC_West_Coast + Cloudbursting) to Web Development
- RED: Allocation of (ClusterB-DC_West_Coast + ClusterA-DC_Europe + Cloudbursting) to Human Resources
- GREEN: Allocation of (ClusterC-DC_West_Coast + ClusterB-DC_Europe) to Big Data Analysis
1.2.3 A Cloud Provisioning Model Based on vDCs
A vDC is a fully-isolated virtual infrastructure environment where a Group of users, optionally under the control of the vDC admin, can create and manage compute and storage capacity. The users in the vDC, including the vDC administrator, only see the virtual resources, not the underlying physical infrastructure. The physical resources allocated by the cloud administrator to the vDC can be completely dedicated to the vDC, providing isolation at the physical level too.
The privileges of the vDC users and the administrator regarding the operations over the virtual resources created by other users can be configured. In a typical scenario the vDC administrator can upload and create images and virtual machine templates, while the users can only instantiate virtual machine templates to create their machines. The administrators of the vDC have full control over other users' resources and can also create new users in the vDC.
Users can then access their vDC through any of the existing OpenNebula interfaces, such as the CLI, Sunstone Cloud
View, OCA, or the OCCI and AWS APIs. vDC administrators can manage their vDCs through the CLI or the vDC
admin view in Sunstone. Cloud Administrators can manage the vDCs through the CLI or Sunstone.
The Cloud provisioning model based on vDCs enables an integrated, comprehensive framework to dynamically pro-
vision the infrastructure resources in large multi-datacenter environments to different customers, business units or
groups. This brings several benets:
- Partitioning of cloud physical resources between Groups of users
- Complete isolation of users, organizations or workloads
- Allocation of Clusters with different levels of security, performance or high availability
- Containers for the execution of software-defined data centers
- A way of hiding physical resources from Group members
- Simple federation, scalability and cloudbursting of private cloud infrastructures beyond a single cloud instance and data center
1.2.4 Cloud Usage Models
OpenNebula has three pre-defined user roles to implement two typical enterprise cloud scenarios: infrastructure management and infrastructure provisioning.
In both scenarios, the Cloud Administrator manages the physical infrastructure, creates users and vDCs, and prepares base templates and images for the other users.
Role: Cloud Admin. Capabilities:
- Operates the Cloud infrastructure (i.e. computing nodes, networking fabric, storage servers)
- Creates and manages OpenNebula infrastructure resources: Hosts, Virtual Networks, Datastores
- Creates and manages Application Flows
- Creates new groups for vDCs
- Assigns resource providers to a vDC and sets quota limits
- Defines base instance types to be used by the vDCs. These types define the capacity of the VMs (memory, CPU and additional storage) and connectivity.
- Prepares VM images to be used by the vDCs
- Monitors the status and health of the cloud
- Generates activity reports
Infrastructure Management
In this usage model, users are familiar with virtualization concepts. Except for the infrastructure resources, the web interface offers the same operations available to the Cloud Admin.
End users can use the templates and images pre-defined by the cloud administrator, but are also allowed to create their own. They are also able to manage the life-cycle of their resources, including advanced features that may harm the VM guests, like hot-plugging of new disks, resizing of Virtual Machines, modifying boot parameters, etc.
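A couple of hedged examples of these advanced operations, assuming a VM with ID 7 and the image name used later in the quickstart guides (resizing typically requires the VM to be powered off first):

$ onevm disk-attach 7 --image "CentOS-6.4_x86_64"   # hot-plug a new disk
$ onevm resize 7 --memory 2048                      # grow the VM memory to 2048 MB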
Role: User. Capabilities:
- Instantiates VMs using their own templates
- Creates new Images
- Manages their VMs, including advanced life-cycle features
- Creates and manages Application Flows
- Checks their usage and quotas
- Uploads SSH keys to access the VMs
Infrastructure Provisioning
In an infrastructure provisioning model, the end users access a simplified web interface that allows them to launch Virtual Machines from pre-defined Templates and Images. They can access their VMs, and perform basic operations like shutdown. The changes made to a VM disk can be saved back, but new Images cannot be created from scratch.
Optionally, each vDC can define one or more users as vDC Admins. These admins can create new users inside the vDC, and also manage the resources of the rest of the users. A vDC Admin may, for example, shut down another user's VM to free group quota usage.
Role: vDC Admin. Capabilities:
- Creates new users in the vDC
- Operates on vDC virtual machines and disk images
- Creates and registers disk images to be used by the vDC users
- Checks vDC usage and quotas
Role: vDC User. Capabilities:
- Instantiates VMs using the templates defined by the Cloud Admins and the images defined by the Cloud Admins or vDC Admins
- Instantiates VMs using their own Images saved from a previous running VM
- Manages their VMs, including:
  - reboot
  - power off/on (short-term switching-off)
  - shutdown
  - making a VM image snapshot
  - obtaining basic monitoring information and status (including IP addresses)
- Deletes any previous disk snapshot
- Checks user usage and quotas
- Uploads SSH keys to access the VMs
1.3 Planning the Installation
In order to get the most out of an OpenNebula Cloud, we recommend that you create a plan with the features, performance, scalability, and high availability characteristics you want in your deployment. This guide provides information to plan an OpenNebula installation, so you can easily architect your deployment and understand the technologies involved in the management of virtualized resources and their relationship.
1.3.1 Architectural Overview
OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture with a front-end, and
a set of hosts where Virtual Machines (VM) will be executed. There is at least one physical network joining all the
hosts with the front-end.
The basic components of an OpenNebula system are:
- Front-end that executes the OpenNebula services.
- Hypervisor-enabled hosts that provide the resources needed by the VMs.
- Datastores that hold the base images of the VMs.
- Physical networks used to support basic services such as interconnection of the storage servers and OpenNebula control operations, and VLANs for the VMs.
OpenNebula presents a highly modular architecture that offers broad support for commodity and enterprise-grade hypervisor, monitoring, storage, networking and user management services. This guide briefly describes the different choices that you can make for the management of the different subsystems. If your specific services are not supported, we recommend checking the drivers available in the Add-on Catalog. We also provide information and support about how to develop new drivers.
1.3.2 Front-End
The machine that holds the OpenNebula installation is called the front-end. This machine needs network connectivity
to each host, and possibly access to the storage Datastores (either by direct mount or network). The base installation
of OpenNebula takes less than 50MB.
OpenNebula services include:
- Management daemon (oned) and scheduler (mm_sched)
- Web interface server (sunstone-server)
Warning: Note that these components communicate through XML-RPC and may be installed in different machines for security or performance reasons.
There are several certified platforms to act as front-end for each version of OpenNebula. Refer to the platform notes and choose the one that best fits your needs.
OpenNebula's default database backend is SQLite. If you are planning a production or medium to large scale deployment, you should consider using MySQL.
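As a minimal sketch, the switch to MySQL is made in the DB section of /etc/one/oned.conf; the server, user and password below are example values that must match your own MySQL setup:

DB = [ backend = "mysql",
       server  = "localhost",    # host running the MySQL server
       port    = 0,              # 0 selects the default MySQL port
       user    = "oneadmin",     # example credentials; adjust to your setup
       passwd  = "oneadmin",
       db_name = "opennebula" ]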
If you are interested in setting up a highly available cluster for OpenNebula, check the OpenNebula High Availability Guide.
The maximum number of servers (virtualization hosts) that can be managed by a single OpenNebula instance (zone) strongly depends on the performance and scalability of the underlying platform infrastructure, mainly the storage subsystem. We do not recommend more than 500 servers within each zone, but there are users with 1,000 servers in each zone. You may find the guide about how to tune OpenNebula for large deployments interesting.
1.3.3 Monitoring
The monitoring subsystem gathers information relative to the hosts and the virtual machines, such as the host status,
basic performance indicators, as well as VM status and capacity consumption. This information is collected by execut-
ing a set of static probes provided by OpenNebula. The output of these probes is sent to OpenNebula in two different
ways:
- UDP-push Model: Each host periodically sends monitoring data via UDP to the frontend, which collects it and processes it in a dedicated module. This model is highly scalable and its limit (in terms of number of VMs monitored per second) is bounded by the performance of the server running oned and the database server. Please read the UDP-push guide for more information.
- Pull Model: OpenNebula periodically and actively queries each host and executes the probes via ssh. This mode is limited by the number of active connections that can be made concurrently, as hosts are queried sequentially. Please read the KVM and Xen SSH-pull guide or the ESX-pull guide for more information.
Warning: Default: UDP-push Model is the default IM for KVM and Xen in OpenNebula >= 4.4.
Please check the Monitoring Guide for more details.
1.3.4 Virtualization Hosts
The hosts are the physical machines that will run the VMs. There are several certified platforms to act as nodes for each version of OpenNebula. Refer to the platform notes and choose the one that best fits your needs. The Virtualization Subsystem is the component in charge of talking with the hypervisor installed in the hosts and taking the actions needed for each step in the VM lifecycle.
OpenNebula natively supports three hypervisors:
- Xen
- KVM
- VMware
Warning: Default: OpenNebula is configured to interact with hosts running KVM.
Please check the Virtualization Guide for more details on the supported virtualization technologies.
If you are interested in failover protection against hardware and operating system outages within your virtualized IT
environment, check the Virtual Machines High Availability Guide.
1.3.5 Storage
OpenNebula uses Datastores to handle the VM disk Images. A Datastore is any storage medium used to store disk images for VMs; previous versions of OpenNebula referred to this concept as the Image Repository. Typically, a datastore
will be backed by SAN/NAS servers. In general, each Datastore has to be accessible through the front-end using any suitable technology: NAS, SAN or direct-attached storage.
When a VM is deployed, the Images are transferred from the Datastore to the hosts. Depending on the actual storage technology used, this can mean a real transfer, a symbolic link or setting up an LVM volume.
OpenNebula is shipped with 3 different datastore classes:
- System Datastores to hold images for running VMs. Depending on the storage technology used, these temporary images can be complete copies of the original image, qcow deltas or simple filesystem links.
- Image Datastores to store the disk image repository. Disk images are moved, or cloned, to/from the System datastore when the VMs are deployed or shut down, or when disks are attached or snapshotted.
- File Datastore, a special datastore used to store plain files rather than disk images. The plain files can be used as kernels, ramdisks or context files.
Image datastores can be of different types, depending on the underlying storage technology:
- File-system, to store disk images in file form. The files are stored in a directory mounted from a SAN/NAS server.
- vmfs, a datastore specialized in the VMFS format, to be used with VMware hypervisors. It cannot be mounted in the OpenNebula front-end since VMFS is not *nix compatible.
- LVM, a datastore driver that provides OpenNebula with the possibility of using LVM volumes instead of plain files to hold the Virtual Images. This reduces the overhead of having a file-system in place and thus increases performance.
- Ceph, to store disk images using Ceph block devices.
Warning: Default: the system and image datastores are configured to use a shared filesystem.
Please check the Storage Guide for more details.
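As a hedged sketch (the datastore name is a made-up example), a new shared filesystem image datastore could be registered from a short template file:

$ cat ds.conf
NAME   = production
DS_MAD = fs        # filesystem datastore driver
TM_MAD = shared    # shared transfer manager, e.g. backed by NFS
$ onedatastore create ds.conf
ID: 100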
1.3.6 Networking
OpenNebula provides an easily adaptable and customizable network subsystem in order to better integrate with the specific network requirements of existing datacenters. At least two different physical networks are needed:
- A service network, used by the OpenNebula front-end daemons to access the hosts in order to manage and monitor the hypervisors, and to move image files. It is highly recommended to install a dedicated network for this purpose.
- An instance network, needed to offer network connectivity to the VMs across the different hosts. To make effective use of your VM deployments you'll probably need to make one or more physical networks accessible to them.
The OpenNebula administrator may associate one of the following drivers to each Host:
- dummy: default driver that doesn't perform any network operation. Firewalling rules are also ignored.
- fw: firewall rules are applied, but networking isolation is ignored.
- 802.1Q: restricts network access through VLAN tagging, which also requires support from the hardware switches.
- ebtables: restricts network access through ebtables rules. No special hardware configuration required.
- ovswitch: restricts network access with the Open vSwitch Virtual Switch.
- VMware: uses the VMware networking infrastructure to provide an isolated and 802.1Q compatible network for VMs launched with the VMware hypervisor.
Warning: Default: the default configuration connects the virtual machine network interface to a bridge in the physical host.
Please check the Networking Guide to find out more about the networking technologies supported by OpenNebula. A hedged example of choosing a driver per host is shown below.
1.3.7 Authentication
You can choose from the following authentication models to access OpenNebula:
- Built-in User/Password
- SSH Authentication
- X509 Authentication
- LDAP Authentication
Warning: Default: OpenNebula comes by default with an internal built-in user/password authentication.
Please check the External Auth guide to find out more about the auth technologies supported by OpenNebula.
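As a brief sketch (the user names and certificate DN are made-up examples), the authentication driver is selected when a user is created:

$ oneuser create alice secretpassword                     # built-in user/password
$ oneuser create bob "/C=ES/O=ONE/CN=bob" --driver x509   # x509 certificate DN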
1.3.8 Advanced Components
Once you have an OpenNebula cloud up and running, you can install the following advanced components:
- Application Flow and Auto-scaling: OneFlow allows users and administrators to define, execute and manage multi-tiered applications, or services composed of interconnected Virtual Machines with deployment dependencies between them. Each group of Virtual Machines is deployed and managed as a single entity, and is completely integrated with the advanced OpenNebula user and group management.
- Cloud Bursting: Cloud bursting is a model in which the local resources of a Private Cloud are combined with resources from remote Cloud providers. Such support for cloud bursting enables highly scalable hosting environments.
- Public Cloud: Cloud interfaces can be added to your Private Cloud if you want to provide partners or external users with access to your infrastructure, or to sell your overcapacity. The following interfaces provide simple and remote management of cloud (virtual) resources at a high abstraction level: Amazon EC2 and EBS APIs or OGF OCCI.
- Application Insight: OneGate allows Virtual Machine guests to push monitoring information to OpenNebula. Users and administrators can use it to gather metrics, detect problems in their applications, and trigger OneFlow auto-scaling rules.
1.4 Installing the Software
This page shows you how to install OpenNebula from the binary packages.
1.4.1 Step 1. Front-end Installation
Using the packages provided on our site is the recommended method, to ensure the installation of the latest version and to avoid possible package divergences between distributions. There are two alternatives here: to install OpenNebula you can add our package repositories to your system, or visit the software menu to download the latest package for your Linux distribution.
Do not forget that we offer Quickstart guides for:
- OpenNebula on CentOS and KVM
- OpenNebula on CentOS and Xen
- OpenNebula on CentOS and VMware
- OpenNebula on Ubuntu and KVM
If there are no packages for your distribution, head to the Building from Source Code guide.
1.1. Installing on CentOS/RHEL
Before installing:
Activate the EPEL repo.
There are packages for the front-end, split into the various components that make up OpenNebula, and packages for the virtualization host.
To install a CentOS/RHEL OpenNebula front-end with packages from our repository, execute the following as root:
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/CentOS/6/stable/\$basearch
enabled=1
gpgcheck=0
EOT
# yum install opennebula-server opennebula-sunstone opennebula-ruby
CentOS/RHEL Package Description
These are the packages available for this distribution:
opennebula-server: Main OpenNebula daemon, scheduler, etc
opennebula-sunstone: OpenNebula Sunstone, EC2, OCCI
opennebula-ozones: OpenNebula OZones
opennebula-ruby: Ruby Bindings
opennebula-java: Java Bindings
opennebula-gate: Gate server that enables communication between VMs and OpenNebula
opennebula-flow: Manages services and elasticity
opennebula-node-kvm: Meta-package that installs the oneadmin user, libvirt and kvm
1.2. Installing on openSUSE
Before installing:
Activate the PackMan repo.
# zypper ar -f -n packman http://packman.inode.at/suse/openSUSE_12.3 packman
To install an openSUSE OpenNebula front-end with packages from our repository, execute the following as root:
# zypper addrepo --no-gpgcheck --refresh -t YUM http://downloads.opennebula.org/repo/openSUSE/12.3/stable/x86_64 opennebula
# zypper refresh
# zypper install opennebula opennebula-sunstone
Alternatively, to install an openSUSE OpenNebula front-end from a downloaded package tarball, execute the following as root:
# tar xvzf openSUSE-12.3-<OpenNebula version>.tar.gz
# zypper install opennebula opennebula-sunstone
After installation you need to manually create /var/lib/one/.one/one_auth with the following contents:
oneadmin:<password>
openSUSE Package Description
These are the packages available for this distribution:
opennebula: main OpenNebula binaries
opennebula-devel: Examples, manpages and install_gems (depends on opennebula)
opennebula-zones: OpenNebula OZones (depends on opennebula)
opennebula-sunstone: OpenNebula Sunstone (depends on opennebula)
1.3. Installing on Debian/Ubuntu
Also the JSON ruby library packaged with Debian 6 is not compatible with ozones. To make it work a new gem should
be installed and the old one disabled. You can do so executing these commands:
$ sudo gem install json
$ sudo mv /usr/lib/ruby/1.8/json.rb /usr/lib/ruby/1.8/json.rb.no
To install OpenNebula on a Debian/Ubuntu front-end with packages from our repositories, execute as root:
# wget http://downloads.opennebula.org/repo/Debian/repo.key
# apt-key add repo.key
Debian
# echo "deb http://downloads.opennebula.org/repo/Debian/7 stable opennebula" > /etc/apt/sources.list.d/opennebula.list
Ubuntu 12.04
# echo "deb http://downloads.opennebula.org/repo/Ubuntu/12.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list
Ubuntu 14.04
# echo "deb http://downloads.opennebula.org/repo/Ubuntu/14.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list
To install the packages on a Debian/Ubuntu front-end:
# apt-get update
# apt-get install opennebula opennebula-sunstone
Debian/Ubuntu Package Description
These are the packages available for these distributions:
opennebula-common: provides the user and common files
libopennebula-ruby: all ruby libraries
opennebula-node: prepares a node as an opennebula-node
opennebula-sunstone: OpenNebula Sunstone Web Interface
opennebula-tools: Command Line interface
opennebula-gate: Gate server that enables communication between VMs and OpenNebula
opennebula-flow: Manages services and elasticity
opennebula: OpenNebula Daemon
1.4.2 Step 2. Ruby Runtime Installation
Some OpenNebula components need Ruby libraries. OpenNebula provides a script that installs the required gems as well as some development library packages that are needed.
As root execute:
# /usr/share/one/install_gems
The previous script is prepared to detect common Linux distributions and install the required libraries. If it fails to find the packages needed in your system, manually install these packages:
- sqlite3 development library
- mysql client development library
- curl development library
- libxml2 and libxslt development libraries
- ruby development library
- gcc and g++
- make
If you want to install only a set of gems for a specific component, read Building from Source Code where it is explained in more depth.
For cloud bursting, a newer nokogiri gem than the one packaged by current distros is required. If you are planning to use cloud bursting, you need to install nokogiri >= 1.4.4 prior to running install_gems:
# sudo gem install nokogiri -v 1.4.4
1.4.3 Step 3. Starting OpenNebula
Log in as the oneadmin user and follow these steps:
If you installed from packages, you should have the ~/.one/one_auth file created with a randomly-generated password. Otherwise, set oneadmin's OpenNebula credentials (username and password) by adding the following to ~/.one/one_auth (change password to the desired password):
$ mkdir ~/.one
$ echo "oneadmin:password" > ~/.one/one_auth
$ chmod 600 ~/.one/one_auth
Warning: This will set the oneadmin password on the first boot. From that point, you must use the oneuser passwd command to change oneadmin's password.
You are ready to start the OpenNebula daemons:
$ one start
Warning: Remember to always start OpenNebula as oneadmin!
1.4.4 Step 4. Verifying the Installation
After OpenNebula is started for the first time, you should check that the commands can connect to the OpenNebula daemon. In the front-end, run the onevm command as oneadmin:
$ onevm list
ID USER GROUP NAME STAT CPU MEM HOSTNAME TIME
If instead of an empty list of VMs you get an error message, then the OpenNebula daemon could not be started
properly:
$ onevm list
Connection refused - connect(2)
The OpenNebula logs are located in /var/log/one; you should have at least the files oned.log and sched.log, the core and scheduler logs. Check oned.log for any error messages, marked with [E].
Warning: The first time OpenNebula is started, it performs some SQL queries to check if the DB exists and if it needs a bootstrap. You will have two error messages in your log similar to these ones; they can be ignored:
[ONE][I]: Checking database version.
[ONE][E]: (..) error: no such table: db_versioning
[ONE][E]: (..) error: no such table: user_pool
[ONE][I]: Bootstraping OpenNebula database.
After installing the opennebula packages, the front-end follows the standard layout used elsewhere in this guide: configuration in /etc/one, logs in /var/log/one, and /var/lib/one as the oneadmin home directory (including the datastores).
1.4.5 Step 5. Node Installation
5.1. Installing on CentOS/RHEL
When the front-end is installed and verified, it is time to install the packages for the nodes if you are using KVM. To install a CentOS/RHEL OpenNebula node with packages from our repository, execute the following as root:
# yum install opennebula-node-kvm
For further conguration and/or installation of other hypervisors, check their specic guides: Xen, KVM and VMware.
5.2. Installing on openSUSE
When the front-end is installed, it is time to install the virtualization nodes. Depending on the chosen hypervisor,
check their specic guides: Xen, KVM and VMware.
5.3. Installing on Debian/Ubuntu
When the front-end is installed, it is time to install the packages for the nodes if you are using KVM. To install a Debian/Ubuntu OpenNebula node with packages from our repository, add the repo as described in the previous section and then install the node package:
$ sudo apt-get install opennebula-node
For further conguration and/or installation of other hypervisors, check their specic guides: Xen, KVM and VMware.
1.4.6 Step 6. Manual Configuration of Unix Accounts
Warning: This step can be skipped if you have installed the kvm node package for CentOS or Ubuntu, as it has
already been taken care of.
The OpenNebula package installation creates a new user and group named oneadmin in the front-end. This account will be used to run the OpenNebula services and to do regular administration and maintenance tasks. That means that you eventually need to log in as that user, or to use the sudo -u oneadmin method.
The hosts also need this user created and configured. Make sure you change the uid and gid to the ones you have in the frontend.
Get the user and group id of oneadmin. This id will be used later to create users in the hosts with the same id.
In the front-end, execute as oneadmin:
$ id oneadmin
uid=1001(oneadmin) gid=1001(oneadmin) groups=1001(oneadmin)
In this case the user id is 1001 and the group id is also 1001.
Then log as root in your hosts and follow these steps:
Create the oneadmin group. Make sure that its id is the same as in the frontend. In this example 1001:
# groupadd --gid 1001 oneadmin
Create the oneadmin account; we will use the OpenNebula var directory as the home directory for this user.
# useradd --uid 1001 -g oneadmin -d /var/lib/one oneadmin
Warning: You can use any other method to make a common oneadmin group and account in the nodes, for
example NIS.
1.4.7 Step 7. Manual Configuration of Secure Shell Access
You need to create ssh keys for the oneadmin user and configure the host machines so it can connect to them using ssh without the need for a password.
Follow these steps in the front-end:
Generate oneadmin ssh keys:
$ ssh-keygen
When prompted for a password, press enter so the private key is not encrypted.
Append the public key to ~/.ssh/authorized_keys to let the oneadmin user log in without the need to type a password.
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Many distributions (RHEL/CentOS for example) have permission requirements for public key authentication to work:
$ chmod 700 ~/.ssh/
$ chmod 600 ~/.ssh/id_rsa.pub
$ chmod 600 ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/authorized_keys
Tell the ssh client not to ask before adding hosts to the known_hosts file. It is also a good idea to reduce the connection timeout in case of network problems. This is configured in ~/.ssh/config; see man ssh_config for a complete reference:
$ cat ~/.ssh/config
ConnectTimeout 5
Host *
    StrictHostKeyChecking no
Check that the sshd daemon is running in the hosts. Also remove any Banner option from the sshd_config file in the hosts.
Finally, copy the front-end /var/lib/one/.ssh directory to each one of the hosts in the same path.
To test your configuration, just verify that oneadmin can log in the hosts without being prompted for a password.
1.4.8 Step 8. Networking Configuration
A network connection is needed by the OpenNebula front-end daemons to access the hosts to manage and monitor the hypervisors, and to move image files. It is highly recommended to install a dedicated network for this purpose.
There are various network models (please check the Networking guide to find out about the networking technologies supported by OpenNebula), but they all have something in common. They rely on network bridges with the same name in all the hosts to connect Virtual Machines to the physical network interfaces.
The simplest network model corresponds to the dummy drivers, where only the network bridges are needed.
For example, a typical host with two physical networks, one for public IP addresses (attached to the eth0 NIC) and the other for private virtual LANs (NIC eth1), should have two bridges:
$ brctl show
bridge name bridge id STP enabled interfaces
br0 8000.001e682f02ac no eth0
br1 8000.001e682f02ad no eth1
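As a quick, non-persistent sketch of creating such a bridge by hand (the change is lost on reboot; use your distribution's network scripts, as shown in the quickstart guides, for a permanent setup):

# brctl addbr br0
# brctl addif br0 eth0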
1.4.9 Step 9. Storage Configuration
OpenNebula uses Datastores to manage VM disk Images. There are two configuration steps needed to perform a basic set up:
- First, you need to configure the system datastore to hold images for the running VMs; check the System Datastore Guide for more details.
- Then you have to set up one or more datastores for the disk images of the VMs; you can find more information on setting up Filesystem Datastores here.
The suggested configuration is to use a shared FS, which enables most of the OpenNebula VM control features. OpenNebula can work without a shared FS, but this will force the deployment to always clone the images, and you will only be able to do cold migrations.
The simplest way to achieve a shared FS backend for OpenNebula datastores is to export both the system (/var/lib/one/datastores/0) and the image (/var/lib/one/datastores/1) datastores via NFS from the OpenNebula front-end. They need to be mounted by all the virtualization nodes to be added into the OpenNebula cloud.
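A minimal sketch of the corresponding /etc/exports entry on the front-end, assuming the nodes live in the example network 192.168.1.0/24:

/var/lib/one/datastores 192.168.1.0/255.255.255.0(rw,sync,no_subtree_check,root_squash)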
1.4.10 Step 10. Adding a Node to the OpenNebula Cloud
To add a node to the cloud, four parameters are needed: the name/IP of the host, and the virtualization, network and information drivers. Using the recommended configuration above, and assuming a KVM hypervisor, you can add your host node01 to OpenNebula in the following fashion (as oneadmin, in the front-end):
$ onehost create node01 -i kvm -v kvm -n dummy
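After a couple of monitoring cycles the host should be reported as on; an illustrative (approximate, made-up) listing:

$ onehost list
  ID NAME       CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   0 node01     -           0       0 / 400 (0%)     0K / 7.4G (0%) on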
To learn more about the host subsystem, read this guide.
1.4.11 Step 11. Next steps
Now that you have a fully functional cloud, it is time to start learning how to use it. A good starting point is this
overview of the virtual resource management.
1.5 Glossary
1.5.1 OpenNebula Components
Front-end: Machine running the OpenNebula services.
Host: Physical machine running a supported hypervisor. See the Host subsystem.
Cluster: Pool of hosts that share datastores and virtual networks. Clusters are used for load balancing, high
availability, and high performance computing.
Image Repository: Storage for registered Images. Learn more about the Storage subsystem.
Sunstone: OpenNebula web interface. Learn more about Sunstone
OCCI Service: Server that enables the management of OpenNebula with OCCI interface. Learn more about
OCCI Service
Self-Service: OpenNebula web interface aimed at the end user. It is implemented by configuring a user view of the Sunstone Portal.
EC2 Service: Server that enables the management of OpenNebula with EC2 interface. Learn more about EC2
Service
OCA: OpenNebula Cloud API. It is a set of libraries that ease the communication with the XML-RPC management interface. Learn more about the Ruby and Java APIs.
1.5.2 OpenNebula Resources
Template: Virtual Machine definition. These definitions are managed with the onetemplate command.
Image: Virtual Machine disk image, created and managed with the oneimage command.
Virtual Machine: Instantiated Template. A Virtual Machine represents one life-cycle, and several Virtual
Machines can be created from a single Template. Check out the VM management guide.
Virtual Network: A group of IP leases that VMs can use to automatically obtain IP addresses. See the Net-
working subsystem.
VDC: Virtual Data Center, a fully-isolated virtual infrastructure environment where a group of users, under the control of the VDC administrator, can create and manage compute, storage and networking capacity.
Zone: A group of interconnected physical hosts with hypervisors controlled by OpenNebula.
1.5.3 OpenNebula Management
ACL: Access Control List. Check the managing ACL rules guide.
oneadmin: Special administrative account. See the Users and Groups guide.
Federation: Several OpenNebula instances can be configured as zones.
CHAPTER TWO
QUICK STARTS
2.1 Quickstart: OpenNebula on CentOS 6 and KVM
The purpose of this guide is to provide users with a step-by-step guide to install OpenNebula using CentOS 6 as the operating system and KVM as the hypervisor.
After following this guide, users will have a working OpenNebula with its graphical interface (Sunstone), at least one hypervisor (host) and a running virtual machine. This is useful for setting up pilot clouds, quickly testing new features, and as a base deployment to build a larger infrastructure.
Throughout the installation there are two separate roles: Frontend and Nodes. The Frontend server will execute the OpenNebula services, and the Nodes will be used to execute virtual machines. Please note that it is possible to follow this guide with just one host combining both the Frontend and Nodes roles in a single server. However, it is recommended to execute virtual machines on hosts with virtualization extensions. To test if your host supports virtualization extensions, please run:
grep -E 'svm|vmx' /proc/cpuinfo
If you don't get any output you probably don't have virtualization extensions supported/enabled in your server.
2.1.1 Package Layout
opennebula-server: OpenNebula Daemons
opennebula: OpenNebula CLI commands
opennebula-sunstone: OpenNebula's web GUI
opennebula-ozones: OpenNebula OZones
opennebula-java: OpenNebula Java API
opennebula-node-kvm: Installs dependencies required by OpenNebula in the nodes
opennebula-gate: Send information from Virtual Machines to OpenNebula
opennebula-flow: Manage OpenNebula Services
opennebula-context: Package for OpenNebula Guests
Additionally, opennebula-common and opennebula-ruby exist, but they're intended to be used as dependencies. opennebula-occi, which is a RESTful service to manage the cloud, is included in the opennebula-sunstone package.
2.1.2 Step 1. Installation in the Frontend
Warning: Commands prefixed by # are meant to be run as root. Commands prefixed by $ must be run as oneadmin.
1.1. Install the repo
Enable the EPEL repo:
# yum install http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
Add the OpenNebula repository:
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/CentOS/6/stable/x86_64
enabled=1
gpgcheck=0
EOT
1.2. Install the required packages
A complete install of OpenNebula will have at least both the opennebula-server and opennebula-sunstone packages:
# yum install opennebula-server opennebula-sunstone
1.3. Configure and Start the services
There are two main processes that must be started, the main OpenNebula daemon: oned, and the graphical user
interface: sunstone.
Sunstone listens only on the loopback interface by default, for security reasons. To change it, edit /etc/one/sunstone-server.conf and change :host: 127.0.0.1 to :host: 0.0.0.0.
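For example, this hedged sed one-liner (assuming the stock configuration file) performs the change:

# sed -i 's/:host: 127.0.0.1/:host: 0.0.0.0/' /etc/one/sunstone-server.conf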
Now we can start the services:
# service opennebula start
# service opennebula-sunstone start
1.4. Configure NFS
Warning: Skip this section if you are using a single server for both the frontend and worker node roles.
Export /var/lib/one/ from the frontend to the worker nodes. To do so, add the following to the /etc/exports file in the frontend:
/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)
Refresh the NFS exports by doing:
# service rpcbind restart
# service nfs restart
1.5. Configure SSH Public Key
OpenNebula will need to SSH passwordlessly from any node (including the frontend) to any other node.
Add the following snippet to ~/.ssh/config as oneadmin so it doesn't prompt to add the keys to the known_hosts file:
# su - oneadmin
$ cat << EOT > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOT
$ chmod 600 ~/.ssh/config
2.1.3 Step 2. Installation in the Nodes
2.1. Install the repo
Add the OpenNebula repository:
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/CentOS/6/stable/x86_64
enabled=1
gpgcheck=0
EOT
2.2. Install the required packages
# yum install opennebula-node-kvm
Start the required services:
# service messagebus start
# service libvirtd start
2.3. Configure the Network
Warning: Backup all the les that are modied in this section before making changes to them.
You will need to have your main interface, typically eth0, connected to a bridge. The name of the bridge should be
the same in all nodes.
To do so, substitute /etc/sysconfig/network-scripts/ifcfg-eth0 with:
DEVICE=eth0
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
BRIDGE=br0
And add a new /etc/sysconfig/network-scripts/ifcfg-br0 le.
If you were using DHCP for your eth0 interface, use this template:
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
NM_CONTROLLED=no
If you were using a static IP address use this other template:
DEVICE=br0
TYPE=Bridge
IPADDR=<YOUR_IPADDRESS>
NETMASK=<YOUR_NETMASK>
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
After these changes, restart the network:
# service network restart
2.4. Configure NFS
Warning: Skip this section if you are using a single server for both the frontend and worker node roles.
Mount the datastores export. Add the following to your /etc/fstab:
192.168.1.1:/var/lib/one/ /var/lib/one/ nfs soft,intr,rsize=8192,wsize=8192,noauto
Warning: Replace 192.168.1.1 with the IP of the frontend.
Mount the NFS share:
# mount /var/lib/one/
2.1.4 Step 3. Basic Usage
Warning: All the operations in this section can be done using Sunstone instead of the command line. Point your
browser to: http://frontend:9869.
The default password for the oneadmin user can be found in ~/.one/one_auth; it is randomly generated on every installation.
All interaction with OpenNebula is done from the oneadmin account in the frontend. We will assume all the following commands are performed from that account. To log in as oneadmin, execute su - oneadmin.
3.1. Adding a Host
To start running VMs, you should first register a worker node for OpenNebula.
Issue this command for each one of your nodes. Replace localhost with your node's hostname.
$ onehost create localhost -i kvm -v kvm -n dummy
Run onehost list until it's set to on. If it fails, you probably have something wrong in your ssh configuration. Take a look at /var/log/one/oned.log.
3.2. Adding virtual resources
Once it's working, you need to create a network, an image and a virtual machine template.
To create networks, we first need to create a network template file mynetwork.one with these contents:
NAME = "private"
TYPE = FIXED
BRIDGE = br0
LEASES = [ IP=192.168.0.100 ]
LEASES = [ IP=192.168.0.101 ]
LEASES = [ IP=192.168.0.102 ]
Warning: Replace the leases with free IPs in your host's network. You can add any number of leases.
Now we can move ahead and create the resources in OpenNebula:
$ onevnet create mynetwork.one
$ oneimage create --name "CentOS-6.4_x86_64" \
--path "http://us.cloud.centos.org/i/one/c6-x86_64-20130910-1.qcow2.bz2" \
--driver qcow2 \
--datastore default
$ onetemplate create --name "CentOS-6.4" --cpu 1 --vcpu 1 --memory 512 \
--arch x86_64 --disk "CentOS-6.4_x86_64" --nic "private" --vnc \
--ssh
(The image will be downloaded from http://wiki.centos.org/Cloud/OpenNebula)
You will need to wait until the image is ready to be used. Monitor its state by running oneimage list.
In order to dynamically add SSH keys to Virtual Machines, we must add our SSH key to the user template by editing it:
Add a new line like the following to the template:
SSH_PUBLIC_KEY="ssh-dss AAAAB3NzaC1kc3MAAACBANBWTQmm4Gt..."
Substitute the value above with the output of cat ~/.ssh/id_dsa.pub.
3.3. Running a Virtual Machine
To run a Virtual Machine, you will need to instantiate a template:
$ onetemplate instantiate "CentOS-6.4" --name "My Scratch VM"
Execute onevm list and watch the virtual machine going from PENDING to PROLOG to RUNNING. If the VM fails, check the reason in the log: /var/log/one/<VM_ID>/vm.log.
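An illustrative (made-up) listing once the VM reaches the running state:

$ onevm list
    ID USER     GROUP    NAME          STAT CPU     MEM HOSTNAME  TIME
     0 oneadmin oneadmin My Scratch VM runn   0    512M localhost 00 00:05:30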
2.1.5 Further information
- Planning the Installation
- Installing the Software
- FAQs. Good for troubleshooting
- Main Documentation
2.2 Quickstart: OpenNebula on CentOS 6 and Xen
The purpose of this guide is to provide users with a step-by-step guide to install OpenNebula using CentOS 6 as the operating system and Xen as the hypervisor.
After following this guide, users will have a working OpenNebula with its graphical interface (Sunstone), at least one hypervisor (host) and a running virtual machine. This is useful for setting up pilot clouds, quickly testing new features, and as a base deployment to build a larger infrastructure.
Throughout the installation there are two separate roles: Frontend and Nodes. The Frontend server will execute the OpenNebula services, and the Nodes will be used to execute virtual machines. Please note that it is possible to follow this guide with just one host combining both the Frontend and Nodes roles in a single server. However, it is recommended to execute virtual machines on hosts with virtualization extensions. To test if your host supports virtualization extensions, please run:
grep -E 'svm|vmx' /proc/cpuinfo
If you don't get any output you probably don't have virtualization extensions supported/enabled in your server.
2.2.1 Package Layout
opennebula-server: OpenNebula Daemons
opennebula: OpenNebula CLI commands
opennebula-sunstone: OpenNebula's Sunstone web GUI
opennebula-ozones: OpenNebula's oZones web GUI
opennebula-java: OpenNebula Java API
opennebula-node-kvm: Installs dependencies required by OpenNebula in the nodes
opennebula-gate: Sends information from Virtual Machines to OpenNebula
opennebula-flow: Manages OpenNebula Services
opennebula-context: Package for OpenNebula Guests
Additionally, opennebula-common and opennebula-ruby exist, but they're intended to be used as de-
pendencies. opennebula-occi, which is a RESTful service to manage the cloud, is included in the
opennebula-sunstone package.
2.2.2 Step 1. Installation in the Frontend
Warning: Commands prefixed by # are meant to be run as root. Commands prefixed by $ must be run as
oneadmin.
1.1. Install the repo
Enable the EPEL repo:
# yum install http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
Add the OpenNebula repository:
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/CentOS/6/stable/x86_64
enabled=1
gpgcheck=0
EOT
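Optionally, confirm that yum can see the new repository before installing (a quick sanity check, nothing more):
# yum repolist enabled | grep -i opennebula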
1.2. Install the required packages
A complete install of OpenNebula will have at least both the opennebula-server and opennebula-sunstone
packages:
# yum install opennebula-server opennebula-sunstone
1.3. Configure and Start the services
There are two main processes that must be started, the main OpenNebula daemon: oned, and the graphical user
interface: sunstone.
Sunstone listens only on the loopback interface by default for security reasons. To change it, edit
/etc/one/sunstone-server.conf and change :host: 127.0.0.1 to :host: 0.0.0.0.
Now we can start the services:
# service opennebula start
# service opennebula-sunstone start
1.4. Configure NFS
Warning: Skip this section if you are using a single server for both the frontend and worker node roles.
Export /var/lib/one/ from the frontend to the worker nodes. To do so, add the following to the /etc/exports
file in the frontend:
/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)
Refresh the NFS exports by doing:
# service rpcbind restart
# service nfs restart
1.5. Configure SSH Public Key
OpenNebula will need to SSH passwordlessly from any node (including the frontend) to any other node.
Add the following snippet to ~/.ssh/config as oneadmin so it doesn't prompt to add the keys to the
known_hosts file:
# su - oneadmin
$ cat << EOT > ~/.ssh/config
Host *
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOT
$ chmod 600 ~/.ssh/config
2.2.3 Step 2. Installation in the Nodes
Warning: The process to install Xen might change in the future. Please refer to the CentOS documentation on
the Xen4 CentOS6 QuickStart if any of the following steps do not work.
2.1. Install the repo
Add the CentOS Xen repo:
# yum install centos-release-xen
Add the OpenNebula repository:
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/CentOS/6/stable/x86_64
enabled=1
gpgcheck=0
EOT
2.2. Install the required packages
# yum install opennebula-common xen
Enable the Xen kernel by doing:
# /usr/bin/grub-bootxen.sh
Disable xend since it is a deprecated interface:
# chkconfig xend off
Now you must reboot the system in order to start with a Xen kernel.
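After the reboot, a quick sanity check that the machine is actually running the Xen kernel (a sketch; xl ships with the Xen packages installed above, and Domain-0 should appear in its output):
# uname -r
# xl list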
2.3. Configure the Network
Warning: Back up all the files that are modified in this section before making changes to them.
You will need to have your main interface, typically eth0, connected to a bridge. The name of the bridge should be
the same in all nodes.
To do so, substitute /etc/sysconfig/network-scripts/ifcfg-eth0 with:
DEVICE=eth0
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
BRIDGE=br0
And add a new /etc/sysconfig/network-scripts/ifcfg-br0 file.
If you were using DHCP for your eth0 interface, use this template:
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
NM_CONTROLLED=no
If you were using a static IP address use this other template:
DEVICE=br0
TYPE=Bridge
IPADDR=<YOUR_IPADDRESS>
NETMASK=<YOUR_NETMASK>
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
After these changes, restart the network:
# service network restart
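You can verify that eth0 was correctly enslaved to the bridge (brctl belongs to the bridge-utils package, which may need to be installed separately):
# brctl show br0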
2.4. Configure NFS
Warning: Skip this section if you are using a single server for both the frontend and worker node roles.
Mount the datastores export. Add the following to your /etc/fstab:
192.168.1.1:/var/lib/one/ /var/lib/one/ nfs soft,intr,rsize=8192,wsize=8192,noauto
Warning: Replace 192.168.1.1 with the IP of the frontend.
Mount the NFS share:
# mount /var/lib/one/
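A simple check that the mount worked (the frontend export should show up as the filesystem backing the directory):
# df -h /var/lib/one/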
2.2.4 Step 3. Basic Usage
Warning: All the operations in this section can be done using Sunstone instead of the command line. Point your
browser to: http://frontend:9869.
The default password for the oneadmin user can be found in ~/.one/one_auth which is randomly generated on
every installation.
All interaction with OpenNebula is done from the oneadmin account in the frontend. We will assume all the
following commands are performed from that account. To log in as oneadmin, execute su - oneadmin.
3.1. Adding a Host
To start running VMs, you should first register a worker node for OpenNebula.
Issue this command for each one of your nodes. Replace localhost with your node's hostname.
$ onehost create localhost -i xen -v xen -n dummy
Run onehost list until its state is set to on. If it fails, you probably have something wrong in your SSH configuration.
Take a look at /var/log/one/oned.log.
3.2. Adding virtual resources
Once it's working, you need to create a network, an image and a virtual machine template.
To create networks, we first need to create a network template file, mynetwork.one, with these contents:
NAME = "private"
TYPE = FIXED
BRIDGE = br0
LEASES = [ IP=192.168.0.100 ]
LEASES = [ IP=192.168.0.101 ]
LEASES = [ IP=192.168.0.102 ]
Warning: Replace the leases with free IPs in your host's network. You can add any number of leases.
Now we can move ahead and create the resources in OpenNebula:
$ onevnet create mynetwork.one
$ oneimage create --name "CentOS-6.4_x86_64" \
--path "http://us.cloud.centos.org/i/one/c6-x86_64-20130910-1.qcow2.bz2" \
--driver qcow2 \
--datastore default
$ onetemplate create --name "CentOS-6.4" --cpu 1 --vcpu 1 --memory 512 \
--arch x86_64 --disk "CentOS-6.4_x86_64" --nic "private" --vnc \
--ssh
(The image will be downloaded from http://wiki.centos.org/Cloud/OpenNebula)
You will need to wait until the image is ready to be used. Monitor its state by running oneimage list.
We must specify the desired bootloader in the template we just created. To do so, execute the following command:
$ EDITOR=vi onetemplate update CentOS-6.4
Add a new line to the OS section of the template that specifies the bootloader:
OS=[
BOOTLOADER = "pygrub",
ARCH="x86_64" ]
In order to dynamically add SSH keys to Virtual Machines, we must add our SSH key to the user template by editing
it:
$ EDITOR=vi oneuser update oneadmin
Add a new line like the following to the template:
SSH_PUBLIC_KEY="ssh-dss AAAAB3NzaC1kc3MAAACBANBWTQmm4Gt..."
Substitute the value above with the output of cat ~/.ssh/id_dsa.pub.
3.3. Running a Virtual Machine
To run a Virtual Machine, you will need to instantiate a template:
$ onetemplate instantiate "CentOS-6.4" --name "My Scratch VM"
Execute onevm list and watch the virtual machine go from PENDING to PROLOG to RUNNING. If the VM
fails, check the reason in the log: /var/log/one/<VM_ID>/vm.log.
2.2.5 Further information
Planning the Installation
Installing the Software
FAQs. Good for troubleshooting
Main Documentation
2.3 Quickstart: OpenNebula on CentOS 6 and ESX 5.x
This guide aids in the process of quickly getting a VMware-based OpenNebula cloud up and running on CentOS. This
is useful when setting up pilot clouds, to quickly test new features and as a base deployment to build a larger
infrastructure.
2.3.1 Package Layout
opennebula-server: OpenNebula Daemons
opennebula: OpenNebula CLI commands
opennebula-sunstone: OpenNebula's Sunstone web GUI
opennebula-ozones: OpenNebula's oZones web GUI
opennebula-java: OpenNebula Java API
opennebula-node-kvm: Installs dependencies required by OpenNebula in the nodes
opennebula-gate: Sends information from Virtual Machines to OpenNebula
opennebula-flow: Manages OpenNebula Services
opennebula-context: Package for OpenNebula Guests
Additionally, opennebula-common and opennebula-ruby exist, but they're intended to be used as de-
pendencies. opennebula-occi, which is a RESTful service to manage the cloud, is included in the
opennebula-sunstone package.
2.3.2 Step 1. Infrastructure Set-up
The infrastructure needs to be set up in a similar fashion as the one depicted in the figure.
Warning: ESX version 5.0 was used to create this guide. This guide may be useful for other versions of ESX,
although the configuration (and therefore your mileage) may vary.
In this guide it is assumed that at least two physical servers are available, one to host the OpenNebula front-end and
one to be used as an ESX virtualization node (this is the one you need to configure in the following section). The figure
depicts one more ESX host, to show that the pilot cloud is ready to grow just by adding more virtualization nodes.
Front-End
Operating System: CentOS 6.4
Required extra repository: EPEL
Required packages: NFS, libvirt
$ sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-7.noarch.rpm
$ sudo yum install nfs-utils nfs-utils-lib libvirt
Virtualization node
Operating System: ESX 5.0
Warning: The ESX hosts need to be configured. To achieve this, you will need access to a Windows machine
with the Virtual Infrastructure Client (vSphere client) installed. The VI client can be downloaded from the ESX node,
by pointing a browser to its IP.
Warning: The ESX hosts need to be properly licensed, with write access to the exported API (as the Evaluation
license does). More information on valid licenses here.
2.3.3 Step 2. OpenNebula Front-end Set-up
2.1 OpenNebula installation
The first step is to install OpenNebula in the front-end. Please download OpenNebula from here, choosing the CentOS
package.
Once it is downloaded to the front-end, you need to untar it:
$ tar xvzf CentOS-6-opennebula-*.tar.gz
And then install all the needed packages:
$ sudo yum localinstall opennebula-*/*.rpm
Warning: Do not start OpenNebula at this point; some pre-configuration needs to be done. OpenNebula is not
started until section 4.3 below.
Let's install noVNC to gain access to the VMs:
$ sudo /usr/share/one/install_novnc.sh
Find out the uid and gid of oneadmin, we will need it for the next section:
$ id oneadmin
uid=499(oneadmin) gid=498(oneadmin)
In order to avoid problems, we recommend disabling SELinux for the pilot cloud front-end (sometimes it is the root
of all evil). Follow these instructions:
$ sudo vi /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo setenforce 0
$ sudo getenforce
Permissive
2.2 NFS configuration
The front-end needs to export via NFS two datastores (the system and the images datastore). This is required just
so the ESX has access to two different datastores, and this guide uses NFS exported from the front-end to achieve
this. This can be seamlessly replaced with two iSCSI-backed datastores or even two local hard disks. In any case, we
will use the vmfs drivers to manage both datastores, independently of the storage backend. See the VMFS Datastore
Guide for more details.
Let's configure the NFS server. You will need to allow incoming connections; here we will simply stop iptables (as
root):
$ sudo su - oneadmin
$ sudo vi /etc/exports
/var/lib/one/datastores/0 *(rw,sync,no_subtree_check,root_squash,anonuid=499,anongid=498)
/var/lib/one/datastores/1 *(rw,sync,no_subtree_check,root_squash,anonuid=499,anongid=498)
$ sudo service iptables stop
$ sudo service nfs start
$ sudo exportfs -a
Warning: Make sure anonuid and anongid are set to the oneadmin uid and gid.
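You can confirm that both datastore exports are active (showmount comes with the nfs-utils package installed earlier):
$ showmount -e localhost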
2.3 Networking
There must be network connectivity between the front-end and the ESX node. This can be tested with the ping command:
$ ping <esx-ip>
2.3.4 Step 3. VMware Virtualization Node Set-up
This is probably the step that involves the most work to get the pilot cloud up and running, but it is crucial to ensure its
correct functioning. The ESX that is going to be used as a worker node needs the following steps:
3.1 Creation of a oneadmin user
With the VI client connected to the ESX host, go to the local Users & Groups and add a new user as shown in
the figure (the UID is important: it needs to match the one on the front-end). Make sure that you select the
Grant shell access to this user checkbox, and write down the password you enter.
Afterwards, go to the Permissions tab and assign the Administrator Role to oneadmin (right-click → Add Permis-
sion...).
3.2 Grant ssh access
Again in the VI client, go to Configuration → Security Profile → Services → Properties (upper right). Click on the SSH
label, select the Options button, and then Start. You can set it to start and stop with the host, as seen in the picture.
Then the following needs to be done:
Connect via ssh to the OpenNebula front-end as the oneadmin user. Copy the output of the following command
to the clipboard:
$ ssh-keygen
Enter an empty passphrase
$ cat .ssh/id_rsa.pub
Connect via ssh to the ESX worker node (as oneadmin). Run the following from the front-end:
$ ssh <esx-ip>
Enter the password you set in step 3.1
$ su
# mkdir /etc/ssh/keys-oneadmin
# chmod 755 /etc/ssh/keys-oneadmin
# vi /etc/ssh/keys-oneadmin/authorized_keys
paste here the contents of oneadmin's id_rsa.pub and exit vi
# chown oneadmin /etc/ssh/keys-oneadmin/authorized_keys
# chmod 600 /etc/ssh/keys-oneadmin/authorized_keys
# chmod +s /sbin/vmkfstools /bin/vim-cmd # This is needed to create volatile disks
Now oneadmin should be able to ssh without being prompted for a password:
$ ssh <esx-ip>
3.3 Mount datastores
We now need to mount the two datastores exported by default by the OpenNebula front-end. First, you need to make
sure that the firewall will allow the NFS Client to connect to the front-end. Go to Configuration → Software →
Security Profile, and enable the NFS Client row:
Again in the VI client, go to Configuration → Storage → Add Storage (upper right). We need to add two datastores
(0 and 1). The picture shows the details for datastore 100; to add datastores 0 and 1 simply change the reference
from 100 to 0 and then 1 in the Folder and Datastore Name textboxes.
Please note that the IP of the server displayed may not correspond with your value, which has to be the IP your
front-end uses to connect to the ESX.
The paths to be used as input:
/var/lib/one/datastores/0
/var/lib/one/datastores/1
More info on datastores and the different possible configurations is available in the Datastore guides.
3.4 Configure VNC
Open an ssh connection to the ESX as root, and:
# cd /etc/vmware
# chown -R root firewall/
# chmod 7777 firewall/
# cd firewall/
# chmod 7777 service.xml
Add the following to /etc/vmware/firewall/service.xml:
# vi /etc/vmware/firewall/service.xml
Warning: The service id must be the last service id + 1. It will depend on your firewall configuration.
<!-- VNC -->
<service id="0033">
<id>VNC</id>
<rule id="0000">
<direction>outbound</direction>
<protocol>tcp</protocol>
<porttype>dst</porttype>
<port>
<begin>5800</begin>
<end>5999</end>
</port>
</rule>
<rule id="0001">
<direction>inbound</direction>
<protocol>tcp</protocol>
<porttype>dst</porttype>
<port>
<begin>5800</begin>
<end>5999</end>
</port>
</rule>
<enabled>true</enabled>
<required>false</required>
</service>
Refresh the firewall:
# /sbin/esxcli network firewall refresh
# /sbin/esxcli network firewall ruleset list
2.3.5 Step 4. OpenNebula Conguration
Let's configure OpenNebula in the front-end to allow it to use the ESX hypervisor. The following must be run under
the oneadmin account.
4.1 Configure oned and Sunstone
Edit /etc/one/oned.conf with sudo and uncomment the following:
#*******************************************************************************
# DataStore Configuration
#*******************************************************************************
# DATASTORE_LOCATION: *Default* Path for Datastores in the hosts. It IS the
# same for all the hosts in the cluster. DATASTORE_LOCATION IS ONLY FOR THE
# HOSTS AND *NOT* THE FRONT-END. It defaults to /var/lib/one/datastores (or
# $ONE_LOCATION/var/datastores in self-contained mode)
#
# DATASTORE_BASE_PATH: This is the base path for the SOURCE attribute of
# the images registered in a Datastore. This is a default value, that can be
# changed when the datastore is created.
#*******************************************************************************

DATASTORE_LOCATION = /vmfs/volumes
DATASTORE_BASE_PATH = /vmfs/volumes
#-------------------------------------------------------------------------------
# VMware Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
name = "vmware",
executable = "one_im_sh",
arguments = "-c -t 15 -r 0 vmware" ]
#-------------------------------------------------------------------------------
# VMware Virtualization Driver Manager Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
name = "vmware",
executable = "one_vmm_sh",
arguments = "-t 15 -r 0 vmware -s sh",
default = "vmm_exec/vmm_exec_vmware.conf",
type = "vmware" ]
Edit /etc/one/sunstone-server.conf with sudo and allow incoming connections from any IP:
sudo vi /etc/one/sunstone-server.conf
# Server Configuration
#
:host: 0.0.0.0
:port: 9869
4.2 Add the ESX credentials
$ sudo vi /etc/one/vmwarerc
<Add the ESX oneadmin password, set in section 3.1>
# Username and password of the VMware hypervisor
:username: "oneadmin"
:password: "password"
Warning: Do not edit :libvirt_uri:, the HOST placeholder is needed by the drivers
4.3 Start OpenNebula
Start OpenNebula and Sunstone as oneadmin
$ one start
$ sunstone-server start
If no error message is shown, then everything went smoothly!
4.4 Configure physical resources
Let's configure both the system and image datastores (onedatastore update opens an editor where the following attributes can be set):
$ onedatastore update 0
SHARED="YES"
TM_MAD="vmfs"
TYPE="SYSTEM_DS"
BASE_PATH="/vmfs/volumes"
$ onedatastore update 1
TM_MAD="vmfs"
DS_MAD="vmfs"
BASE_PATH="/vmfs/volumes"
CLONE_TARGET="SYSTEM"
DISK_TYPE="FILE"
LN_TARGET="NONE"
TYPE="IMAGE_DS"
BRIDGE_LIST="esx-ip"
$ onedatastore chmod 1 644
And the ESX Host:
$ onehost create <esx-ip> -i vmware -v vmware -n dummy
4.5 Create a regular cloud user
$ oneuser create oneuser <mypassword>
2.3.6 Step 5. Using the Cloud through Sunstone
Ok, so now that everything is in place, let's start using your brand new OpenNebula cloud! Use your browser to access
Sunstone. The URL would be http://@IP-of-the-front-end@:9869
Once you introduce the credentials for the oneuser user (with the password chosen in the previous section) you will
see the Sunstone dashboard. You can also log in as oneadmin; you will notice access to more functionality
(basically, the administration and physical infrastructure management tasks).
It is time to launch our first VM. Let's use one of the pre-created appliances found in the marketplace.
Log in as oneuser, go to the Marketplace tab in Sunstone (in the left menu), and select the ttylinux-VMware row.
Click on the Import to local infrastructure button in the upper right, give the new image a name (use ttylinux -
VMware) and place it in the VMwareImages datastore. If you go to the Virtual Resources/Images tab, you will see
that the new Image will eventually change its status from LOCKED to READY.
Now we need to create a template that uses this image. Go to the Virtual Resources/Templates tab, click on +Create
and follow the wizard, or use the Advanced mode tab of the wizard to paste the following:
NAME = "ttylinux"
CPU = "1"
MEMORY = "512"
DISK = [
IMAGE = "ttylinux - VMware",
IMAGE_UNAME = "oneuser"
]
GRAPHICS = [
TYPE = "vnc",
LISTEN = "0.0.0.0"
]
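If you prefer the command line, the same template can be registered by saving the text above to a file (ttylinux.tpl is just an illustrative name) and running:
$ onetemplate create ttylinux.tpl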
Select the newly created template and click on the Instantiate button. You can now proceed to the Virtual Machines
tab. Once the VM is in the RUNNING state you can click on the VNC icon and you should see the ttylinux login
(root/password).
Please note that the minimal ttylinux VM does not come with the VMware Tools and cannot be gracefully shut down.
Use the Cancel action instead.
And that's it! You now have a fully functional pilot cloud. You can create your own virtual machines, or import
other appliances from the marketplace, like CentOS 6.2.
Enjoy!
2.3.7 Step 6. Next Steps
Follow the VMware Virtualization Driver Guide for the complete installation and tuning reference, including how to
enable the disk attach/detach functionality and vMotion live migration.
OpenNebula can use VMware native networks to provide network isolation through VLAN tagging.
Warning: Did we miss something? Please let us know!
2.4 Quickstart: OpenNebula on Ubuntu 12.04 and KVM
The purpose of this guide is to provide users with a step-by-step guide to install OpenNebula using Ubuntu 12.04 as the
operating system and KVM as the hypervisor.
After following this guide, users will have a working OpenNebula installation with a graphical interface (Sunstone), at least one
hypervisor (host) and a running virtual machine. This is useful when setting up pilot clouds, to quickly test
new features and as a base deployment to build a larger infrastructure.
Throughout the installation there are two separate roles: Frontend and Nodes. The Frontend server will execute
the OpenNebula services, and the Nodes will be used to execute virtual machines. Please note that it is possible
to follow this guide with just one host combining both the Frontend and Nodes roles in a single server. However,
it is recommended to execute virtual machines on hosts with virtualization extensions. To test if your host supports
virtualization extensions, please run:
grep -E 'svm|vmx' /proc/cpuinfo
If you don't get any output, you probably don't have virtualization extensions supported/enabled in your server.
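On Ubuntu you can also use kvm-ok from the cpu-checker package for a friendlier check (an optional extra, not required by this guide):
$ sudo apt-get install cpu-checker
$ sudo kvm-ok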
2.4.1 Package Layout
opennebula-common: Provides the user and common files
libopennebula-ruby: All Ruby libraries
opennebula-node: Prepares a node as an opennebula-node
opennebula-sunstone: OpenNebula Sunstone Web Interface
opennebula-tools: Command Line Interface
opennebula-gate: Gate server that enables communication between VMs and OpenNebula
opennebula-flow: Manages services and elasticity
opennebula: OpenNebula Daemon
2.4.2 Step 1. Installation in the Frontend
Warning: Commands prefixed by # are meant to be run as root. Commands prefixed by $ must be run as
oneadmin.
1.1. Install the repo
Add the OpenNebula repository:
# wget -q -O- http://downloads.opennebula.org/repo/Ubuntu/repo.key | apt-key add -
# echo "deb http://downloads.opennebula.org/repo/Ubuntu/12.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list
1.2. Install the required packages
# apt-get update
# apt-get install opennebula opennebula-sunstone nfs-kernel-server
1.3. Configure and Start the services
There are two main processes that must be started, the main OpenNebula daemon: oned, and the graphical user
interface: sunstone.
Sunstone listens only on the loopback interface by default for security reasons. To change it, edit
/etc/one/sunstone-server.conf and change :host: 127.0.0.1 to :host: 0.0.0.0.
Now we must restart Sunstone:
# /etc/init.d/opennebula-sunstone restart
1.4. Configure NFS
Warning: Skip this section if you are using a single server for both the frontend and worker node roles.
Export /var/lib/one/ from the frontend to the worker nodes. To do so, add the following to the /etc/exports
file in the frontend:
/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)
Refresh the NFS exports by doing:
# service nfs-kernel-server restart
1.5. Configure SSH Public Key
OpenNebula will need to SSH passwordlessly from any node (including the frontend) to any other node.
To do so run the following commands:
# su - oneadmin
$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Add the following snippet to ~/.ssh/config so it doesn't prompt to add the keys to the known_hosts file:
$ cat << EOT > ~/.ssh/config
Host *
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOT
$ chmod 600 ~/.ssh/config
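As a quick test, the following should return the hostname without asking for a password when run as oneadmin:
$ ssh localhost hostname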
2.4.3 Step 2. Installation in the Nodes
2.1. Install the repo
Add the OpenNebula repository:
# wget -q -O- http://downloads.opennebula.org/repo/Ubuntu/repo.key | apt-key add -
# echo "deb http://downloads.opennebula.org/repo/Ubuntu/12.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list
2.2. Install the required packages
# apt-get update
# apt-get install opennebula-node nfs-common bridge-utils
2.3. Configure the Network
Warning: Back up all the files that are modified in this section before making changes to them.
You will need to have your main interface, typically eth0, connected to a bridge. The name of the bridge should be
the same in all nodes.
If you were using DHCP for your eth0 interface, replace /etc/network/interfaces with:
auto lo
iface lo inet loopback
auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
If you were using a static IP address instead, use this other template:
auto lo
iface lo inet loopback
auto br0
iface br0 inet static
address 192.168.0.10
network 192.168.0.0
netmask 255.255.255.0
broadcast 192.168.0.255
gateway 192.168.0.1
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
After these changes, restart the network:
# /etc/init.d/networking restart
2.4. Configure NFS
Warning: Skip this section if you are using a single server for both the frontend and worker node roles.
Mount the datastores export. Add the following to your /etc/fstab:
192.168.1.1:/var/lib/one/ /var/lib/one/ nfs soft,intr,rsize=8192,wsize=8192,noauto
Warning: Replace 192.168.1.1 with the IP of the frontend.
Mount the NFS share:
# mount /var/lib/one/
2.5. Configure Qemu
The oneadmin user must be able to manage libvirt as root:
# cat << EOT > /etc/libvirt/qemu.conf
user = "oneadmin"
group = "oneadmin"
dynamic_ownership = 0
EOT
Restart libvirt to capture these changes:
# service libvirt-bin restart
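As a sanity check, oneadmin should now be able to talk to libvirt (this assumes the opennebula-node package added oneadmin to the libvirtd group, which it normally does; an empty domain list with no error is the expected result):
$ virsh -c qemu:///system list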
2.4.4 Step 3. Basic Usage
Warning: All the operations in this section can be done using Sunstone instead of the command line. Point your
browser to: http://frontend:9869.
The default password for the oneadmin user can be found in ~/.one/one_auth which is randomly generated on
every installation.
All interaction with OpenNebula is done from the oneadmin account in the frontend. We will assume all the
following commands are performed from that account. To log in as oneadmin, execute su - oneadmin.
3.1. Adding a Host
To start running VMs, you should first register a worker node for OpenNebula.
Issue this command for each one of your nodes. Replace localhost with your node's hostname.
$ onehost create localhost -i kvm -v kvm -n dummy
Run onehost list until its state is set to on. If it fails, you probably have something wrong in your SSH configuration.
Take a look at /var/log/one/oned.log.
3.2. Adding virtual resources
Once it's working, you need to create a network, an image and a virtual machine template.
To create networks, we first need to create a network template file, mynetwork.one, with these contents:
NAME = "private"
TYPE = FIXED
BRIDGE = br0
LEASES = [ IP=192.168.0.100 ]
LEASES = [ IP=192.168.0.101 ]
LEASES = [ IP=192.168.0.102 ]
Warning: Replace the leases with free IPs in your host's network. You can add any number of leases.
Now we can move ahead and create the resources in OpenNebula:
$ onevnet create mynetwork.one
$ oneimage create --name "CentOS-6.4_x86_64" \
--path "http://us.cloud.centos.org/i/one/c6-x86_64-20130910-1.qcow2.bz2" \
--driver qcow2 \
--datastore default
$ onetemplate create --name "CentOS-6.4" --cpu 1 --vcpu 1 --memory 512 \
--arch x86_64 --disk "CentOS-6.4_x86_64" --nic "private" --vnc \
--ssh
(The image will be downloaded from http://wiki.centos.org/Cloud/OpenNebula)
You will need to wait until the image is ready to be used. Monitor its state by running oneimage list.
In order to dynamically add SSH keys to Virtual Machines, we must add our SSH key to the user template by editing
it:
$ EDITOR=vi oneuser update oneadmin
Add a new line like the following to the template:
SSH_PUBLIC_KEY="ssh-dss AAAAB3NzaC1kc3MAAACBANBWTQmm4Gt..."
Substitute the value above with the output of cat ~/.ssh/id_dsa.pub.
3.3. Running a Virtual Machine
To run a Virtual Machine, you will need to instantiate a template:
$ onetemplate instantiate "CentOS-6.4" --name "My Scratch VM"
Execute onevm list and watch the virtual machine go from PENDING to PROLOG to RUNNING. If the VM
fails, check the reason in the log: /var/log/one/<VM_ID>/vm.log.
2.4.5 Further information
Planning the Installation
Installing the Software
FAQs. Good for troubleshooting
Main Documentation
2.5 Quickstart: Create Your First vDC
This guide will provide a quick example of how to partition your cloud for a vDC. In short, a vDC is a group of users
with part of the physical resources assigned to them. The Understanding OpenNebula guide explains the OpenNebula
provisioning model in detail.
2.5.1 Step 1. Create a Cluster
We will first create a cluster, web-dev, where we can group hosts, datastores and virtual networks for the new vDC.
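A sketch of the onecluster commands that produce the grouping shown in the listings below (host, datastore and network names match the example):
$ onecluster create web-dev
$ onecluster addhost web-dev host01
$ onecluster addhost web-dev host02
$ onecluster adddatastore web-dev system
$ onecluster adddatastore web-dev default
$ onecluster addvnet web-dev private
After this, the resource listings reflect the new cluster: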
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
0 host01 web-dev 0 0 / 200 (0%) 0K / 7.5G (0%) on
1 host02 web-dev 0 0 / 200 (0%) 0K / 7.5G (0%) on
2 host03 - 0 0 / 200 (0%) 0K / 7.5G (0%) on
3 host04 - 0 0 / 200 (0%) 0K / 7.5G (0%) on
$ onedatastore list
ID NAME SIZE AVAIL CLUSTER IMAGES TYPE DS TM
0 system 113.3G 25% web-dev 0 sys - shared
1 default 113.3G 25% web-dev 1 img fs shared
2 files 113.3G 25% - 0 fil fs ssh
$ onevnet list
ID USER GROUP NAME CLUSTER TYPE BRIDGE LEASES
0 oneadmin oneadmin private web-dev R virbr0 0
2.5.2 Step 2. Create a vDC Group
We can now create the new group, also named web-dev. This group, or vDC, will have a special admin user,
web-dev-admin.
$ onegroup create --name web-dev --admin_user web-dev-admin --admin_password abcd
ID: 100
$ onegroup add_provider 100 0 web-dev
2.5.3 Step 3. Optionally, Set Quotas
The cloud administrator can set usage quotas for the vDC. In this case, we will put a limit of 10 VMs.
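One way to set this from the CLI (a sketch: onegroup quota opens an editor, like the other update commands in this guide, where the VM quota below can be added):
$ EDITOR=vi onegroup quota web-dev
VM = [
  VMS = 10
]
The group now shows the limit: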
$ onegroup show web-dev
GROUP 100 INFORMATION
ID : 100
NAME : web-dev
GROUP TEMPLATE
GROUP_ADMINS="web-dev-admin"
GROUP_ADMIN_VIEWS="vdcadmin"
SUNSTONE_VIEWS="cloud"
USERS
ID
2
RESOURCE PROVIDERS
ZONE CLUSTER
0 100
RESOURCE USAGE & QUOTAS
NUMBER OF VMS MEMORY CPU VOLATILE_SIZE
0 / 10 0M / 0M 0.00 / 0.00 0M / 0M
2.5.4 Step 4. Prepare Virtual Resources for the Users
At this point, the cloud administrator can also prepare working Templates and Images for the vDC users.
$ onetemplate chgrp ubuntu web-dev
2.5.5 Reference for End Users
The vDC admin uses an interface similar to the cloud administrator's, but without any information about the physical
infrastructure. He will be able to create new users inside the vDC, monitor their resources, and create new Templates
for them. The vDC admin can also decide to configure quota limits for each user.
Refer your vDC admin user to the vDC Admin View Guide.
End users access OpenNebula through a simplified view, where they can launch their own VMs from the Templates
prepared by the administrator. Users can also save the changes they make to their machines. This view is self-
explanatory; you can read more about it in the Cloud View Guide.
OpenNebula 4.6 Administration Guide
Release 4.6
OpenNebula Project
April 28, 2014
CONTENTS
1 Hosts and Clusters 1
1.1 Hosts & Clusters Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Managing Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Managing Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Storage 17
2.1 Storage Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 The System Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 The Filesystem Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4 The VMFS Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.5 LVM Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.6 The FS LVM Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7 The Block LVM Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.8 The Ceph Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.9 The GlusterFS Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.10 The Kernels & Files Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3 Virtualization 49
3.1 Virtualization Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.2 Xen Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3 KVM Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.4 VMware Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4 Networking 67
4.1 Networking Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2 802.1Q VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.3 Ebtables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4 Open vSwitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.5 VMware Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.6 Configuring Firewalls for VMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.7 Virtual Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5 Monitoring 83
5.1 Monitoring Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.2 KVM and Xen SSH-pull Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.3 KVM and Xen UDP-push Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.4 VMware VI API-pull Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6 Users and Groups 93
6.1 Users & Groups Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.2 Managing Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.3 Managing Groups & vDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.4 Managing Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.5 Accounting Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.6 Managing ACL Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.7 Managing Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7 Authentication 121
7.1 External Auth Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.2 SSH Auth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7.3 x509 Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
7.4 LDAP Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
8 Sunstone GUI 131
8.1 OpenNebula Sunstone: The Cloud Operations Center . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2 Sunstone Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
8.3 Self-service Cloud View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.4 vDC Admin View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.5 User Security and Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8.6 Cloud Servers Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9 Other Subsystems 163
9.1 MySQL Backend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
10 References 167
10.1 ONED Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
10.2 Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.3 Logging & Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
10.4 Onedb Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.5 Datastore configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
CHAPTER
ONE
HOSTS AND CLUSTERS
1.1 Hosts & Clusters Overview
A Host is a server that has the ability to run Virtual Machines and that is connected to OpenNebula's Frontend
server. OpenNebula can work with Hosts with a heterogeneous configuration, i.e. you can connect Hosts to the same
OpenNebula with different hypervisors or Linux distributions, as long as these requirements are fulfilled:
Every Host needs to have a oneadmin account.
OpenNebula's Frontend and all the Hosts need to be able to resolve, either by DNS or by /etc/hosts, the
names of all the other Hosts and the Frontend.
The oneadmin account in any Host or the Frontend should be able to ssh passwordlessly to any other Host or
Frontend. This is achieved either by sharing the $HOME of oneadmin across all the servers with NFS or by
manually copying the ~/.ssh directory.
It needs to have a hypervisor supported by OpenNebula installed and properly configured. The correct way to
achieve this is to follow the specific guide for each hypervisor.
ruby >= 1.8.7
Clusters are pools of hosts that share datastores and virtual networks. Clusters are used for load balancing, high
availability, and high performance computing.
1.1.1 Overview of Components
There are three components regarding Hosts:
Host Management: Host management is achieved through the onehost CLI command or through the Sun-
stone GUI. You can read about Host Management in more detail in the Managing Hosts guide.
Host Monitoring: In order to keep track of the available resources in the Hosts, OpenNebula launches a
Host Monitoring driver, called IM (Information Driver), which gathers all the required information and submits
it to the Core. The default IM driver executes ssh commands in the host, but other mechanisms are possible.
There is further information on this topic in the Monitoring Subsystem guide.
Cluster Management: Hosts can be grouped in Clusters. These Clusters are managed with the onecluster
CLI command, or through the Sunstone GUI. You can read about Cluster Management in more detail in the
Managing Clusters guide.
1.2 Managing Hosts
In order to use your existing physical nodes, you have to add them to the system as OpenNebula hosts. You need the
following information:
Hostname of the host or IP
Information Driver to be used to monitor the host, e.g. kvm. These should match the Virtualization Drivers
installed; more info about them can be found in the Virtualization Subsystem guide.
Virtualization Driver to boot, stop, resume or migrate VMs in the host, e.g. kvm. Information about these
drivers can be found in their guide.
Networking Driver to isolate virtual networks and apply firewalling rules, e.g. 802.1Q. Information about
these drivers can be found in their guide.
Cluster where to place this host. The Cluster assignment is optional, you can read more about it in the Managing
Clusters guide.
Warning: Before adding a host, check that you can ssh to it without being prompted for a password.
1.2.1 onehost Command
The following sections show the basics of the onehost command with simple usage examples. A complete reference
for these commands can be found here.
This command enables Host management. Actions offered are:
create: Creates a new Host
delete: Deletes the given Host
enable: Enables the given Host
disable: Disables the given Host
update: Update the template contents.
sync: Synchronizes probes in all the hosts.
list: Lists Hosts in the pool
show: Shows information for the given Host
top: Lists Hosts continuously
flush: Disables the host and reschedules all the running VMs in it.
Create and Delete
Hosts, also known as physical nodes, are the servers managed by OpenNebula that are responsible for Virtual Machine
execution. To use these hosts in OpenNebula you need to register them so they are monitored and well-known to the
scheduler.
Creating a host:
$ onehost create host01 --im dummy --vm dummy --net dummy
ID: 0
The parameters are:
--im/-i: Information Manager driver. Valid options: kvm, xen, vmware, ec2, ganglia, dummy.
--vm/-v: Virtual Machine Manager driver. Valid options: kvm, xen, vmware, ec2, dummy.
--net/-n: Network manager driver. Valid options: 802.1Q,dummy,ebtables,fw,ovswitch,vmware.
To remove a host, just like with other OpenNebula commands, you can either specify it by ID or by name. The
following commands are equivalent:
$ onehost delete host01
$ onehost delete 0
Show, List and Top
To display information about a single host the show command is used:
$ onehost show 0
HOST 0 INFORMATION
ID : 0
NAME : host01
CLUSTER : -
STATE : MONITORED
IM_MAD : dummy
VM_MAD : dummy
VN_MAD : dummy
LAST MONITORING TIME : 07/06 17:40:41
HOST SHARES
TOTAL MEM : 16G
USED MEM (REAL) : 857.9M
USED MEM (ALLOCATED) : 0K
TOTAL CPU : 800
USED CPU (REAL) : 299
USED CPU (ALLOCATED) : 0
RUNNING VMS : 0
MONITORING INFORMATION
CPUSPEED="2.2GHz"
FREECPU="501"
FREEMEMORY="15898723"
HOSTNAME="host01"
HYPERVISOR="dummy"
TOTALCPU="800"
TOTALMEMORY="16777216"
USEDCPU="299"
USEDMEMORY="878493"
We can instead display this information in XML format with the -x parameter:
$ onehost show -x 0
<HOST>
<ID>0</ID>
<NAME>host01</NAME>
<STATE>2</STATE>
<IM_MAD>dummy</IM_MAD>
<VM_MAD>dummy</VM_MAD>
<VN_MAD>dummy</VN_MAD>
<LAST_MON_TIME>1341589306</LAST_MON_TIME>
<CLUSTER_ID>-1</CLUSTER_ID>
<CLUSTER/>
<HOST_SHARE>
<DISK_USAGE>0</DISK_USAGE>
<MEM_USAGE>0</MEM_USAGE>
<CPU_USAGE>0</CPU_USAGE>
<MAX_DISK>0</MAX_DISK>
<MAX_MEM>16777216</MAX_MEM>
<MAX_CPU>800</MAX_CPU>
<FREE_DISK>0</FREE_DISK>
<FREE_MEM>12852921</FREE_MEM>
<FREE_CPU>735</FREE_CPU>
<USED_DISK>0</USED_DISK>
<USED_MEM>3924295</USED_MEM>
<USED_CPU>65</USED_CPU>
<RUNNING_VMS>0</RUNNING_VMS>
</HOST_SHARE>
<TEMPLATE>
<CPUSPEED><![CDATA[2.2GHz]]></CPUSPEED>
<FREECPU><![CDATA[735]]></FREECPU>
<FREEMEMORY><![CDATA[12852921]]></FREEMEMORY>
<HOSTNAME><![CDATA[host01]]></HOSTNAME>
<HYPERVISOR><![CDATA[dummy]]></HYPERVISOR>
<TOTALCPU><![CDATA[800]]></TOTALCPU>
<TOTALMEMORY><![CDATA[16777216]]></TOTALMEMORY>
<USEDCPU><![CDATA[65]]></USEDCPU>
<USEDMEMORY><![CDATA[3924295]]></USEDMEMORY>
</TEMPLATE>
</HOST>
To see a list of all the hosts:
$ onehost list
ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM AMEM STAT
0 host01 - 0 800 198 800 16G 10.9G 16G on
1 host02 - 0 800 677 800 16G 3.7G 16G on
It can also be displayed in XML format using -x:
$ onehost list -x
<HOST_POOL>
<HOST>
...
</HOST>
...
</HOST_POOL>
The top command is similar to the list command, except that the output is refreshed until the user presses CTRL-C.
Enable, Disable and Flush
The disable command disables a host, which means that no further monitoring is performed on this host and no
Virtual Machines are deployed on it. It won't, however, affect the VMs already running on the host.
$ onehost disable 0
To re-enable the host use the enable command:
$ onehost enable 0
The flush command will mark all the running VMs in the specified host as to be rescheduled, which means that they
will be migrated to another server with enough capacity. At the same time, the specified host will be disabled, so no
more Virtual Machines are deployed on it. This command is useful to clean a host of running VMs.
$ onehost list
ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM AMEM STAT
0 host01 - 3 800 96 500 16G 11.1G 14.5G on
1 host02 - 0 800 640 800 16G 8.5G 16G on
2 host03 - 3 800 721 500 16G 8.6G 14.5G on
$ onevm list
ID USER GROUP NAME STAT UCPU UMEM HOST TIME
0 oneadmin oneadmin vm01 runn 54 102.4M host03 0d 00h01
1 oneadmin oneadmin vm02 runn 91 276.5M host02 0d 00h01
2 oneadmin oneadmin vm03 runn 13 174.1M host01 0d 00h01
3 oneadmin oneadmin vm04 runn 72 204.8M host03 0d 00h00
4 oneadmin oneadmin vm05 runn 49 112.6M host02 0d 00h00
5 oneadmin oneadmin vm06 runn 87 414.7M host01 0d 00h00
$ onehost flush host02
$ onehost list
ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM AMEM STAT
0 host01 - 3 800 264 500 16G 3.5G 14.5G on
1 host02 - 0 800 153 800 16G 3.7G 16G off
2 host03 - 3 800 645 500 16G 10.3G 14.5G on
$ onevm list
ID USER GROUP NAME STAT UCPU UMEM HOST TIME
0 oneadmin oneadmin vm01 runn 95 179.2M host03 0d 00h01
1 oneadmin oneadmin vm02 runn 27 261.1M host03 0d 00h01
2 oneadmin oneadmin vm03 runn 70 343M host01 0d 00h01
3 oneadmin oneadmin vm04 runn 9 133.1M host03 0d 00h01
4 oneadmin oneadmin vm05 runn 87 281.6M host01 0d 00h01
5 oneadmin oneadmin vm06 runn 61 291.8M host01 0d 00h01
Update
It's sometimes useful to store information in the host's template. To do so, the update command is used.
An example use case is to add the following line to the hosts template:
TYPE="production"
Which can be used at a later time for scheduling purposes by adding the following section in a VM template:
SCHED_REQUIREMENTS="TYPE=\"production\""
That will restrict the Virtual Machine to be deployed in TYPE=production hosts.
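A RANK expression can be combined with such a requirement, for instance to prefer the least loaded of the production hosts (a sketch using the FREE_CPU monitoring variable described in the next section; SCHED_RANK is the VM-template form of RANK):
SCHED_REQUIREMENTS="TYPE=\"production\""
SCHED_RANK="FREE_CPU"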
Sync
When OpenNebula monitors a host, it copies a certain amount of files to /var/tmp/one. When the administrator
changes these files, they can be copied again to the hosts with the sync command. When executed, this command will
copy the probes to the nodes and will return the prompt after it has finished, telling which nodes it could not update.
To keep track of the probes version there's a file in /var/lib/one/remotes/VERSION. By default this
holds the OpenNebula version (e.g. 4.4.0). This version can be seen in the hosts with onehost show <host>:
$ onehost show 0
HOST 0 INFORMATION
ID : 0
[...]
MONITORING INFORMATION
VERSION="4.4.0"
[...]
The command onehost sync only updates the hosts with a VERSION lower than the one in the file
/var/lib/one/remotes/VERSION. In case you modify the probes, this VERSION file should be updated with
a greater value, for example 4.4.0.01.
In case you want to force the upgrade, that is, skip the VERSION check, you can do so by adding the --force option:
$ onehost sync --force
You can also select which hosts you want to upgrade naming them or selecting a cluster:
$ onehost sync host01,host02,host03
$ onehost sync -c myCluster
The onehost sync command can alternatively use rsync as the method of upgrade. To do this you need to have the
rsync command installed in the frontend and the nodes. This method is faster than the standard one and also has the
benefit of deleting remote files that no longer exist in the frontend. To use it add the --rsync parameter:
$ onehost sync --rsync
1.2.2 Host Information
Hosts include the following monitoring information. You can use these variables to create custom RANK and
REQUIREMENTS expressions for scheduling. Note also that you can manually add any tag and use it for RANK
and REQUIREMENTS as well.
HYPERVISOR: Name of the hypervisor of the host, useful for selecting the hosts with a specific technology.
ARCH: Architecture of the host CPUs, e.g. x86_64.
MODELNAME: Model name of the host CPU, e.g. Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz.
CPUSPEED: Speed in MHz of the CPUs.
HOSTNAME: As returned by the hostname command.
VERSION: The version of the monitoring probes. Used to control local changes and the update process.
MAX_CPU: Number of CPUs multiplied by 100. For example, a 16-core machine will have a value of 1600. The value of RESERVED_CPU will be subtracted from the information reported by the monitoring system. This value is displayed as TOTAL CPU by the onehost show command under the HOST SHARE section.
MAX_MEM: Maximum memory that could be used for VMs. It is advised to take out the memory used by the hypervisor using RESERVED_MEM; this value is subtracted from the memory amount reported. Displayed as TOTAL MEM by the onehost show command under the HOST SHARE section.
MAX_DISK: Total space in megabytes in the DATASTORE LOCATION.
USED_CPU: Percentage of used CPU multiplied by the number of cores. Displayed as USED CPU (REAL) by the onehost show command under the HOST SHARE section.
USED_MEM: Memory used, in kilobytes. Displayed as USED MEM (REAL) by the onehost show command under the HOST SHARE section.
USED_DISK: Used space in megabytes in the DATASTORE LOCATION.
FREE_CPU: Percentage of idling CPU multiplied by the number of cores. For example, if 50% of the CPU is idling in a 4-core machine the value will be 200.
FREE_MEM: Available memory for VMs at that moment, in kilobytes.
FREE_DISK: Free space in megabytes in the DATASTORE LOCATION.
CPU_USAGE: Total CPU allocated to VMs running on the host as requested in CPU in each VM template. Displayed as USED CPU (ALLOCATED) by the onehost show command under the HOST SHARE section.
MEM_USAGE: Total MEM allocated to VMs running on the host as requested in MEMORY in each VM template. Displayed as USED MEM (ALLOCATED) by the onehost show command under the HOST SHARE section.
DISK_USAGE: Total size allocated to disk images of VMs running on the host, computed using the SIZE attribute of each image and considering the datastore characteristics.
NETRX: Received bytes from the network.
NETTX: Transferred bytes to the network.
1.2.3 Host Life-cycle
Short
state
State Meaning
init INIT Initial state for enabled hosts.
update MONITORING_MONITORED Monitoring a healthy Host.
on MONITORED The host has been successfully monitored.
err ERROR An error occurred while monitoring the host. See the Host information with
onehost show for an error message.
off DISABLED The host is disabled, and wont be monitored. The scheduler ignores Hosts in
this state.
retry MONITORING_ERRORMonitoring a host in error state.
1.2.4 Scheduler Policies
You can define global Scheduler Policies for all VMs in the sched.conf file; follow the Scheduler Guide for more in-
formation. Additionally, users can require their virtual machines to be deployed in a host that meets certain constraints.
These constraints can be defined using any attribute reported by onehost show, like the architecture (ARCH).
The attributes and values for a host are inserted by the monitoring probes that run from time to time on the nodes to
get information. The administrator can add custom attributes either by creating a probe in the host, or by updating the host
information with onehost update <HOST_ID>. Calling this command will fire up an editor (the one specified
in the EDITOR environment variable) and you will be able to add, delete or modify some of those values.
$ onehost show 3
[...]
MONITORING INFORMATION
CPUSPEED=2.2GHz
FREECPU=800
FREEMEMORY=16777216
HOSTNAME=ursa06
HYPERVISOR=dummy
TOTALCPU=800
TOTALMEMORY=16777216
USEDCPU=0
USEDMEMORY=0
$ onehost update 3
[in editor, add CUSTOM_ATTRIBUTE=VALUE]
$ onehost show 3
[...]
MONITORING INFORMATION
CPUSPEED=2.2GHz
FREECPU=800
FREEMEMORY=16777216
HOSTNAME=ursa06
HYPERVISOR=dummy
TOTALCPU=800
TOTALMEMORY=16777216
USEDCPU=0
USEDMEMORY=0
CUSTOM_ATTRIBUTE=VALUE
This feature is useful when we want to separate a series of hosts or mark some special features of different hosts.
These values can then be used for scheduling just like the ones added by the monitoring probes, as a placement
requirement:
SCHED_REQUIREMENTS = "CUSTOM_ATTRIBUTE = \"SOME_VALUE\""
1.2.5 A Sample Session
Hosts can be added to the system anytime with the onehost command. You can add the hosts to be used by
OpenNebula like this:
$ onehost create host01 --im kvm --vm kvm --net dummy
$ onehost create host02 --im kvm --vm kvm --net dummy
The status of the hosts can be checked with the onehost list command:
$ onehost list
ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM AMEM STAT
0 host01 - 7 400 290 400 3.7G 2.2G 3.7G on
1 host02 - 2 400 294 400 3.7G 2.2G 3.7G on
2 host03 - 0 400 312 400 3.7G 2.2G 3.7G off
And specific information about a host with show:
$ onehost show host01
HOST 0 INFORMATION
ID : 0
NAME : host01
CLUSTER : -
STATE : MONITORED
IM_MAD : kvm
VM_MAD : kvm
VN_MAD : dummy
LAST MONITORING TIME : 1332756227
HOST SHARES
MAX MEM : 3921416
USED MEM (REAL) : 1596540
USED MEM (ALLOCATED) : 0
MAX CPU : 400
USED CPU (REAL) : 74
USED CPU (ALLOCATED) : 0
RUNNING VMS : 7
MONITORING INFORMATION
ARCH=x86_64
CPUSPEED=2393
FREECPU=326.0
FREEMEMORY=2324876
HOSTNAME=rama
HYPERVISOR=kvm
MODELNAME="Intel(R) Core(TM) i5 CPU M 450 @ 2.40GHz"
NETRX=0
NETTX=0
TOTALCPU=400
TOTALMEMORY=3921416
USEDCPU=74.0
USEDMEMORY=1596540
If you do not want to use a given host, you can temporarily disable it:
$ onehost disable host01
A disabled host should be listed with STAT off by onehost list. You can also remove a host permanently with:
$ onehost delete host01
Warning: Detailed information about the onehost utility can be found in the Command Line Reference.
1.2.6 Using Sunstone to Manage Hosts
You can also manage your hosts using Sunstone. Select the Host tab, and there you will be able to create, enable,
disable, delete and see information about your hosts in a user-friendly way.
1.3 Managing Clusters
A Cluster is a group of Hosts. Clusters can have associated Datastores and Virtual Networks; this is how the
administrator sets which Hosts have the underlying requirements for each configured Datastore and Virtual Network.
1.3.1 Cluster Management
Clusters are managed with the onecluster command. To create new Clusters, use onecluster create
<name>. Existing Clusters can be inspected with the onecluster list and show commands.
$ onecluster list
ID NAME HOSTS NETS DATASTORES
$ onecluster create production
ID: 100
$ onecluster list
ID NAME HOSTS NETS DATASTORES
100 production 0 0 0
$ onecluster show production
CLUSTER 100 INFORMATION
ID : 100
NAME : production
HOSTS
VNETS
DATASTORES
Add Hosts to Clusters
Hosts can be created directly in a Cluster, using the --cluster option of onehost create, or added at any
moment with the onecluster addhost command. Hosts can be in only one Cluster at a time.
To delete a Host from a Cluster, use the onecluster delhost command. When a Host is removed from
a Cluster, it is seen as part of the special Cluster none; more about this below.
In the following example, we will add Host 0 to the Cluster we created before. You will notice that the onecluster
show command will list the Host ID 0 as part of the Cluster.
$ onehost list
ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM AMEM STAT
0 host01 - 7 400 290 400 3.7G 2.2G 3.7G on
$ onecluster addhost production host01
$ onehost list
ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM AMEM STAT
0 host01 producti 7 400 290 400 3.7G 2.2G 3.7G on
$ onecluster show production
CLUSTER 100 INFORMATION
ID : 100
NAME : production
HOSTS
0
VNETS
DATASTORES
Add Resources to Clusters
Datastores and Virtual Networks can be added to a Cluster. This means that any Host in that Cluster is properly
configured to run VMs using Images from those Datastores, or using leases from those Virtual Networks.
For instance, if you have several Hosts configured to use Open vSwitch networks, you would group them in the same
Cluster. The Scheduler will know that VMs using these resources can be deployed in any of the Hosts of the Cluster.
These operations can be done with the onecluster addvnet/delvnet and
adddatastore/deldatastore commands:
$ onecluster addvnet production priv-ovswitch
$ onecluster adddatastore production iscsi
$ onecluster list
ID NAME HOSTS NETS DATASTORES
100 production 1 1 1
$ onecluster show 100
CLUSTER 100 INFORMATION
ID : 100
NAME : production
CLUSTER TEMPLATE
HOSTS
0
VNETS
1
DATASTORES
100
The System Datastore for a Cluster
You can associate a specific System DS to a cluster to improve its performance (e.g. to balance VM I/O between
different servers) or to use different system DS types (e.g. shared and ssh).
To use a specific System DS with your cluster, instead of the default one, just create it (with TYPE=SYSTEM_DS in
its template) and associate it just like any other datastore (onecluster adddatastore). Check the System DS guide for
more information.
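For instance, a minimal sketch of this flow, assuming shared transfer drivers and the production cluster created above (the datastore name and the returned ID are illustrative):
$ cat cluster_system.ds
NAME   = production_system_ds
TM_MAD = shared
TYPE   = SYSTEM_DS
$ onedatastore create cluster_system.ds
ID: 101
$ onecluster adddatastore production production_system_ds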
Cluster Properties
Each cluster includes a generic template where cluster configuration properties or attributes can be defined. The
following attributes are recognized by OpenNebula:

Attribute             Description
DATASTORE_LOCATION    *Default* path for datastores in the cluster hosts. It is the same for all the
                      hosts in the cluster. Note that DATASTORE_LOCATION is only for the cluster
                      hosts and not for the front-end. It defaults to /var/lib/one/datastores.
You can easily update these values with the onecluster command:
$ onecluster update production
-----8<----- editor session ------8<------
DATASTORE_LOCATION="/mnt/nas/datastores"
~
~
~
----->8----- editor session ------>8------
$ onecluster show production
CLUSTER 100 INFORMATION
ID : 100
NAME : production
SYSTEM DATASTORE : 100
CLUSTER TEMPLATE
DATASTORE_LOCATION="/mnt/nas/datastores"
HOSTS
0
VNETS
1
DATASTORES
100
You can add as many variables as you want, following the standard template syntax. For now, these variables are used
only for informational purposes.
1.3.2 The Default Cluster None
Hosts, Datastores and Virtual Networks can be grouped into clusters, but this is optional. By default, these resources
are created outside of any Cluster, which can be seen as a special Cluster named none in Sunstone. In the CLI, this
Cluster name is shown as -.
Virtual Machines using resources from Datastores or Virtual Networks in the Cluster none can be deployed in any
Host, which must be properly configured.
Hosts in the Cluster none will only run VMs using resources without a Cluster.
1.3.3 Scheduling and Clusters
Automatic Requirements
When a Virtual Machine uses resources (Images or Virtual Networks) from a Cluster, OpenNebula adds the following
requirement to the template:
$ onevm show 0
[...]
AUTOMATIC_REQUIREMENTS="CLUSTER_ID = 100"
Because of this, if you try to use resources from more than one Cluster, the Virtual Machine creation will fail with a
message similar to this one:
$ onetemplate instantiate 0
[TemplateInstantiate] Error allocating a new virtual machine. Incompatible cluster IDs.
DISK [0]: IMAGE [0] from DATASTORE [1] requires CLUSTER [101]
NIC [0]: NETWORK [1] requires CLUSTER [100]
Manual Requirements and Rank
The placement attributes SCHED_REQUIREMENTS and SCHED_RANK can use attributes from the Cluster template.
Let's say you have the following scenario:
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
1 host01 cluster_a 0 0 / 200 (0%) 0K / 3.6G (0%) on
2 host02 cluster_a 0 0 / 200 (0%) 0K / 3.6G (0%) on
3 host03 cluster_b 0 0 / 200 (0%) 0K / 3.6G (0%) on
$ onecluster show cluster_a
CLUSTER TEMPLATE
QOS="GOLD"
$ onecluster show cluster_b
CLUSTER TEMPLATE
QOS="SILVER"
You can use these expressions:
SCHED_REQUIREMENTS = "QOS = GOLD"
SCHED_REQUIREMENTS = "QOS != GOLD & HYPERVISOR = kvm"
1.3.4 System Storage
The system datastore holds les for running VMs. Each cluster can use a different system datastore, read more in the
system datastore guide.
1.3.5 Managing Clusters in Sunstone
The Sunstone UI offers an easy way to manage clusters and the resources within them. You will find the
cluster submenu under the infrastructure menu. From there, you will be able to:
Create new clusters, selecting the resources you want to include in them.
See the list of current clusters, from which you can update the template of existing ones, or delete them.
CHAPTER
TWO
STORAGE
2.1 Storage Overview
A Datastore is any storage medium used to store disk images for VMs; previous versions of OpenNebula referred to this
concept as the Image Repository. Typically, a datastore will be backed by SAN/NAS servers.
An OpenNebula installation can have multiple datastores of several types to store disk images. OpenNebula also uses
a special datastore, the system datastore, to hold images of running VMs.
2.1.1 What Datastore Types Are Available?
OpenNebula ships with 3 different datastore classes:
System, to hold images for running VMs. Depending on the storage technology used, these temporary images can
be complete copies of the original image, qcow deltas or simple filesystem links.
Images, to store the disk image repository. Disk images are moved, or cloned, to/from the System datastore when
the VMs are deployed or shut down, or when disks are attached or snapshotted.
Files, a special datastore used to store plain files, not disk images. The plain files can be used as
kernels, ramdisks or context files.
Image datastores can be of different types, depending on the underlying storage technology:
File-system, to store disk images in file form. The files are stored in a directory mounted from a SAN/NAS
server.
vmfs, a datastore specialized in the VMFS format, to be used with VMware hypervisors. It cannot be mounted in the
OpenNebula front-end, since VMFS is not *nix compatible.
LVM, the LVM datastore driver provides OpenNebula with the possibility of using LVM volumes instead of
plain files to hold the Virtual Images. This reduces the overhead of having a file-system in place and thus
increases performance.
Ceph, to store disk images using Ceph block devices.
As usual in OpenNebula the system has been architected to be highly modular, so you can easily adapt the base types
to your deployment.
2.1.2 How Are the Images Transferred to the Hosts?
The disk images registered in a datastore are transferred to the hosts by the transfer manager (TM) drivers. These
drivers are specialized pieces of software that perform low-level storage operations.
The transfer mechanism is defined for each datastore. In this way a single host can simultaneously access multiple
datastores that use different transfer drivers. Note that the hosts must be configured to properly access each datastore
type (e.g. mount FS shares).
OpenNebula includes 6 different ways to distribute datastore images to the hosts:
shared, the datastore is exported in a shared filesystem to the hosts.
ssh, datastore images are copied to the remote hosts using the ssh protocol.
vmfs, image copies are done using vmkfstools (VMware filesystem tools).
qcow, a driver specialized to handle the qemu-qcow format and take advantage of its snapshotting capabilities.
ceph, a driver that delegates the management of Ceph RBDs to libvirt/KVM.
lvm, images are stored as LVs in a cLVM volume.
2.1.3 Planning your Storage
You can take advantage of the multiple datastore features of OpenNebula to better scale the storage for your VMs, in
particular:
Balancing I/O operations between storage servers
Different VM types or users can use datastores with different performance features
Different SLA policies (e.g. backup) can be applied to different VM types or users
Easily add new storage to the cloud
There are some limitations and features depending on the transfer mechanism you choose for your system and image
datastores (check each datastore guide for more information). The following table summarizes the valid combinations
of Datastore and transfer drivers:
Datastore      shared   ssh   qcow2   vmfs   ceph   lvm   fs_lvm
System           x       x              x
File-System      x       x      x                           x
vmfs                                    x
ceph                                           x
lvm                                                  x
2.1.4 Tuning and Extending
Drivers can be easily customized. Please refer to the specific guide for each datastore driver or to the Storage
subsystem developer's guide.
You may find the files you need to modify here:
/var/lib/one/remotes/datastore/<DS_DRIVER>
/var/lib/one/remotes/tm/<TM_DRIVER>
2.2 The System Datastore
The system datastore is a special Datastore class that holds images for running VMs. As opposed to the regular image
datastores, you cannot register new images into a system datastore.
2.2.1 Types of System Datastore
For each running VM in the datastore there is a directory containing the disk images and additional configuration files.
For example, the structure of the system datastore 0 with 3 VMs (VM 0 and 2 running, and VM 7 stopped) could be:

datastores
|-- 0/
|   |-- 0/
|   |   |-- disk.0
|   |   `-- disk.1
|   |-- 2/
|   |   `-- disk.0
|   `-- 7/
|       |-- checkpoint
|       `-- disk.0
There are three system datastore types, based on the TM_MAD driver used:
shared, the storage area for the system datastore is a directory shared across the hosts.
vmfs, a specialized version of the shared one to use the VMFS file system. The infrastructure notes explained
here for shared also apply to vmfs; then follow the specific VMFS storage guide.
ssh, uses a local storage area on each host for the system datastore.
The Shared System Datastore
The shared transfer driver requires the hosts to share the system datastore directory (it does not need to be shared with
the front-end). Typically these storage areas are shared using a distributed FS like NFS, GlusterFS, Lustre, etc.
A shared system datastore usually reduces VM deployment times and enables live migration, but it can also become a
bottleneck in your infrastructure and degrade your VMs' performance if the virtualized services perform disk-intensive
workloads. This limitation can usually be overcome by:
Using different filesystem servers for the image datastores, so the actual I/O bandwidth is balanced
Using an ssh system datastore instead, so the images are copied locally to each host
Tuning or improving the filesystem servers
The SSH System Datastore
In this case the system datastore is distributed among the hosts. The ssh transfer driver uses the hosts' local storage
to place the images of running VMs (as opposed to a shared FS in the shared driver). All the operations are then
performed locally, but images always have to be copied to the hosts, which in turn can be a very resource-demanding
operation. This driver also prevents the use of live migration between hosts.
The System and Image Datastores
OpenNebula will automatically transfer VM disk images to/from the system datastore when a VM is booted or shut
down. The actual transfer operations and the space taken from the system datastore depend on both the image
configuration (persistent vs non-persistent) and the drivers used by the image datastore. The following table
summarizes the actions performed by each transfer manager driver type.

Image Type      shared   ssh    qcow2      vmfs   ceph        lvm        shared lvm
Persistent      link     copy   link       link   link        link       lv copy
Non-persistent  copy     copy   snapshot   cp     rbd copy+   lv copy+   lv copy
Volatile        new      new    new        new    new         new        new
In the table above:
link is the equivalent of a symbolic link operation, which will not take any significant amount of storage from the
system datastore.
copy, rbd copy and lv copy are copy operations, as in regular cp file operations, that may involve the creation of
special devices like a logical volume. This will take the same size as the original image.
snapshot, a qcow2 snapshot operation.
new, a new image file of the specified size is created on the system datastore.
Important note: operations marked with + are performed on the original image datastore, and so those operations take
storage from the image datastore and not from the system one.
Once the disk images are transferred from the image datastore to the system datastore using the operations described
above, the system datastore (and its drivers) is responsible for managing the images, mainly to:
Move the images across hosts, e.g. when the VM is stopped or migrated
Delete any copy from the hosts when the VM is shut down
2.2.2 Configuration Overview
You need to configure one or more system datastores for each of your clusters. In this way you can better plan the
storage requirements, in terms of total capacity assigned, performance requirements and load balancing across system
datastores. Note that hosts not assigned to a cluster can still use system datastores that are not assigned to any cluster either.
To configure the system datastores for your OpenNebula cloud you need to:
Create as many system datastores as needed (you can add more later if you need them)
Assign the system datastores to a given cluster
Configure the cluster hosts to access the system datastores
2.2.3 Step 1. Create a New System Datastore
To create a new system datastore you need to specify its type as system, either in Sunstone (system) or through the
CLI (adding TYPE = SYSTEM_DS to the datastore template). You also need to select the system datastore drivers, as
discussed above: shared, vmfs or ssh.
For example, to create a system datastore using the shared drivers:
$ cat system.ds
NAME = nfs_ds
TM_MAD = shared
TYPE = SYSTEM_DS
$ onedatastore create system.ds
ID: 100
2.2.4 Step 2. Assign the System Datastores
Hosts can only use a system datastore if they are in the same cluster, so once created you need to add the system
datastores to the cluster. You can add more than one system datastore to a cluster; the actual system DS used to
deploy the VM will be selected based on storage scheduling policies, see below.
Warning: Hosts not associated with a cluster will also use system datastores not associated with a cluster. If you are
not using clusters you can skip this section.
To associate this system datastore to the cluster, add it:
$ onecluster adddatastore production_cluster nfs_ds
As we'll see shortly, hosts need to be configured to access the system datastores through a well-known location, which
defaults to /var/lib/one/datastores. You can also override this setting for the hosts of a cluster using the
DATASTORE_LOCATION attribute. It can be changed with the onecluster update command.
$ onecluster update production_cluster
#Edit the file to read as:
DATASTORE_LOCATION=/path/to/datastores/
Warning: DATASTORE_LOCATION defines the path to access the datastores in the hosts. It can be defined for
each cluster; if not defined for the cluster, the default in oned.conf will be used.
Warning: When needed, the front-end will access the datastores at /var/lib/one/datastores. This path
cannot be changed, but you can link each datastore directory to a suitable location.
2.2.5 Step 3. Configure the Hosts
The specific configuration for the hosts depends on the system datastore type (shared or ssh). Before continuing, check
that SSH is configured to give oneadmin passwordless access to every host.
Configure the Hosts for the Shared System Datastore
A NAS has to be configured to export a directory to the hosts; this directory will be used as the storage area for the
system datastore. Each host has to mount this directory under $DATASTORE_LOCATION/<ds_id>. In small
installations the front-end can also be used to export the system datastore directory to the hosts, although this
deployment is not recommended for medium or large deployments.
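For example, a minimal sketch of the mount on each host, assuming an NFS server named nas1 exporting /export/system_ds and a system datastore with ID 100 (all of these names are illustrative):
$ sudo mkdir -p /var/lib/one/datastores/100
$ sudo mount -t nfs nas1:/export/system_ds /var/lib/one/datastores/100
# or the equivalent /etc/fstab entry:
# nas1:/export/system_ds /var/lib/one/datastores/100 nfs defaults 0 0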
Warning: It is not necessary to mount the system datastore in the OpenNebula front-end as
/var/lib/one/datastores/<ds_id>
Configure the Hosts for the SSH System Datastore
No special configuration is needed to use the ssh drivers for the system datastore. Just be sure that
there is enough space under $DATASTORE_LOCATION to hold the images of the VMs that will run on each particular
host.
Also make sure that there is enough space on the front-end under /var/lib/one/datastores/<ds_id> to hold the images
of the stopped or undeployed VMs.
2.2.6 Multiple System Datastore Setups
In order to efficiently distribute the I/O of the VMs across different disks, LUNs or several storage backends, OpenNebula
is able to define multiple system datastores per cluster. Scheduling algorithms take into account the disk requirements
of a particular VM, so OpenNebula is able to pick the best execution host based on capacity and storage metrics.
Admin Perspective
For an admin, this means being able to decide which storage policy to apply to the whole cloud she is
administering; that policy will then be used to choose which system datastore is more suitable for a certain VM.
When more than one system datastore is added to a cluster, all of them can be taken into account by the scheduler to
place VMs.
System scheduling policies are defined in /etc/one/sched.conf. These are the defaults the scheduler will
use if the VM template doesn't state otherwise. The possibilities are:
Packing. Tries to optimize storage usage by selecting the datastore with less free space.
Striping. Tries to optimize I/O by distributing the VMs across datastores.
Custom. Based on any of the attributes present in the datastore template.
To activate, for instance, the Striping storage policy, /etc/one/sched.conf must contain:
DEFAULT_DS_SCHED = [
policy = 1
]
Warning: Any host belonging to a given cluster must be able to access any system or image datastore defined in
that cluster.
User Perspective
For a user, OpenNebula's ability to handle multiple system datastores means being able to require her VMs
to run on a system datastore backed by fast storage, or on the host whose datastore has the most free
space available. This choice is obviously limited to the underlying hardware and the administrator's configuration.
This control can be exerted within the VM template, with two attributes:
Attribute               Description                                     Examples
SCHED_DS_REQUIREMENTS   Boolean expression that rules out entries       SCHED_DS_REQUIREMENTS = "ID = 100"
                        from the pool of datastores suitable to run     SCHED_DS_REQUIREMENTS = "NAME = GoldenCephDS"
                        this VM.                                        SCHED_DS_REQUIREMENTS = "FREE_MB > 250000"
SCHED_DS_RANK           States which attribute will be used to sort     SCHED_DS_RANK = "FREE_MB"
                        the suitable datastores for this VM.            SCHED_DS_RANK = "-FREE_MB"
                        Basically, it defines which datastores are
                        more suitable than others.
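For instance, a hypothetical VM template fragment that rules out datastores with less than 250 GB free and, among the remaining candidates, prefers the one with the most free space:
SCHED_DS_REQUIREMENTS = "FREE_MB > 250000"
SCHED_DS_RANK = "FREE_MB"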
Warning: Admins and users with admin rights can force the deployment to a certain datastore, using the onevm
deploy command.
2.2.7 Tuning and Extending
Drivers can be easily customized. Please refer to the specific guide for each datastore driver or to the Storage
subsystem developer's guide.
You may find the files you need to modify here:
/var/lib/one/remotes/datastore/<DS_DRIVER>
/var/lib/one/remotes/tm/<TM_DRIVER>
2.3 The Filesystem Datastore
The Filesystem datastore lets you store VM images in file form. The datastore is format agnostic, so you can store
any file type, depending on the target hypervisor. The use of file-based disk images presents several benefits over
device-backed disks (e.g. easier image backups, or use of shared FS), although it may be less performant in some cases.
Usually it is a good idea to have multiple filesystem datastores to:
Group images of the same type, so you can have a qcow datastore for KVM hosts and a raw one for Xen
Balance I/O operations, as the datastores can be in different servers
Use different datastores for different cluster hosts
Apply different QoS policies to different images
2.3.1 Requirements
There are no special requirements or software dependencies to use the filesystem datastore. The drivers make use of
standard filesystem utils (cp, ln, mv, tar, mkfs...) that should be installed in your system.
2.3.2 Configuration
Configuring the System Datastore
Filesystem datastores can work with a system datastore that uses either the shared or the SSH transfer drivers. Note
that:
Shared drivers for the system datastore enable live migration, but may demand a high-performance SAN.
SSH drivers for the system datastore may increase deployment/shutdown times, but all the operations are
performed locally, improving performance in general.
See more details on the System Datastore Guide
Configuring the Filesystem Datastores
The first step to create a filesystem datastore is to set up a template file for it. The following table lists the
valid configuration attributes for a filesystem datastore. The datastore type is set by its drivers; in this case be sure to
add DS_MAD=fs.
The other important attribute to configure the datastore is the transfer drivers. These drivers determine how the images
are accessed in the hosts. The Filesystem datastore can use shared, ssh and qcow2. See below for more details.
Attribute                   Description                                                            Values
NAME                        The name of the datastore.                                             N/A
DS_MAD                      The DS type; use fs for the Filesystem datastore.                      fs
TM_MAD                      Transfer drivers for the datastore: shared, ssh or qcow2,              shared, ssh, qcow2
                            see below.
RESTRICTED_DIRS             Paths that cannot be used to register images. A space-separated        N/A
                            list of paths.
SAFE_DIRS                   If you need to un-block a directory under one of the                   N/A
                            RESTRICTED_DIRS. A space-separated list of paths.
NO_DECOMPRESS               Do not try to untar or decompress the file to be registered.           yes
                            Useful for specialized Transfer Managers. Use value yes to
                            disable decompression.
LIMIT_TRANSFER_BW           Maximum transfer rate in bytes/second when downloading images          N/A
                            from an http/https URL. Suffixes K, M or G can be used.
DATASTORE_CAPACITY_CHECK    If yes, the available capacity of the datastore is checked             yes
                            before creating a new image.
BASE_PATH                   Base path to build the path of the Datastore Images. This path         N/A
                            is used to store the images when they are created in the datastore.
Note: The RESTRICTED_DIRS directive will prevent users from registering important files as VM images and accessing
them through their VMs. OpenNebula will automatically add its configuration directories: /var/lib/one, /etc/one and
oneadmin's home. If users try to register an image from a restricted directory, they will get the following error message:
"Not allowed to copy image file".
For example, the following illustrates the creation of a filesystem datastore using the shared transfer drivers.
> cat ds.conf
NAME = production
DS_MAD = fs
TM_MAD = shared
> onedatastore create ds.conf
ID: 100
> onedatastore list
ID NAME CLUSTER IMAGES TYPE TM
0 system none 0 fs shared
1 default none 3 fs shared
100 production none 0 fs shared
The DS and TM MAD can be changed later using the onedatastore update command. You can check more
details of the datastore by issuing the onedatastore show command.
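For instance, switching the transfer drivers of the datastore created above to qcow2 could look like this (an illustrative edit; make sure your hosts actually support the new drivers):
$ onedatastore update 100
[in editor, set TM_MAD = qcow2]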
Finally, you have to prepare the storage for the datastore and configure the hosts to access it. This depends on the
transfer mechanism you have chosen for your datastore.
After creating a new datastore, the LN_TARGET and CLONE_TARGET parameters will be added to the template.
These values should not be changed, since they define the datastore behaviour. The default values for these parameters
are defined in oned.conf for each driver.
Warning: Note that datastores are not associated to any cluster by default, and they are supposed to be accessible
by every single host. If you need to configure datastores for just a subset of the hosts, take a look at the Cluster
guide.
2.3.3 Using the Shared Transfer Driver
The shared transfer driver assumes that the datastore is mounted in all the hosts of the cluster. When a VM is created,
its disks (the disk.i files) are copied or linked in the corresponding directory of the system datastore. These file
operations are always performed remotely on the target host.
Persistent & Non Persistent Images
If the VM uses a persistent image, a symbolic link to the datastore is created in the corresponding directory of the
system datastore; non-persistent images are copied instead. For persistent images, this allows immediate deployment,
and no extra time is needed to save the disk back to the datastore when the VM is shut down.
On the other hand, the original file is used directly, and if for some reason the VM fails and the image data is corrupted
or lost, there is no way to cancel the persistence.
Finally, images created using the onevm disk-snapshot command will be moved to the datastore only after the VM is
successfully shut down. This means that the VM has to be shut down using the onevm shutdown command, and not
onevm delete. Suspending or stopping a running VM won't copy the disk file to the datastore either.
Host Configuration
Each host has to mount the datastore under $DATASTORE_LOCATION/<datastore_id>. You also have to
mount the datastore in the front-end in /var/lib/one/datastores/<datastore_id>.
Warning: DATASTORE_LOCATION defines the path to access the datastores in the hosts. It can be defined for
each cluster; if not defined for the cluster, the default in oned.conf will be used.
Warning: When needed, the front-end will access the datastores using BASE_PATH (defaults to
/var/lib/one/datastores). You can set the BASE_PATH for the datastore at creation time.
2.3.4 Using the SSH Transfer Driver
In this case the datastore is only directly accessed by the front-end. VM images are copied from/to the datastore
using the SSH protocol. This may impose high VM deployment times, depending on your infrastructure's network
connectivity.
Persistent & Non Persistent Images
In either case (persistent and non-persistent), images are always copied from the datastore to the corresponding
directory of the system datastore in the target host.
If an image is persistent (or, for that matter, created with the onevm disk-snapshot command), it is transferred
back to the Datastore only after the VM is successfully shut down. This means that the VM has to be shut down using
the onevm shutdown command, and not onevm delete. Note that no modification to the image registered in the
datastore occurs until that moment. Suspending or stopping a running VM won't copy/modify the disk file registered in
the datastore either.
Host Configuration
There is no special configuration for the hosts in this case. Just make sure that there is enough space under
$DATASTORE_LOCATION to hold the images of the VMs running on that host.
2.3.5 Using the qcow2 Transfer driver
The qcow2 drivers are a specialization of the shared drivers to work with the qcow2 format for disk images. The same
features/restrictions and configuration apply, so be sure to read the shared driver section.
The following list details the differences:
Persistent images are created with the qemu-img command, using the original image as backing file
When an image has to be copied back to the datastore, the qemu-img convert command is used instead of
a direct copy
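As a rough illustration of what these drivers do under the hood (the paths are hypothetical, and the exact options used by the drivers may differ):
# Create a VM disk as a qcow2 image backed by the registered image:
$ qemu-img create -f qcow2 -b /var/lib/one/datastores/1/<image> disk.0
# Copy a disk back to the image datastore on shutdown or save:
$ qemu-img convert -O qcow2 disk.0 /var/lib/one/datastores/1/<saved_image>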
2.3.6 Tuning and Extending
Drivers can be easily customized. Please refer to the specific guide for each datastore driver or to the Storage
subsystem developer's guide.
You may find the files you need to modify here:
/var/lib/one/remotes/datastore/<DS_DRIVER>
/var/lib/one/remotes/tm/<TM_DRIVER>
2.4 The VMFS Datastore
In order to use VMware hypervisors in your OpenNebula cloud you will need to use VMFS Datastores. To configure
them, it is important to keep in mind that there are (at least) two datastores to define: the system datastore
(where the running VMs and their images reside, which only needs transfer manager drivers) and the image datastore
(where the images are stored, which needs both datastore and transfer manager drivers).
2.4.1 Requirements
In order to use the VMFS datastore, the ESX servers need to have SSH access configured for the oneadmin
account.
If the VMFS volumes are exported through a SAN, the SAN should be accessible and configured so the ESX server can
mount the iSCSI export.
2.4.2 Description
This storage model implies that all the volumes involved in the image staging are purely VMFS volumes, taking full
advantage of the VMware filesystem (VM image locking and improved performance).
2.4.3 Infrastructure Configuration
The OpenNebula front-end doesn't need to mount any datastore.
The ESX servers need to present or mount (as iSCSI, NFS or local storage) both the system datastore and the
image datastore (naming them with just the <datastore-id>, for instance 0 for the system datastore and 1 for
the image datastore).
Warning: The system datastore can be other than the default one (0). In this case, the ESX will need to mount
the datastore with the same ID as the datastore has in OpenNebula. More details in the System Datastore Guide.
2.4.4 OpenNebula Configuration
The datastore location on ESX hypervisors is /vmfs/volumes. There are two choices:
In homogeneous clouds (all the hosts are ESX) set the following in /etc/one/oned.conf:
DATASTORE_LOCATION=/vmfs/volumes
In heterogeneous clouds (mix of ESX and other hypervisor hosts) put all the ESX hosts in clusters with the
following attribute in their template (e.g. onecluster update):
DATASTORE_LOCATION=/vmfs/volumes
Warning: You also need to set the BASE_PATH attribute in the template when the Datastore is created.
Datastore Configuration
The system and image datastores need to be configured with the following drivers:

Datastore   DS Drivers   TM Drivers
System      -            vmfs
Images      vmfs         vmfs
System Datastore
vmfs drivers: the system datastore needs to be updated in OpenNebula (onedatastore update <ds_id>) to
set the TM_MAD drivers to vmfs. There is no need to configure datastore drivers for the system datastore.
OpenNebula expects the system datastore to have ID=0, but a system datastore with a different ID can be defined
per cluster. See the system datastore guide for more details.
Images Datastore
The image datastore needs to be updated to use vmfs drivers for the datastore drivers, and vmfs drivers for the transfer
manager drivers. The default datastore can be updated as:
$ onedatastore update 1
DS_MAD=vmfs
TM_MAD=vmfs
BRIDGE_LIST=<space-separated list of ESXi hosts>
Apart from DS_MAD, TM_MAD and BRIDGE_LIST, the following attributes can be set:

Attribute                   Description
NAME                        The name of the datastore.
DS_MAD                      The DS type; use vmfs.
TM_MAD                      Must be vmfs.
RESTRICTED_DIRS             Paths that cannot be used to register images. A space-separated list of paths.
SAFE_DIRS                   If you need to un-block a directory under one of the RESTRICTED_DIRS. A
                            space-separated list of paths.
UMASK                       Default mask for the files created in the datastore. Defaults to 0007.
BRIDGE_LIST                 Space-separated list of ESX servers that are going to be used as proxies to
                            stage images into the datastore (vmfs datastores only).
DS_TMP_DIR                  Path in the OpenNebula front-end to be used as a buffer to stage in files in
                            vmfs datastores. Defaults to the value in
                            /var/lib/one/remotes/datastore/vmfs/vmfs.conf.
NO_DECOMPRESS               Do not try to untar or decompress the file to be registered. Useful for
                            specialized Transfer Managers.
DATASTORE_CAPACITY_CHECK    If yes, the available capacity of the datastore is checked before creating a
                            new image.
BASE_PATH                   This variable must be set to /vmfs/volumes for VMFS datastores.
Warning: RESTRICTED_DIRS will prevent users from registering important files as VM images and accessing them
through their VMs. OpenNebula will automatically add its configuration directories: /var/lib/one, /etc/one and
oneadmin's home. If users try to register an image from a restricted directory, they will get the following error
message: "Not allowed to copy image file".
After creating a new datastore, the LN_TARGET and CLONE_TARGET parameters will be added to the template.
These values should not be changed, since they define the datastore behaviour. The default values for these parameters
are defined in oned.conf for each driver.
Driver Configuration
Transfer Manager Drivers
These drivers trigger the events remotely through an ssh channel. The vmfs drivers are a specialization of the shared
drivers to work with the VMware vmdk filesystem tools using the vmkfstools command. This comes with a number
of advantages, like FS locking, easier VMDK cloning, format management, etc.
Datastore Drivers
The vmfs datastore drivers allow the use of the VMware VM filesystem, which handles VM file locks and also boosts
I/O performance.
To correctly configure a vmfs datastore set of drivers you need to choose the ESX bridges, i.e. the ESX
servers that are going to be used as proxies to stage images into the vmfs datastore. A list of bridges must be
defined with the BRIDGE_LIST attribute of the datastore template (see the table above). The drivers will pick
one ESX server from that list in a round-robin fashion.
The vmfs datastore needs to use the front-end as a buffer for the image staging in some cases; this buffer can
be set in the DS_TMP_DIR attribute.
2.4.5 Tuning and Extending
Drivers can be easily customized. Please refer to the specific guide for each datastore driver or to the Storage
subsystem developer's guide.
You may find the files you need to modify here:
/var/lib/one/remotes/datastore/<DS_DRIVER>
/var/lib/one/remotes/tm/<TM_DRIVER>
2.5 LVM Drivers
The LVM datastore driver provides OpenNebula with the possibility of using LVM volumes instead of plain files to
hold the Virtual Images. This reduces the overhead of having a file-system in place and thus increases performance.
2.5.1 Overview
OpenNebula ships with two sets of LVM drivers:
FS LVM, file-based VM disk images with Logical Volumes (LV), using the fs_lvm drivers
Block LVM, pure Logical Volumes (LV), using the lvm drivers
In both cases Virtual Machines will run from Logical Volumes in the host, and both require cLVM in order to
provide live migration.
However there are some differences, in particular in the way non-active images are stored and in the name of the Volume
Group where they are executed.
This is a brief description of both drivers:
2.5.2 FS LVM
In a FS LVM datastore using the fs_lvm drivers (the now recommended LVM drivers), images are registered as files
in a shared FS volume, under the usual path: /var/lib/one/datastores/<id>.
This directory needs to be accessible in the worker nodes, using NFS or any other shared/distributed file-system.
When a Virtual Machine is instantiated, OpenNebula will dynamically select the system datastore. Let's assume for
instance that the selected datastore is 104. The virtual disk image will be copied from the stored image file under the
datastore's directory and dumped into an LV under the Volume Group vg-one-104. It follows that each node
must have a cluster-aware LVM Volume Group for every possible system datastore it may execute.
This set of drivers brings precisely the advantage of dynamic selection of the system datastore, therefore allowing
more granular control of the performance of the storage backend.
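As a rough sketch of what the drivers do on a node for the example above (the VM and disk IDs, the image path and the size are illustrative, not the drivers' exact commands):
$ sudo lvcreate -L 10240M -n lv-one-25-0 vg-one-104
$ sudo dd if=/var/lib/one/datastores/1/<image> of=/dev/vg-one-104/lv-one-25-0 bs=2M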
Read more
2.5.3 Block LVM
The Block LVM datastore uses the lvm drivers, with the classical approach to using LVM in OpenNebula.
When a new datastore that uses this set of drivers is created, it requires the VG_NAME parameter, which will tie
the images to that Volume Group. Images will be registered directly as Logical Volumes in that Volume Group (as
opposed to being registered as files in the frontend), and when they are instantiated the new cloned Logical Volume
will also be created in that very same Volume Group.
Read more
2.6 The FS LVM Datastore
2.6.1 Overview
The FS LVM datastore driver provides OpenNebula with the possibility of using LVM volumes instead of plain files
to hold the Virtual Images.
It is assumed that the OpenNebula hosts using this datastore will be configured with CLVM; therefore, modifying the
OpenNebula Volume Group in one host will be reflected in the others.
2.6.2 Requirements
OpenNebula Front-end
Password-less ssh access to an OpenNebula LVM-enabled host.
OpenNebula LVM Hosts
LVM must be available in the Hosts. The oneadmin user should be able to execute several LVM-related commands
with sudo passwordlessly (see the sketch below):
Password-less sudo permission for: lvremove, lvcreate, lvs, vgdisplay and dd.
LVM2
oneadmin needs to belong to the disk group (for KVM).
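A minimal sketch of the corresponding sudoers entry (the binary paths vary by distribution, so check them on your nodes):
# /etc/sudoers.d/opennebula-lvm (illustrative)
oneadmin ALL=(ALL) NOPASSWD: /sbin/lvremove, /sbin/lvcreate, /sbin/lvs, /sbin/vgdisplay, /bin/dd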
2.6.3 Configuration
Configuring the System Datastore
To use LVM drivers, the system datastore must be shared. This system datastore will hold only the symbolic links
to the block devices, so it will not take much space. See more details on the System Datastore Guide.
It will also be used to hold context images and disks created on the fly; they will be created as regular files.
It is worth noting that running virtual disk images will be created in Volume Groups that are hardcoded to be
vg-one-<system_ds_id>. Therefore the nodes must have those Volume Groups pre-created and available for
all possible system datastores.
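For example, a minimal sketch of preparing such a Volume Group on a node, assuming a shared block device /dev/sdb and a system datastore with ID 104 (both illustrative; the -c y flag marks the VG as clustered for CLVM):
$ sudo pvcreate /dev/sdb
$ sudo vgcreate -c y vg-one-104 /dev/sdb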
Configuring LVM Datastores
The first step to create an LVM datastore is to set up a template file for it. The following table lists the
supported configuration attributes. The datastore type is set by its drivers; in this case be sure to add DS_MAD=fs and
TM_MAD=fs_lvm for the transfer mechanism, see below.
Attribute                   Description
NAME                        The name of the datastore.
DS_MAD                      Must be fs.
TM_MAD                      Must be fs_lvm.
DISK_TYPE                   Must be block.
RESTRICTED_DIRS             Paths that cannot be used to register images. A space-separated list of paths.
SAFE_DIRS                   If you need to un-block a directory under one of the RESTRICTED_DIRS. A
                            space-separated list of paths.
BRIDGE_LIST                 Mandatory space-separated list of LVM frontends.
NO_DECOMPRESS               Do not try to untar or decompress the file to be registered. Useful for
                            specialized Transfer Managers.
LIMIT_TRANSFER_BW           Maximum transfer rate in bytes/second when downloading images from an
                            http/https URL. Suffixes K, M or G can be used.
DATASTORE_CAPACITY_CHECK    If yes, the available capacity of the datastore is checked before creating a
                            new image.
Note: The RESTRICTED_DIRS directive will prevent users from registering important files as VM images and accessing
them through their VMs. OpenNebula will automatically add its configuration directories: /var/lib/one, /etc/one and
oneadmin's home. If users try to register an image from a restricted directory, they will get the following error message:
"Not allowed to copy image file".
For example, the following illustrates the creation of an LVM datastore using a configuration file. In this
case we will use the host host01 as one of our OpenNebula LVM-enabled hosts.
> cat ds.conf
NAME = production
DS_MAD = fs
TM_MAD = fs_lvm
> onedatastore create ds.conf
ID: 100
> onedatastore list
ID NAME CLUSTER IMAGES TYPE TM
0 system none 0 fs shared
1 default none 3 fs shared
100 production none 0 fs fs_lvm
Note: Datastores are not associated to any cluster by default, and they are supposed to be accessible by every single
host. If you need to configure datastores for just a subset of the hosts, take a look at the Cluster guide.
After creating a new datastore, the LN_TARGET and CLONE_TARGET parameters will be added to the template.
These values should not be changed, since they define the datastore behaviour. The default values for these parameters
are defined in oned.conf for each driver.
Host Configuration
The hosts must have LVM2 and must have a Volume Group for every possible system datastore that can run in the
host. CLVM must also be installed and active across all the hosts that use this datastore.
It's also required to have password-less sudo permission for: lvremove, lvcreate, lvs and dd.
2.6.4 Tuning & Extending
System administrators and integrators are encouraged to modify these drivers in order to integrate them with their
datacenter:
Under /var/lib/one/remotes/:
tm/fs_lvm/ln: Links to the LVM logical volume.
tm/fs_lvm/clone: Clones the image by creating a snapshot.
tm/fs_lvm/mvds: Saves the image in a new LV for SAVE_AS.
tm/fs_lvm/cpds: Saves the image in a new LV for SAVE_AS while VM is running.
2.7 The Block LVM Datastore
2.7.1 Overview
The Block LVM datastore driver provides OpenNebula with the possibility of using LVM volumes instead of plain
files to hold the Virtual Images.
It is assumed that the OpenNebula hosts using this datastore will be configured with CLVM; therefore, modifying the
OpenNebula Volume Group in one host will be reflected in the others. There is a special list of hosts (BRIDGE_LIST)
which belong to the LVM cluster; these are the ones OpenNebula talks to when performing LVM operations.
2.7.2 Requirements
OpenNebula Front-end
Password-less ssh access to an OpenNebula LVM-enabled host.
OpenNebula LVM Hosts
LVM must be available in the Hosts. The oneadmin user should be able to execute several LVM-related commands
with sudo passwordlessly.
Password-less sudo permission for: lvremove, lvcreate, lvs, vgdisplay and dd.
LVM2
oneadmin needs to belong to the disk group (for KVM).
2.7.3 Configuration
Configuring the System Datastore
To use LVM drivers, the system datastore can work with either shared or ssh. This system datastore will hold
only the symbolic links to the block devices, so it will not take much space. See more details on the System Datastore
Guide.
It will also be used to hold context images and disks created on the fly; they will be created as regular files.
Configuring Block LVM Datastores
The first step to create an LVM datastore is to set up a template file for it. The following table lists the
supported configuration attributes. The datastore type is set by its drivers; in this case be sure to add DS_MAD=lvm
and TM_MAD=lvm for the transfer mechanism, see below.
Attribute                   Description
NAME                        The name of the datastore.
DS_MAD                      Must be lvm.
TM_MAD                      Must be lvm.
DISK_TYPE                   Must be block.
VG_NAME                     The LVM volume group name. Defaults to vg-one.
BRIDGE_LIST                 Mandatory space-separated list of LVM frontends.
RESTRICTED_DIRS             Paths that cannot be used to register images. A space-separated list of paths.
SAFE_DIRS                   If you need to un-block a directory under one of the RESTRICTED_DIRS. A
                            space-separated list of paths.
NO_DECOMPRESS               Do not try to untar or decompress the file to be registered. Useful for
                            specialized Transfer Managers.
LIMIT_TRANSFER_BW           Maximum transfer rate in bytes/second when downloading images from an
                            http/https URL. Suffixes K, M or G can be used.
DATASTORE_CAPACITY_CHECK    If yes, the available capacity of the datastore is checked before creating a
                            new image.
Warning: RESTRICTED_DIRS will prevent users from registering important files as VM images and accessing them
through their VMs. OpenNebula will automatically add its configuration directories: /var/lib/one, /etc/one and
oneadmin's home. If users try to register an image from a restricted directory, they will get the following error
message: "Not allowed to copy image file".
For example, the following illustrates the creation of an LVM datastore using a configuration file. In this
case we will use the host host01 as one of our OpenNebula LVM-enabled hosts.
> cat ds.conf
NAME = production
DS_MAD = lvm
TM_MAD = lvm
VG_NAME = vg-one
HOST = host01
> onedatastore create ds.conf
ID: 100
> onedatastore list
ID NAME CLUSTER IMAGES TYPE TM
0 system none 0 fs shared
1 default none 3 fs shared
100 production none 0 lvm lvm
The DS and TM MAD can be changed later using the onedatastore update command. You can check more
details of the datastore by issuing the onedatastore show command.
Warning: Note that datastores are not associated to any cluster by default, and they are supposed to be accessible
by every single host. If you need to configure datastores for just a subset of the hosts, take a look at the Cluster
guide.
After creating a new datastore, the LN_TARGET and CLONE_TARGET parameters will be added to the template.
These values should not be changed, since they define the datastore behaviour. The default values for these parameters
are defined in oned.conf for each driver.
Host Configuration
The hosts must have LVM2 and the Volume Group specified in the VG_NAME attribute of the datastore template.
CLVM must also be installed and active across all the hosts that use this datastore.
It's also required to have password-less sudo permission for: lvremove, lvcreate, lvs and dd.
2.7.4 Tuning & Extending
System administrators and integrators are encouraged to modify these drivers in order to integrate them with their
datacenter:
Under /var/lib/one/remotes/:
datastore/lvm/lvm.conf: Default values for LVM parameters
HOST: Default LVM target host
VG_NAME: Default volume group
datastore/lvm/cp: Registers a new image. Creates a new logical volume in LVM.
datastore/lvm/mkfs: Makes a new empty image. Creates a new logical volume in LVM.
datastore/lvm/rm: Removes the LVM logical volume.
tm/lvm/ln: Links to the LVM logical volume.
tm/lvm/clone: Clones the image by creating a snapshot.
tm/lvm/mvds: Saves the image in a new LV for SAVE_AS.
2.8 The Ceph Datastore
The Ceph datastore driver provides OpenNebula users with the possibility of using Ceph block devices as their Virtual
Images.
Warning: This driver only works with libvirt/KVM drivers. Xen is not (yet) supported.
Warning: This driver requires that the OpenNebula nodes using the Ceph driver be part of a running Ceph
cluster. More information in the Ceph documentation.
Warning: The hypervisor nodes need to be part of a working Ceph cluster and the libvirt and QEMU packages
need to be recent enough to have support for Ceph. For Ubuntu systems this is available out of the box; however,
for CentOS systems you will need to manually install a suitable version of qemu-kvm.
2.8.1 Requirements
Ceph Cluster Conguration
The hosts where Ceph-datastore-based images will be deployed must be part of a running Ceph cluster. To do so, refer
to the Ceph documentation.
The Ceph cluster must be configured in such a way that no specific authentication is required, which means that
for cephx authentication the keyring must be in the expected path so that rbd and ceph commands work without
explicitly specifying the keyring's location.
Also, the mon daemon must be defined in ceph.conf for all the nodes, so hostname and port don't need
to be specified explicitly in any Ceph command.
Additionally, each OpenNebula datastore is backed by a Ceph pool; these pools must be created and configured in the
Ceph cluster. The name of the pool is one by default, but it can be changed on a per-datastore basis (see below).
This driver can work with either RBD Format 1 or RBD Format 2. To set the default you can specify this option in
ceph.conf:
[global]
rbd_default_format = 2
OpenNebula Ceph Frontend
This driver requires the system administrator to specify one or several Ceph frontends (which need to be nodes in the
Ceph cluster) where many of the datastore's storage actions will take place. For instance, when creating an image,
OpenNebula will choose one of the listed Ceph frontends (using a round-robin algorithm), transfer the image to
that node and run qemu-img convert -O rbd. These nodes need to be specified in the BRIDGE_LIST section.
Note that this Ceph frontend can be any node in the OpenNebula setup: the OpenNebula frontend, any worker node,
or a specific node (recommended).
Ceph Nodes
All the nodes listed in the BRIDGE_LIST variable must have qemu-img installed.
OpenNebula Hosts
There are no specific requirements for the hosts, besides being libvirt/KVM nodes, since Xen is not (yet) supported for
the Ceph drivers.
2.8.2 Configuration
Configuring the System Datastore
To use ceph drivers, the system datastore can work with either shared or ssh. This system datastore will hold
only the symbolic links to the block devices, so it will not take much space. See more details on the System Datastore
Guide.
It will also be used to hold context images and disks created on the fly; they will be created as regular files.
Configuring Ceph Datastores
The first step to create a Ceph datastore is to set up a template file for it. The following table lists the
supported configuration attributes. The datastore type is set by its drivers; in this case be sure to add DS_MAD=ceph
and TM_MAD=ceph for the transfer mechanism, see below.
Attribute                   Description
NAME                        The name of the datastore.
DS_MAD                      The DS type; use ceph for the Ceph datastore.
TM_MAD                      Transfer drivers for the datastore; use ceph, see below.
DISK_TYPE                   The type must be RBD.
BRIDGE_LIST                 Mandatory space-separated list of Ceph servers that are going to be used as
                            frontends.
POOL_NAME                   The OpenNebula Ceph pool name. Defaults to one. This pool must exist before
                            using the drivers.
STAGING_DIR                 Default path for image operations in the OpenNebula Ceph frontend.
RESTRICTED_DIRS             Paths that cannot be used to register images. A space-separated list of paths.
SAFE_DIRS                   If you need to un-block a directory under one of the RESTRICTED_DIRS. A
                            space-separated list of paths.
NO_DECOMPRESS               Do not try to untar or decompress the file to be registered. Useful for
                            specialized Transfer Managers.
LIMIT_TRANSFER_BW           Maximum transfer rate in bytes/second when downloading images from an
                            http/https URL. Suffixes K, M or G can be used.
DATASTORE_CAPACITY_CHECK    If yes, the available capacity of the datastore is checked before creating a
                            new image.
CEPH_HOST                   Space-separated list of Ceph monitors. Example: host1 host2:port2 host3
                            host4:port4 (if no port is specified, the default one is chosen). Required for
                            libvirt 1.x when cephx is enabled.
CEPH_SECRET                 A generated UUID for a libvirt secret (to hold the CephX authentication key in
                            libvirt on each hypervisor). This should be generated when creating the Ceph
                            datastore in OpenNebula. Required for libvirt 1.x when cephx is enabled.
RBD_FORMAT                  By default RBD Format 1 will be used, with no snapshotting support. If
                            RBD_FORMAT=2 is specified, then when instantiating non-persistent images the
                            Ceph driver will perform rbd snap instead of rbd copy.
Warning: RESTRICTED_DIRS will prevent users from registering important files as VM images and accessing them
through their VMs. OpenNebula will automatically add its configuration directories: /var/lib/one, /etc/one and
oneadmin's home. If users try to register an image from a restricted directory, they will get the following error
message: "Not allowed to copy image file".
For example, the following illustrates the creation of a Ceph datastore using a configuration file. In this
case we will use the host cephfrontend as one of the OpenNebula Ceph frontends. The one pool must already exist;
if it doesn't, create it with:
> ceph osd pool create one 128
> ceph osd lspools
0 data,1 metadata,2 rbd,6 one,
An example of a datastore definition:
> cat ds.conf
NAME = "cephds"
DS_MAD = ceph
TM_MAD = ceph

# the following line *must* be present
DISK_TYPE = RBD

POOL_NAME = one
BRIDGE_LIST = cephfrontend
> onedatastore create ds.conf
ID: 101
> onedatastore list
ID NAME CLUSTER IMAGES TYPE TM
0 system none 0 fs shared
1 default none 3 fs shared
100 cephds none 0 ceph ceph
The DS and TM MAD can be changed later using the onedatastore update command. You can check more
details of the datastore by issuing the onedatastore show command.
Warning: Note that datastores are not associated to any cluster by default, and they are supposed to be accessible
by every single host. If you need to configure datastores for just a subset of the hosts, take a look at the Cluster
guide.
After creating a new datastore, the LN_TARGET and CLONE_TARGET parameters will be added to the template.
These values should not be changed, since they define the datastore behaviour. The default values for these parameters
are defined in oned.conf for each driver.
2.8.3 Using Datablocks with Ceph
It is worth noting that when creating a datablock, creating a RAW image is very fast whereas creating a formatted block device takes longer. If you want to use a RAW image remember to use the following attribute/option when creating the Image datablock: FS_TYPE = RAW.
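For instance, a minimal image template along these lines (name, size and datastore name are illustrative) would register a 1 GB RAW datablock in the Ceph datastore:
> cat datablock.one
NAME = "ceph-datablock"
TYPE = DATABLOCK
FS_TYPE = RAW
SIZE = 1024
> oneimage create datablock.one -d cephds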
2.8.4 Ceph Authentication (Cephx)
If Cephx is enabled, there are some special considerations the OpenNebula administrator must take into account.
Create a Ceph user for the OpenNebula hosts. We will use the name client.libvirt, but any other name is fine.
Create the user in Ceph and grant it rwx permissions on the one pool:
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=one'
Extract the client.libvirt key, save it to a file named client.libvirt.key and distribute it to all the KVM hosts:
sudo ceph auth list
# save client.libvirt's key to client.libvirt.key
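Alternatively, assuming a reasonably recent Ceph release, the key can be extracted directly with:
sudo ceph auth get-key client.libvirt > client.libvirt.key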
Generate a UUID, for example by running uuidgen (the generated UUID will be referenced as %UUID% from now onwards).
Create a file named secret.xml (using the generated %UUID%) and distribute it to all the KVM hosts:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>%UUID%</uuid>
<usage type='ceph'>
<name>client.libvirt secret</name>
</usage>
</secret>
EOF
The following commands must be executed in all the KVM hosts as oneadmin (assuming the secret.xml and client.libvirt.key files have been distributed to the hosts):
# Replace %UUID% with the value generated in the previous step
virsh secret-set-value --secret %UUID% --base64 $(cat client.libvirt.key)
Finally, the Ceph datastore must be updated to add the following values:
CEPH_USER="libvirt"
CEPH_SECRET="%UUID%"
CEPH_HOST="<list of ceph mon hosts, see table above>"
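For example, assuming the datastore created above got ID 100, these attributes can be appended by editing its template:
> onedatastore update 100
This opens the current template in your $EDITOR, where the three CEPH_* values can be added.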
You can read more information about this in the Ceph guide Using libvirt with Ceph.
2.8.5 Using the Ceph Transfer Driver
The workflow for Ceph images is similar to the other datastores, which means that a user will create an image inside the Ceph datastores by providing a path to the image file locally available in the OpenNebula frontend, or to an HTTP URL, and the driver will convert it to a Ceph block device.
All the usual operations are available: oneimage create, oneimage delete, oneimage clone, oneimage persistent, oneimage nonpersistent, onevm disk-snapshot, etc.
2.8.6 Tuning & Extending
File Location
System administrators and integrators are encouraged to modify these drivers in order to integrate them with their
datacenter:
Under /var/lib/one/remotes/:
datastore/ceph/ceph.conf: Default values for Ceph parameters
HOST: Default OpenNebula Ceph frontend
POOL_NAME: Default Ceph pool
STAGING_DIR: Default path for image operations in the OpenNebula Ceph frontend.
datastore/ceph/cp: Registers a new image. Creates a new RBD image in the Ceph pool.
datastore/ceph/mkfs: Makes a new empty image. Creates a new RBD image in the Ceph pool.
datastore/ceph/rm: Removes the Ceph RBD image.
tm/ceph/ln: Does nothing, since it's handled by libvirt.
tm/ceph/clone: Copies the image to a new image.
tm/ceph/mvds: Saves the image in a Ceph block device for SAVE_AS.
tm/ceph/delete: Removes a non-persistent image from the Virtual Machine directory if it hasn't been subject to a disk-snapshot operation.
Using SSH System Datastore
Another option would be to manually patch the post and pre-migrate scripts for the ssh system datastore to scp the files residing in the system datastore before the live-migration. Read more.
2.9 The GlusterFS Datastore
The GlusterFS driver allows KVM machines to access VM images using the native GlusterFS API. This datastore uses the Shared Transfer Manager and the Filesystem Datastore to access a Gluster FUSE filesystem to manage images.
Warning: This driver only works with libvirt/KVM drivers. Xen is not (yet) supported.
Warning: All virtualization nodes and the head need to mount the GlusterFS volume used to store images.
Warning: The hypervisor nodes need to be part of a working GlusterFS cluster and the Libvirt and QEMU
packages need to be recent enough to have support for GlusterFS.
2.9.1 Requirements
GlusterFS Volume Configuration
OpenNebula does not run as the root user, so user access to the native GlusterFS API must be allowed. This can be achieved by adding this line to /etc/glusterfs/glusterd.vol:
option rpc-auth-allow-insecure on
and executing this command (replace <volume> with your volume name):
# gluster volume set <volume> server.allow-insecure on
As stated in the Libvirt documentation, it will be useful to set the owner-uid and owner-gid to the ones used by the oneadmin user and group:
# gluster volume set <volume> storage.owner-uid=<oneadmin uid>
# gluster volume set <volume> storage.owner-gid=<oneadmin gid>
Datastore Mount
The GlusterFS volume must be mounted in all the virtualization nodes and the head node using a FUSE mount. This mount will be used to manage images and VM related files (images and checkpoints). The oneadmin account must have write permissions on the mounted filesystem and it must be accessible by both the system and image datastores.
The recommended way of setting the mount points is to mount the gluster volume in a specific path and to symlink the datastore directories:
# mkdir -p /var/lib/one/datastores/0
# mount -t glusterfs server:/volume /var/lib/one/datastores/0
# chown oneadmin:oneadmin /var/lib/one/datastores/0
# ln -s /var/lib/one/datastores/0 /var/lib/one/datastores/1
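To make this mount persistent across reboots, an /etc/fstab entry along these lines may be used (server and volume names are illustrative):
server:/volume /var/lib/one/datastores/0 glusterfs defaults,_netdev 0 0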
2.9.2 Configuration
Configuring the System Datastore
The system datastore must be of type shared. See more details on the System Datastore Guide.
It will also be used to hold context images and volatile disks.
Configuring the GlusterFS Datastore
The datastore that holds the images will also be of type shared, but you will need to add the parameters DISK_TYPE, GLUSTER_HOST and GLUSTER_VOLUME described in this table.
Attribute Description
NAME The name of the datastore
DS_MAD The DS type, use shared for the Gluster datastore
TM_MAD Transfer drivers for the datastore, use shared, see below
DISK_TYPE The type must be GLUSTER
RESTRICTED_DIRS Paths that can not be used to register images. A space separated list of paths.
SAFE_DIRS If you need to un-block a directory under one of the RESTRICTED_DIRS. A space separated list of paths.
NO_DECOMPRESS Do not try to untar or decompress the file to be registered. Useful for specialized Transfer Managers.
GLUSTER_HOST Host and port of one (only one) Gluster server host:port
GLUSTER_VOLUME Gluster volume to use for the datastore
An example of a datastore:
> cat ds.conf
NAME = "glusterds"
DS_MAD = shared
TM_MAD = shared
# the following line *must* be present
DISK_TYPE = GLUSTER
GLUSTER_HOST = gluster_server:24007
GLUSTER_VOLUME = one_vol
CLONE_TARGET="SYSTEM"
LN_TARGET="NONE"
> onedatastore create ds.conf
ID: 101
> onedatastore list
ID NAME SIZE AVAIL CLUSTER IMAGES TYPE DS TM
0 system 9.9G 98% - 0 sys - shared
1 default 9.9G 98% - 2 img shared shared
2 files 12.3G 66% - 0 fil fs ssh
101 glusterds 9.9G 98% - 0 img shared shared
Warning: It is recommended to group the Gluster datastore and the Gluster enabled hypervisors in an OpenNebula cluster (see the Cluster guide).
2.10 The Kernels & Files Datastore
The Files Datastore lets you store plain files to be used as VM kernels, ramdisks or context files. The Files Datastore does not expose any special storage mechanism but a simple and secure way to use files within VM templates. There is a Files Datastore (datastore ID: 2) ready to be used in OpenNebula.
2.10.1 Requirements
There are no special requirements or software dependencies to use the Files Datastore. The recommended drivers make use of standard filesystem utils (cp, ln, mv, tar, mkfs...) that should be installed in your system.
2.10.2 Configuration
Most of the configuration considerations used for disk images datastores do apply to the Files Datastore (e.g. driver setup, cluster assignment, datastore management...). However, given the special nature of the Files Datastore most of these attributes can be fixed as summarized in the following table:
Attribute Description
NAME The name of the datastore
TYPE Use FILE_DS to set up a Files datastore
DS_MAD The DS type, use fs to use the file-based drivers
TM_MAD Transfer drivers for the datastore, use ssh to transfer the files
RESTRICTED_DIRS Paths that can not be used to register images. A space separated list of paths.
SAFE_DIRS If you need to un-block a directory under one of the RESTRICTED_DIRS. A space separated list of paths.
LIMIT_TRANSFER_BW Specify the maximum transfer rate in bytes/second when downloading images from an http/https URL. Suffixes K, M or G can be used.
DATASTORE_CAPACITY_CHECK If yes, the available capacity of the datastore is checked before creating a new image
Warning: This will prevent users from registering important files as VM images and accessing them through their VMs. OpenNebula will automatically add its configuration directories: /var/lib/one, /etc/one and oneadmin's home. If users try to register an image from a restricted directory, they will get the following error message: "Not allowed to copy image file".
For example, the following illustrates the creation of a Files Datastore:
> cat kernels_ds.conf
NAME = kernels
DS_MAD = fs
TM_MAD = ssh
TYPE = FILE_DS
SAFE_DIRS = /var/tmp/files
> onedatastore create kernels_ds.conf
ID: 100
> onedatastore list
ID NAME CLUSTER IMAGES TYPE DS TM
0 system - 0 sys - dummy
1 default - 0 img dummy dummy
2 files - 0 fil fs ssh
100 kernels - 0 fil fs ssh
The DS and TM MAD can be changed later using the onedatastore update command. You can check more
details of the datastore by issuing the onedatastore show command.
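Once the datastore is in place, plain files can be registered into it like any other image; a minimal sketch (name and path are illustrative):
> oneimage create --name vmlinuz-3.2 --path /boot/vmlinuz-3.2 --type KERNEL -d kernels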
2.10.3 Host Configuration
The recommended ssh driver for the Files Datastore does not need any special configuration for the hosts. Just make sure that there is enough space under $DATASTORE_LOCATION to hold the VM files in the front-end and hosts.
For more details refer to the Filesystem Datastore guide, as the same configuration guidelines apply.
CHAPTER
THREE
VIRTUALIZATION
3.1 Virtualization Overview
The Virtualization Subsystem is the component in charge of talking with the hypervisor installed in the hosts and taking the actions needed for each step in the VM lifecycle.
Configuration options and specific information for each hypervisor can be found in these guides:
Xen Driver
KVM Driver
VMware Driver
3.1.1 Common Configuration Options
Drivers accept a series of parameters that control their execution. The parameters allowed are:
parameter description
-r <num> number of retries when executing an action
-t <num> number of threads, i.e. number of actions done at the same time
-l <actions> actions executed locally
See the Virtual Machine drivers reference for more information about these parameters, and how to customize and
extend the drivers.
3.1.2 Hypervisor Configuration
A feature supported by both KVM and Xen hypervisor drivers is selecting the timeout for VM shutdown. This feature is useful when a VM gets stuck in Shutdown (or simply does not notice the shutdown command). By default, after the timeout the VM will return to the Running state, but it can also be configured so the VM is destroyed after the grace time. This is configured in both /var/lib/one/remotes/vmm/xen/xenrc and /var/lib/one/remotes/vmm/kvm/kvmrc:
# Seconds to wait after shutdown until timeout
export SHUTDOWN_TIMEOUT=300
# Uncomment this line to force VM cancellation after shutdown timeout
#export FORCE_DESTROY=yes
3.2 Xen Driver
The Xen hypervisor offers a powerful, efficient and secure feature set for virtualization of x86, IA64, PowerPC and other CPU architectures. It delivers both paravirtualization and full virtualization. This guide describes the use of Xen with OpenNebula; please refer to the Xen specific documentation for further information on the setup of the Xen hypervisor itself.
3.2.1 Requirements
The Hosts must have a working installation of Xen that includes a Xen aware kernel running in Dom0 and the Xen
utilities.
3.2.2 Considerations & Limitations
Xen HVM currently only supports 4 IDE devices; for more disk devices you should use SCSI. You have to take this into account when adding disks. See the Virtual Machine Template documentation for an explanation of how OpenNebula assigns disk targets.
OpenNebula manages kernel and initrd files. You are encouraged to register them in the files datastore.
To modify the default disk driver to one that works with your Xen version you can change the files /etc/one/vmm_exec/vmm_exec_xen*.conf and /var/lib/one/remotes/vmm/xen*/xenrc. Make sure that you have the blktap2 modules loaded to use tap2:tapdisk:aio:
export IMAGE_PREFIX="tap2:tapdisk:aio"
DISK = [ driver = "tap2:tapdisk:aio:" ]
If the target device is not supported by the Linux kernel you will be able to attach disks but not detach them. It is recommended to use xvd devices for Xen paravirtualized hosts.
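For example, a disk can be forced onto an xvd target directly in the VM template (the image ID is illustrative):
DISK = [ IMAGE_ID = 7, TARGET = "xvda" ]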
3.2.3 Configuration
Xen Configuration
In each Host you must perform the following steps to get the driver running:
The remote hosts must have the xend daemon running (/etc/init.d/xend) and a XEN aware kernel running
in Dom0
The <oneadmin> user may need to execute Xen commands using root privileges. This can be done by adding these two lines to the sudoers file of the hosts so the <oneadmin> user can execute Xen commands as root (change paths to suit your installation):
%xen ALL=(ALL) NOPASSWD: /usr/sbin/xm *
%xen ALL=(ALL) NOPASSWD: /usr/sbin/xentop *
You may also want to configure the network for the virtual machines. OpenNebula assumes that the VMs have network access through standard bridging; please refer to the Xen documentation to configure the network for your site.
Some distributions have the requiretty option enabled in the sudoers file. It must be disabled so ONE can execute commands using sudo. The line to remove or comment out (by placing a # at the beginning of the line) is this one:
#Defaults requiretty
OpenNebula Configuration
OpenNebula needs to know if it is going to use the Xen Driver. There are two sets of Xen VMM drivers, one for Xen version 3.x and another for 4.x; you will have to uncomment the version you need. To achieve this for Xen version 4.x, uncomment these drivers in /etc/one/oned.conf:
IM_MAD = [
name = "xen",
executable = "one_im_ssh",
arguments = "xen" ]
VM_MAD = [
name = "xen",
executable = "one_vmm_exec",
arguments = "xen4",
default = "vmm_exec/vmm_exec_xen4.conf",
type = "xen" ]
The xen4 drivers are meant to be used with Xen >= 4.2 with the xl/xenlight interface. You will need to stop the xend daemon for this. For Xen 3.x and Xen 4.x with xm (with the xend daemon) you will need to use the xen3 drivers.
Warning: When using the xen3 drivers for Xen 4.x you should change the configuration file /var/lib/one/remotes/vmm/xen3/xenrc and uncomment the XM_CREDITS line.
3.2.4 Usage
The following are template attributes specific to Xen; please refer to the template reference documentation for a complete list of the attributes supported to define a VM.
XEN Specific Attributes
DISK
driver, This attribute defines the Xen backend for disk images; possible values are file:, tap:aio:... Note the trailing :.
NIC
model, This attribute defines the type of the vif. This corresponds to the type attribute of a vif; possible values are ioemu, netfront...
ip, This attribute defines the ip of the vif and can be used to set antispoofing rules. For example if you want to use antispoofing with network-bridge, you will have to add this line to /etc/xen/xend-config.sxp:
(network-script network-bridge antispoofing=yes)
3.2. Xen Driver 51
OpenNebula 4.6 Administration Guide, Release 4.6
OS
bootloader, You can use this attribute to point to your pygrub loader. This way you won't need to specify the kernel/initrd and it will use the internal one. Make sure the kernel inside is domU compatible if using paravirtualization.
When no kernel/initrd or bootloader attributes are set then an HVM machine is created.
CONTEXT
driver, for the CONTEXT device, e.g. file:, phy:...
Additional Attributes
The raw attribute offers the end user the possibility of passing attributes not known by OpenNebula to Xen. Basically, everything placed here will be written literally into the Xen deployment file.
RAW = [ type="xen", data="on_crash=destroy" ]
3.2.5 Tuning & Extending
The driver consists of the following files:
/usr/lib/one/mads/one_vmm_exec : generic VMM driver.
/var/lib/one/remotes/vmm/xen : commands executed to perform actions.
And the following driver configuration files:
/etc/one/vmm_exec/vmm_exec_xen3/4.conf : This file is home for default values for domain definitions (in other words, OpenNebula templates). Let's go for a more concrete and VM related example. If the user wants to set a default value for KERNEL for all of their Xen domain definitions, simply edit the vmm_exec_xen.conf file and set a
OS = [ kernel="/vmlinuz" ]
into it. Now, when defining a ONE template to be sent to a Xen resource, the user can omit the KERNEL parameter, in which case it will default to /vmlinuz.
It is generally a good idea to place defaults for the Xen-specific attributes, that is, attributes mandatory for the Xen hypervisor that are not mandatory for other hypervisors. Non mandatory attributes for Xen but specific to it are also recommended to have a default.
/var/lib/one/remotes/vmm/xen/xenrc : This file contains environment variables for the driver. You may need to tune the value of XM_PATH if /usr/sbin/xm does not live in its default location in the remote hosts. This file can also hold instructions to be executed before the actual driver load to perform specific tasks or to pass environment variables to the driver. The syntax used for the former is plain shell script that will be evaluated before the driver execution. For the latter, the syntax is the familiar:
ENVIRONMENT_VARIABLE=VALUE
Parameter Description
IMAGE_PREFIX This will be used as the default handler for disk hot plug
SHUTDOWN_TIMEOUT Seconds to wait after shutdown until timeout
FORCE_DESTROY Force VM cancellation after shutdown timeout
See the Virtual Machine drivers reference for more information.
3.2.6 Credit Scheduler
Xen comes with a credit scheduler. The credit scheduler is a proportional fair share CPU scheduler built from the ground up to be work conserving on SMP hosts. The CREDIT attribute sets a 16 bit value that represents the amount of sharing this VM will have with respect to the others living in the same host. This value is set in the driver configuration file; it is not intended to be defined per domain.
Xen drivers come preconfigured to use this credit scheduler and use the scale 1 OpenNebula CPU = 256 Xen scheduler credits. A VM created with CPU=2.0 will have 512 Xen scheduler credits. If you need to change this scaling parameter it can be configured in /etc/one/vmm_exec/vmm_exec_xen[3/4].conf. The variable is called CREDIT.
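For instance, to make one OpenNebula CPU correspond to 512 credits instead, a sketch of the change in the Xen 4.x configuration file would be:
# /etc/one/vmm_exec/vmm_exec_xen4.conf
CREDIT = 512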
3.3 KVM Driver
KVM (Kernel-based Virtual Machine) is a complete virtualization technology for Linux. It offers full virtualization, where each Virtual Machine interacts with its own virtualized hardware. This guide describes the use of the KVM virtualizer with OpenNebula; please refer to the KVM specific documentation for further information on the setup of the KVM hypervisor itself.
3.3.1 Requirements
The hosts must have a working installation of KVM, that usually requires:
CPU with VT extensions
libvirt >= 0.4.0
kvm kernel modules (kvm.ko, kvm-{intel,amd}.ko). Available from kernel 2.6.20 onwards.
the qemu user-land tools
3.3.2 Considerations & Limitations
KVM currently only supports 4 IDE devices; for more disk devices you should use SCSI or virtio. You have to take this into account when adding disks. See the Virtual Machine Template documentation for an explanation of how OpenNebula assigns disk targets.
By default live migrations are started from the host the VM is currently running on. If this is a problem in your setup you can activate local live migration by adding -l migrate=migrate_local to the vmm_mad arguments.
If you get error messages similar to error: cannot close file: Bad file descriptor, upgrade your libvirt version. Version 0.8.7 has a bug related to file closing operations.
In case you are using disks with a cache setting different to none you may have problems with live migration depending on the libvirt version. You can enable the migration by adding the --unsafe parameter to the virsh command. The file to change is /var/lib/one/remotes/vmm/kvm/migrate; change this:
exec_and_log "virsh --connect $LIBVIRT_URI migrate --live $deploy_id $QEMU_PROTOCOL://$dest_host/system" \
"Could not migrate $deploy_id to $dest_host"
to this:
exec_and_log "virsh --connect $LIBVIRT_URI migrate --live --unsafe $deploy_id $QEMU_PROTOCOL://$dest_host/system" \
"Could not migrate $deploy_id to $dest_host"
and execute onehost sync --force.
3.3.3 Configuration
KVM Configuration
OpenNebula uses the libvirt interface to interact with KVM, so the following steps are required in the hosts to get the KVM driver running:
Qemu should be configured to not change file ownership. Modify /etc/libvirt/qemu.conf to include dynamic_ownership = 0. To be able to use the images copied by OpenNebula, also change the user and group under which the libvirtd is run to oneadmin:
$ grep -vE '^($|#)' /etc/libvirt/qemu.conf
user = "oneadmin"
group = "oneadmin"
dynamic_ownership = 0
Warning: Note that oneadmin's group may be other than oneadmin. Some distributions add oneadmin to the cloud group. Use group = "cloud" above in that case.
The remote hosts must have the libvirt daemon running.
The user with access to these remote hosts on behalf of OpenNebula (typically <oneadmin>) has to belong to the <libvirtd> and <kvm> groups in order to use the daemon and be able to launch VMs.
Warning: If AppArmor is active (by default in Ubuntu it is), you should add /var/lib/one to the end of /etc/apparmor.d/libvirt-qemu:
owner /var/lib/one/** rw,
Warning: If your distro is using PolicyKit you can use this recipe by Jan Horacek to add the required privileges to the oneadmin user:
# content of file: /etc/polkit-1/localauthority/50-local.d/50-org.libvirt.unix.manage-opennebula.pkla
[Allow oneadmin user to manage virtual machines]
Identity=unix-user:oneadmin
Action=org.libvirt.unix.manage
#Action=org.libvirt.unix.monitor
ResultAny=yes
ResultInactive=yes
ResultActive=yes
OpenNebula uses libvirt's migration capabilities. More precisely, it uses the TCP protocol offered by libvirt. In order to configure the physical hosts, the following files have to be modified:
/etc/libvirt/libvirtd.conf : Uncomment listen_tcp = 1. Security configuration is left to the admin's choice; the file is full of useful comments to achieve a correct configuration. As a tip, if you don't want to use TLS for connections set listen_tls = 0.
Add the listen option to libvirt init script:
/etc/default/libvirt-bin : add -l option to libvirtd_opts
For RHEL based distributions, edit this file instead: /etc/sysconfig/libvirtd : uncomment
LIBVIRTD_ARGS="--listen"
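Putting the libvirtd settings together, a minimal TCP-only configuration (no TLS, so only suitable for trusted networks) could look like:
# /etc/libvirt/libvirtd.conf
listen_tcp = 1
listen_tls = 0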
OpenNebula Configuration
OpenNebula needs to know if it is going to use the KVM Driver. To achieve this, uncomment these drivers in /etc/one/oned.conf:
IM_MAD = [
name = "kvm",
executable = "one_im_ssh",
arguments = "-r 0 -t 15 kvm" ]
VM_MAD = [
name = "kvm",
executable = "one_vmm_exec",
arguments = "-t 15 -r 0 kvm",
default = "vmm_exec/vmm_exec_kvm.conf",
type = "kvm" ]
Working with cgroups (Optional)
Warning: This section outlines the configuration and use of cgroups with OpenNebula and libvirt/KVM. Please refer to the cgroups documentation of your Linux distribution for specific details.
Cgroups is a kernel feature that allows you to control the amount of resources allocated to a given process (among other things). This feature can be used to enforce the amount of CPU assigned to a VM, as defined in its template. So, thanks to cgroups, a VM with CPU=0.5 will get half the physical CPU cycles of a VM with CPU=1.0.
Cgroups can also be used to limit the overall amount of physical RAM that the VMs can use, so you can always leave a fraction for the host OS.
The following outlines the steps needed to configure cgroups; this should be performed in the hosts, not in the front-end:
Define where to mount the cgroup controller virtual filesystems; at least memory and cpu are needed.
(Optional) You may want to limit the total memory devoted to VMs. Create a group for the libvirt processes (VMs) and the total memory you want to assign to them. Be sure to assign libvirt processes to this group, e.g. with CGROUP_DAEMON or in cgrules.conf. Example:
#/etc/cgconfig.conf
group virt {
memory {
memory.limit_in_bytes = 5120M;
}
}
mount {
cpu = /mnt/cgroups/cpu;
memory = /mnt/cgroups/memory;
}
# /etc/cgrules.conf
*:libvirtd       memory          virt/
After configuring the hosts start/restart the cgroups service.
(Optional) If you have limited the amount of memory for VMs, you may want to set the RESERVED_MEM parameter in host or cluster templates.
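For example, to always keep roughly 1 GB for the host OS you could add to the host template (via onehost update) something like the following; note that memory values are expressed in KB:
RESERVED_MEM = 1048576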
That's it. OpenNebula automatically generates a number of CPU shares proportional to the CPU attribute in the VM template. For example, consider a host running 2 VMs (73 and 74, with CPU=0.5 and CPU=1, respectively). If everything is properly configured you should see:
/mnt/cgroups/cpu/sysdefault/libvirt/qemu/
|-- cgroup.event_control
...
|-- cpu.shares
|-- cpu.stat
|-- notify_on_release
|-- one-73
| |-- cgroup.clone_children
| |-- cgroup.event_control
| |-- cgroup.procs
| |-- cpu.shares
| ...
| -- vcpu0
| |-- cgroup.clone_children
| ...
|-- one-74
| |-- cgroup.clone_children
| |-- cgroup.event_control
| |-- cgroup.procs
| |-- cpu.shares
| ...
| -- vcpu0
| |-- cgroup.clone_children
| ...
-- tasks
and the cpu shares for each VM:
> cat /mnt/cgroups/cpu/sysdefault/libvirt/qemu/one-73/cpu.shares
512
> cat /mnt/cgroups/cpu/sysdefault/libvirt/qemu/one-74/cpu.shares
1024
Udev Rules
When creating VMs as a regular user, /dev/kvm needs to be chowned to the oneadmin user. For that to be persistent you have to apply the following udev rule:
# cat /etc/udev/rules.d/60-qemu-kvm.rules
KERNEL=="kvm", GROUP="oneadmin", MODE="0660"
3.3.4 Usage
The following are template attributes specific to KVM; please refer to the template reference documentation for a complete list of the attributes supported to define a VM.
Default Attributes
There are some attributes required for KVM to boot a VM. You can set suitable defaults for them so all the VMs get the needed values. These attributes are set in /etc/one/vmm_exec/vmm_exec_kvm.conf. The following can be set for KVM:
emulator, path to the kvm executable. You may need to adjust it to your distro.
os, the attributes: kernel, initrd, boot, root, kernel_cmd, and arch
vcpu
features, attributes: acpi, pae
disk, attributes driver and cache. All disks will use that driver and caching algorithm
nic, attribute filter.
raw, to add libvirt attributes to the domain XML file.
For example:
OS = [
KERNEL = /vmlinuz,
BOOT = hd,
ARCH = "x86_64"]
DISK = [ driver = "raw" , cache = "none"]
NIC = [ filter = "clean-traffic", model = "virtio" ]
RAW = "<devices><serial type=\"pty\"><source path=\"/dev/pts/5\"/><target port=\"0\"/></serial><console type=\"pty\" tty=\"/dev/pts/5\"><source path=\"/dev/pts/5\"/><target port=\"0\"/></console></devices>"
KVM Specific Attributes
DISK
type, This attribute defines the type of the media to be exposed to the VM; possible values are: disk (default), cdrom or floppy. This attribute corresponds to the media option of the -driver argument of the kvm command.
driver, specifies the format of the disk image; possible values are raw, qcow2... This attribute corresponds to the format option of the -driver argument of the kvm command.
cache, specifies the optional cache mechanism; possible values are default, none, writethrough and writeback.
io, sets the IO policy; possible values are threads and native
NIC
target, name for the tun device created for the VM. It corresponds to the ifname option of the -net argument
of the kvm command.
script, name of a shell script to be executed after creating the tun device for the VM. It corresponds to the
script option of the -net argument of the kvm command.
model, ethernet hardware to emulate. You can get the list of available models with this command:
$ kvm -net nic,model=? -nographic /dev/null
filter, to define a network filtering rule for the interface. Libvirt includes some predefined rules (e.g. clean-traffic) that can be used. Check the Libvirt documentation for more information; you can also list the rules in your system with:
$ virsh -c qemu:///system nwfilter-list
Graphics
If properly configured, libvirt and KVM can work with SPICE (check this for more information). To select it, just add to the GRAPHICS attribute:
type = spice
Enabling SPICE will also make the driver inject specific configuration for these machines. The configuration can be changed in the driver configuration file, variable SPICE_OPTIONS.
Virtio
Virtio is the framework for IO virtualization in KVM. You will need a Linux kernel with the virtio drivers for the guest; check the KVM documentation for more info.
If you want to use the virtio drivers add the following attributes to your devices:
DISK, add the attribute DEV_PREFIX=vd
NIC, add the attribute model=virtio
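A template fragment using both could look like this (image and network names are illustrative):
DISK = [ IMAGE = "my_image", DEV_PREFIX = "vd" ]
NIC = [ NETWORK = "public", MODEL = "virtio" ]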
Additional Attributes
The raw attribute offers the end user the possibility of passing attributes not known by OpenNebula to KVM. Basically, everything placed here will be written literally into the KVM deployment file (use libvirt XML format and semantics).
RAW = [ type = "kvm",
data = "<devices><serial type=\"pty\"><source path=\"/dev/pts/5\"/><target port=\"0\"/></serial><console type=\"pty\" tty=\"/dev/pts/5\"><source path=\"/dev/pts/5\"/><target port=\"0\"/></console></devices>" ]
Disk/Nic Hotplugging
KVM supports hotplugging to the virtio and the SCSI buses. For disks, the bus the disk will be attached to is
inferred from the DEV_PREFIX attribute of the disk template.
sd: SCSI (default).
vd: virtio.
If TARGET is passed instead of DEV_PREFIX the same rules apply (what happens behind the scenes is that OpenNebula generates a TARGET based on the DEV_PREFIX if no TARGET is provided).
The default cache type for newly attached disks is configured in /var/lib/one/remotes/vmm/kvm/kvmrc:
# This parameter will set the default cache type for new attached disks. It
# will be used in case the attached disk does not have a specific cache
# method set (can be set using templates when attaching a disk).
DEFAULT_ATTACH_CACHE=none
For Disks and NICs, if the guest OS is a Linux flavour, the guest needs to be explicitly told to rescan the PCI bus. This can be done by issuing the following command as root:
# echo 1 > /sys/bus/pci/rescan
3.3.5 Tuning & Extending
The driver consists of the following files:
/usr/lib/one/mads/one_vmm_exec : generic VMM driver.
/var/lib/one/remotes/vmm/kvm : commands executed to perform actions.
And the following driver configuration files:
/etc/one/vmm_exec/vmm_exec_kvm.conf : This file is home for default values for domain definitions (in other words, OpenNebula templates).
It is generally a good idea to place defaults for the KVM-specific attributes, that is, attributes mandatory in the KVM driver that are not mandatory for other hypervisors. Non mandatory attributes for KVM but specific to it are also recommended to have a default.
/var/lib/one/remotes/vmm/kvm/kvmrc : This file holds instructions to be executed before the actual driver load to perform specific tasks or to pass environment variables to the driver. The syntax used for the former is plain shell script that will be evaluated before the driver execution. For the latter, the syntax is the familiar:
ENVIRONMENT_VARIABLE=VALUE
The parameters that can be changed here are as follows:
Parameter Description
LIBVIRT_URI Connection string to libvirtd
QEMU_PROTOCOL Protocol used for live migrations
SHUTDOWN_TIMEOUT Seconds to wait after shutdown until timeout
FORCE_DESTROY Force VM cancellation after shutdown timeout
CANCEL_NO_ACPI Force VMs without ACPI enabled to be destroyed on shutdown
DEFAULT_ATTACH_CACHE Default cache type for newly attached disks. It will be used in case the attached disk does not have a specific cache method set (can be set using templates when attaching a disk).
See the Virtual Machine drivers reference for more information.
3.4 VMware Drivers
The VMware Drivers enable the management of an OpenNebula cloud based on VMware ESX and/or VMware
Server hypervisors. They use libvirt and direct API calls using RbVmomi to invoke the Virtual Infrastructure
SOAP API exposed by the VMware hypervisors, and feature a simple configuration process that will leverage the
stability, performance and feature set of any existing VMware based OpenNebula cloud.
3.4.1 Requirements
In order to use the VMware Drivers, some software dependencies have to be met:
libvirt: At the OpenNebula front-end, libvirt is used to access the VMware hypervisors, so it needs to be installed with ESX support. We recommend version 0.8.3 or higher, which enables interaction with the vCenter VMware product, required to use vMotion. This will be installed by the OpenNebula package.
rbvmomi: Also at the OpenNebula front-end, the rbvmomi gem needs to be installed. This will be installed by
the OpenNebula package or the install_gems script.
ESX, VMware Server: At least one VMware hypervisor needs to be installed. Further configuration for the DATASTORE is needed, and it is explained in the TM part of the Configuration section.
Optional Requirements. To enable some OpenNebula features you may need:
vMotion: VMware's vMotion capability allows live migration of a Virtual Machine between two ESX hosts, allowing for load balancing between cloud worker nodes without downtime in the migrated virtual machine. In order to use this capability, the following requisites have to be met:
Shared storage between the source and target host, mounted in both hosts as the same DATASTORE (we are going to assume it is called images in the rest of this document)
vCenter Server installed and configured, details in the Installation Guide for ESX and vCenter.
A datacenter created in the vCenter server that includes all ESX hosts between which Virtual Machines want to be live migrated (we are going to assume it is called onecenter in the rest of this document).
A user created in vCenter with the same username and password as the ones in the ESX hosts, with administrator permissions.
Warning: Please note that the libvirt version shipped with some Linux distributions does not include ESX support. In these cases it may be needed to recompile the libvirt package with the --with-esx option.
3.4.2 Considerations & Limitations
Only one vCenter can be used for live migration.
Datablock images and volatile disk images will always be created without format, and thus have to be formatted by the guest.
In order to use the attach/detach functionality, the original VM must have at least one SCSI disk, and the disk to be attached/detached must be placed on a SCSI bus (i.e., sd as DEV_PREFIX).
The ESX hosts need to be properly licensed, with write access to the exported API (as the Evaluation license
does).
3.4.3 VMware Configuration
Users & Groups
The creation of a user in the VMware hypervisor is recommended. Go to the Users & Groups tab in the VI Client, and create a new user (for instance, oneadmin) with the same UID and username as the oneadmin user executing OpenNebula in the front-end. Please remember to give full permissions to this user (Permissions tab).
Warning: After registering a datastore, make sure that the oneadmin user can write in said datastore (this is not
needed if the root user is used to access the ESX). In case oneadmin cannot write in /vmfs/volumes/<ds_id>,
then permissions need to be adjusted. This can be done in various ways, the recommended one being:
Add oneadmin to the root group using the Users & Group tab in the VI Client
$ chmod g+w /vmfs/volumes/<ds_id> in the ESX host
SSH Access
SSH access from the front-end to the ESX hosts is required (or, at least, it is needed to unlock all the functionality of OpenNebula). To ensure this, please remember to click the Grant shell access to this user checkbox when creating the oneadmin user.
The access via SSH needs to be passwordless. Please follow the next steps to configure the ESX node:
login to the esx host (ssh <esx-host>)
$ su -
$ mkdir /etc/ssh/keys-oneadmin
$ chmod 755 /etc/ssh/keys-oneadmin
$ su - oneadmin
$ vi /etc/ssh/keys-oneadmin/authorized_keys
<paste here the contents of oneadmin's front-end account public key (FE -> $HOME/.ssh/id_{rsa,dsa}.pub) and exit vi>
$ chmod 600 /etc/ssh/keys-oneadmin/authorized_keys
More information on passwordless ssh connections here.
Tools Setup
In order to enable all the functionality of the drivers, several short steps remain:
$ su
$ chmod +s /sbin/vmkfstools
In order to use the attach/detach functionality for VM disks, some extra conguration steps are needed in the
ESX hosts. For ESX > 5.0
$ su
$ chmod +s /bin/vim-cmd
In order to use the dynamic network mode for VM disks, some extra conguration steps are needed in the ESX
hosts. For ESX > 5.0
$ su
$ chmod +s /sbin/esxcfg-vswitch
Persistency
Persistency of the ESX filesystem has to be handled with care. Most ESX 5 files reside in an in-memory filesystem, meaning faster access and also non-persistency across reboots, which can be inconvenient at the time of managing an ESX farm for an OpenNebula cloud.
Here is a recipe to make the configuration needed for OpenNebula persistent across reboots. The changes need to be done as root.
# vi /etc/rc.local
## Add this at the bottom of the file
mkdir /etc/ssh/keys-oneadmin
cat > /etc/ssh/keys-oneadmin/authorized_keys << _SSH_KEYS_
ssh-rsa <really long string with oneadmin's ssh public key>
_SSH_KEYS_
chmod 600 /etc/ssh/keys-oneadmin/authorized_keys
chmod +s /sbin/vmkfstools /bin/vim-cmd
chmod 755 /etc/ssh/keys-oneadmin
chown oneadmin /etc/ssh/keys-oneadmin/authorized_keys
# /sbin/auto-backup.sh
This information was based on this blog post.
Storage
There are additional configuration steps regarding storage. Please refer to the VMware Storage Model guide for more
details.
Networking
Networking can be used in two different modes: pre-defined (to use pre-defined port groups) or dynamic (to dynamically create port groups and VLAN tagging). Please refer to the VMware Networking guide for more details.
VNC
In order to access running VMs through VNC, the ESX host needs to be configured beforehand, basically to allow VNC inbound connections via its firewall. To do so, please follow this guide.
3.4.4 OpenNebula Conguration
OpenNebula Daemon
In order to configure OpenNebula to work with the VMware drivers, the following sections need to be uncommented or added in the /etc/one/oned.conf file.
#-------------------------------------------------------------------------------
# VMware Virtualization Driver Manager Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
name = "vmware",
executable = "one_vmm_sh",
arguments = "-t 15 -r 0 vmware -s sh",
default = "vmm_exec/vmm_exec_vmware.conf",
type = "vmware" ]
#-------------------------------------------------------------------------------
# VMware Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
name = "vmware",
executable = "one_im_sh",
arguments = "-c -t 15 -r 0 vmware" ]
#-------------------------------------------------------------------------------
SCRIPTS_REMOTE_DIR=/tmp/one
VMware Drivers
The configuration attributes for the VMware drivers are set in the /etc/one/vmwarerc file. In particular the following values can be set:
Option Description
:libvirt_uri used to connect to VMware through libvirt. When using VMware Server, the connection string set under LIBVIRT_URI needs to have its prefix changed from esx to gsx
:username username to access the VMware hypervisor
:password password to access the VMware hypervisor
:datacenter (only for vMotion) name of the datacenter where the hosts have been registered.
:vcenter (only for vMotion) name or IP of the vCenter that manages the ESX hosts
Example of the configuration file:
:libvirt_uri: "esx://@HOST@/?no_verify=1&auto_answer=1"
:username: "oneadmin"
:password: "mypass"
:datacenter: "ha-datacenter"
:vcenter: "London-DC"
Warning: Please be aware that the above rc file, in stark contrast with other rc files in OpenNebula, uses YAML syntax, therefore please input the values between quotes.
VMware Physical Hosts
The physical hosts containing the VMware hypervisors need to be added with the appropriate VMware drivers. If the box running the VMware hypervisor is called, for instance, esx-host, the host would need to be registered with the following command (dynamic network mode):
$ onehost create esx-host -i vmware -v vmware -n vmware
or for pre-defined networking
$ onehost create esx-host -i vmware -v vmware -n dummy
3.4.5 Usage
Images
To register an existing VMware disk in an OpenNebula image catalog you need to:
Place all the .vmdk files that make up a disk (they can be easily spotted; there is a main <name-of-the-image>.vmdk file, and various <name-of-the-image>-sXXX.vmdk flat files) in the same directory, with no more files than these.
Afterwards, an image template needs to be written, using the absolute path to the directory as the PATH value. For example:
NAME = MyVMwareDisk
PATH =/absolute/path/to/disk/folder
TYPE = OS
Warning: To register a .iso file with type CDROM there is no need to create a folder; just point PATH to the absolute path of the .iso file.
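For instance (path illustrative):
NAME = MyCDROM
PATH = /absolute/path/to/image.iso
TYPE = CDROM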
Warning: In order to register a VMware disk through Sunstone, create a zip compressed tarball (.tar.gz) and upload that (it will be automatically decompressed in the datastore). Please note that the tarball must contain only the folder with the .vmdk files inside; no extra directories can be contained in that folder.
Once registered the image can be used as any other image in the OpenNebula system as described in the Virtual
Machine Images guide.
Datablocks & Volatile Disks
Datablock images and volatile disks will appear as raw devices on the guest, which will then need to be formatted. The FORMAT attribute is compulsory; possible values (more info on this here) are:
vmdk_thin
vmdk_zeroedthick
vmdk_eagerzeroedthick
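A sketch of a datablock template using the thin format (name and size are illustrative):
NAME = MyDataBlock
TYPE = DATABLOCK
SIZE = 1024
FORMAT = vmdk_thin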
Virtual Machines
The following attributes can be used for VMware Virtual Machines:
GuestOS: This parameter can be used in the OS section of the VM template. The os-identifier can be one of this list.
OS=[GUESTOS=<os-identifier>]
PCIBridge: This parameter can be used in the FEATURES section of the VM template. The <bridge-number>
is the number of PCI Bridges that will be available in the VM (that is, 0 means no PCI Bridges, 1 means PCI
Bridge with ID = 0 present, 2 means PCI Bridges with ID = 0,1 present, and so on).
FEATURES=[PCIBRIDGE=<bridge-number>]
3.4.6 Custom VMX Attributes
You can add metadata straight to the .vmx file using RAW/DATA_VMX. This comes in handy to specify, for example, a specific guestOS type; more info here.
Following the last two sections, if we want a VM of guestOS type Windows 7 server 64bit, with disks plugged into a LSI SAS SCSI bus, we can use a template like:
NAME = myVMwareVM
CPU = 1
MEMORY = 256
DISK = [IMAGE_ID="7"]
NIC = [NETWORK="public"]
RAW=[
DATA="<devices><controller type=scsi index=0 model=lsisas1068/></devices>",
DATA_VMX="pciBridge0.present = \"TRUE\"\npciBridge4.present = \"TRUE\"\npciBridge4.virtualDev = \"pcieRootPort\"\npciBridge4.functions = \"8\"\npciBridge5.present = \"TRUE\"\npciBridge5.virtualDev = \"pcieRootPort\"\npciBridge5.functions = \"8\"\npciBridge6.present = \"TRUE\"\npciBridge6.virtualDev = \"pcieRootPort\"\npciBridge6.functions = \"8\"\npciBridge7.present = \"TRUE\"\npciBridge7.virtualDev = \"pcieRootPort\"\npciBridge7.functions = \"8\"\nguestOS = \"windows7srv-64\"",
TYPE="vmware" ]
3.4.7 Tuning & Extending
The VMware Drivers consist of three drivers, with their corresponding files:
VMM Driver
/var/lib/one/remotes/vmm/vmware : commands executed to perform actions.
IM Driver
/var/lib/one/remotes/im/vmware.d : vmware IM probes.
TM Driver
/usr/lib/one/tm_commands : commands executed to perform transfer actions.
And the following driver configuration files:
VMM Driver
/etc/one/vmm_exec/vmm_exec_vmware.conf : This file is home for default values for domain definitions (in other words, OpenNebula templates). For example, if the user wants to set a default value for CPU requirements for all of their VMware domain definitions, simply edit the /etc/one/vmm_exec/vmm_exec_vmware.conf file and set a
CPU=0.6
into it. Now, when defining a template to be sent to a VMware resource, the user can omit the CPU requirement, in which case it will default to 0.6.
It is generally a good idea to place defaults for the VMware-specific attributes, that is, attributes mandatory for the VMware hypervisor that are not mandatory for other hypervisors. Non mandatory attributes for VMware but specific to it are also recommended to have a default.
TM Driver
/etc/one/tm_vmware/tm_vmware.conf : This file contains the scripts tied to the different actions that the TM driver can deliver. Here you can deactivate functionality like the DELETE action (this can be accomplished using the dummy tm driver, dummy/tm_dummy.sh) or change the default behavior.
More generic information about drivers:
Virtual Machine Manager drivers reference
Transfer Manager driver reference
CHAPTER
FOUR
NETWORKING
4.1 Networking Overview
Before diving into network configuration in OpenNebula make sure that you've followed the steps described in the Networking section of the Installation guide.
When a new Virtual Machine is launched, OpenNebula will connect its network interfaces (defined in the NIC section of the template) to the bridge or physical device specified in the Virtual Network definition. This will allow the VM to have access to different networks, public or private.
The OpenNebula administrator must take into account that although this is a powerful setup, it should be complemented with mechanisms to restrict network access only to the expected Virtual Machines, to avoid situations in which an OpenNebula user interacts with another user's VM. This functionality is provided through Virtual Network Manager drivers. The OpenNebula administrator may associate one of the following drivers to each Host, when the hosts are created with the onehost command:
dummy: Default driver that doesn't perform any network operation. Firewalling rules are also ignored.
fw: Firewall rules are applied, but networking isolation is ignored.
802.1Q: restricts network access through VLAN tagging, which also requires support from the hardware switches.
ebtables: restricts network access through Ebtables rules. No special hardware configuration required.
ovswitch: restricts network access with Open vSwitch Virtual Switch.
VMware: uses the VMware networking infrastructure to provide an isolated and 802.1Q compatible network for VMs launched with the VMware hypervisor.
Note that some of these drivers also create the bridging device in the hosts.
The administrator must take into account the following matrix that shows the compatibility of the hypervisors with
each networking driver:
Firewall Open vSwitch 802.1Q ebtables VMware
KVM Yes Yes Yes Yes No
Xen Yes Yes Yes Yes No
VMware No No No No Yes
The Virtual Network isolation is enabled with any of the 802.1Q, ebtables, vmware or ovswitch drivers. These drivers also enable the firewalling rules to allow a regular OpenNebula user to filter TCP, UDP or ICMP traffic.
OpenNebula also comes with a Virtual Router appliance that provides networking services like DHCP, DNS, etc.
4.1.1 Tuning & Extending
Customization of the Drivers
The network is dynamically configured in three different steps:
Pre: Right before the hypervisor launches the VM.
Post: Right after the hypervisor launches the VM.
Clean: Right after the hypervisor shuts down the VM.
Each driver executes different actions (or even none at all) in these phases depending on the underlying switching fabric. Note that, if either Pre or Post fail, the VM will be shut down and will be placed in a FAIL state.
You can easily customize the behavior of the drivers for your infrastructure by modifying the files located in /var/lib/one/remotes/vnm. Each driver has its own folder that contains at least three programs: pre, post and clean. These programs are executed to perform the steps described above.
Fixing Default Paths
The default paths for the binaries/executables used during the network configuration may change depending on the distro. OpenNebula ships with the most common paths; however, these may be wrong for your particular distro. In that case, please fix the paths in the COMMANDS hash of /var/lib/one/remotes/vnm/OpenNebulaNetwork.rb:
COMMANDS = {
:ebtables => "sudo /sbin/ebtables",
:iptables => "sudo /sbin/iptables",
:brctl => "sudo /sbin/brctl",
:ip => "sudo /sbin/ip",
:vconfig => "sudo /sbin/vconfig",
:virsh => "virsh -c qemu:///system",
:xm => "sudo /usr/sbin/xm",
:ovs_vsctl=> "sudo /usr/local/bin/ovs-vsctl",
:lsmod => "/sbin/lsmod"
}
4.2 802.1Q VLAN
This guide describes how to enable network isolation provided through host-managed VLANs. This driver will create a bridge for each OpenNebula Virtual Network and attach a VLAN tagged network interface to the bridge. This mechanism is compliant with IEEE 802.1Q.
The VLAN id will be the same for every interface in a given network, calculated by adding a constant to the network id. It may also be forced by specifying a VLAN_ID parameter in the Virtual Network template.
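For example, with the default constant of 2 (see Tuning & Extending below), the interfaces of a Virtual Network with ID 6 would be tagged with VLAN ID 8.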
4.2.1 Requirements
A network switch capable of forwarding VLAN tagged traffic. The physical switch ports should be VLAN trunks.
4.2.2 Considerations & Limitations
This driver requires some previous work on the network components, namely the switches, to enable VLAN trunking in the network interfaces connected to the OpenNebula hosts. If this is not activated the VLAN tags will not get through and the network will behave erratically.
In OpenNebula 3.0, this functionality was provided through a hook, and it wasn't effective after a migration. Since OpenNebula 3.2 this limitation does not apply.
4.2.3 Configuration
Hosts Configuration
The sudoers file must be configured so oneadmin can execute vconfig, brctl and ip in the hosts.
The package vconfig must be installed in the hosts.
Hosts must have the module 8021q loaded.
To enable VLAN (802.1Q) support in the kernel, one must load the 8021q module:
$ modprobe 8021q
If the module is not available, please refer to your distribution's documentation on how to install it. This module, along with the vconfig binary which is also required by the script, is generally supplied by the vlan package.
OpenNebula Configuration
To enable this driver, use 802.1Q as the Virtual Network Manager driver parameter when the hosts are created with
the onehost command:
$ onehost create host01 -i kvm -v kvm -n 802.1Q
Driver Actions
Action Description
Pre Creates a VLAN tagged interface in the Host and attaches it to a dynamically created bridge.
Post N/A
Clean It doesn't do anything. The VLAN tagged interface and bridge are kept in the Host to speed up future VMs
4.2.4 Usage
The driver will be automatically applied to every Virtual Machine deployed in the Host. However, this driver requires a special configuration in the Virtual Network template: only the virtual networks with the attribute VLAN set to YES will be isolated. The attribute PHYDEV must also be defined, with the name of the physical network device that will be attached to the bridge. The BRIDGE attribute is not mandatory; if it isn't defined, OpenNebula will generate one automatically.
NAME = "hmnet"
TYPE = "fixed"
PHYDEV = "eth0"
VLAN = "YES"
VLAN_ID = 50 # optional
BRIDGE = "brhm" # optional
LEASES = ...
In this scenario, the driver will check for the existence of the brhm bridge. If it doesn't exist it will be created. eth0 will be tagged (eth0.<vlan_id>) and attached to brhm (unless it's already attached).
Warning: Any user with Network creation/modification permissions may force a custom VLAN ID with the VLAN_ID parameter in the network template. In that scenario, any user may be able to connect to another network with the same network id. Techniques to avoid this are explained under the Tuning & Extending section.
4.2.5 Tuning & Extending
Warning: Remember that any change in the /var/lib/one/remotes directory won't be effective in the Hosts until you execute, as oneadmin:
oneadmin@frontend $ onehost sync
Calculating VLAN ID
The VLAN ID is calculated by adding the network id to a constant defined in /var/lib/one/remotes/vnm/OpenNebulaNetwork.rb. You can customize that value to your own needs:
CONF = {
:start_vlan => 2
}
Manually Restricting the VLAN ID
You can either restrict permissions on Network creation with ACL rules, or you can entirely disable the possibility to redefine the VLAN_ID by modifying the source code of /var/lib/one/remotes/vnm/802.1Q/HostManaged.rb. Change these lines:
if nic[:vlan_id]
vlan = nic[:vlan_id]
else
vlan = CONF[:start_vlan] + nic[:network_id].to_i
end
with this one:
vlan = CONF[:start_vlan] + nic[:network_id].to_i
4.3 Ebtables
This guide describes how to enable Network isolation provided through ebtables rules applied on the bridges. This
method will only permit isolation with a mask of 255.255.255.0.
4.3.1 Requirements
This hook requires ebtables to be available in all the OpenNebula Hosts.
4.3.2 Considerations & Limitations
Although this is the most easily usable driver, since it doesn't require any special hardware or software configuration, it lacks the ability to share IPs amongst different VNETs: if one VNET is using leases in 192.168.0.0/24, another VNET can't be using IPs in the same network.
4.3.3 Configuration
Hosts Configuration
The package ebtables must be installed in the hosts.
The sudoers file must be configured so oneadmin can execute ebtables in the hosts.
OpenNebula Configuration
To enable this driver, use ebtables as the Virtual Network Manager driver parameter when the hosts are created with
the onehost command:
$ onehost create host01 -i kvm -v kvm -n ebtables
Driver Actions
Action Description
Pre N/A
Post Creates EBTABLES rules in the Host where the VM has been placed.
Clean Removes the EBTABLES rules created during the Post action.
4.3.4 Usage
The driver will be automatically applied to every Virtual Machine deployed in the Host. Only the virtual networks
with the attribute VLAN set to YES will be isolated. There are no other special attributes required.
NAME = "ebtables_net"
TYPE = "fixed"
BRIDGE = vbr1
VLAN = "YES"
LEASES = ...
4.3.5 Tuning & Extending
EBTABLES Rules
This section lists the EBTABLES rules that are created:
# Drop packets that don't match the network's MAC Address
-s ! <mac_address>/ff:ff:ff:ff:ff:0 -o <tap_device> -j DROP
# Prevent MAC spoofing
-s ! <mac_address> -i <tap_device> -j DROP
4.4 Open vSwitch
This guide describes how to use the Open vSwitch network drivers. They provide two independent functionalities that can be used together: network isolation using VLANs, and network filtering using OpenFlow. Each Virtual Network interface will receive a VLAN tag enabling network isolation. Other traffic attributes that may be configured through Open vSwitch are not modified.
The VLAN id will be the same for every interface in a given network, calculated by adding a constant to the network id. It may also be forced by specifying a VLAN_ID parameter in the Virtual Network template.
The network filtering functionality is very similar to the Firewall drivers, with a few limitations discussed below.
4.4.1 Requirements
This driver requires Open vSwitch to be installed on each OpenNebula Host. Follow the resources specified in
hosts_configuration to install it.
4.4.2 Considerations & Limitations
Integrating OpenNebula with Open vSwitch brings a long list of benefits to OpenNebula; read Open vSwitch Features
to get a grasp of these features.
This guide will address the usage of VLAN tagging and OpenFlow filtering of OpenNebula Virtual Machines. On top
of that any other Open vSwitch feature may be used, but that's outside of the scope of this guide.
ovswitch and ovswitch_brcompat
OpenNebula ships with two sets of drivers that provide the same functionality: ovswitch and ovswitch_brcompat.
The following list details the differences between both drivers:
ovswitch: Recommended for kvm hosts. Only works with kvm. Doesn't require the Open vSwitch compatibility
layer for Linux bridging.
ovswitch_brcompat: Works with kvm and xen. This is the only set that currently works with xen. Not
recommended for kvm. Requires the Open vSwitch compatibility layer for Linux bridging.
4.4.3 Configuration
Hosts Configuration
You need to install Open vSwitch on each OpenNebula Host. Please refer to the Open vSwitch documentation
to do so.
If using ovswitch_brcompat it is also necessary to install the Open vSwitch compatibility layer for Linux
bridging.
The sudoers file must be configured so oneadmin can execute ovs-vsctl in the hosts.
OpenNebula Configuration
To enable this driver, use ovswitch or ovswitch_brcompat as the Virtual Network Manager driver parameter when
the hosts are created with the onehost command:
# for kvm hosts
$ onehost create host01 -i kvm -v kvm -n ovswitch
# for xen hosts
$ onehost create host02 -i xen -v xen -n ovswitch_brcompat
Driver Actions
Pre: N/A
Post: Performs the appropriate Open vSwitch commands to tag the virtual tap interface.
Clean: Nothing; the virtual tap interfaces will be automatically discarded when the VM is shut down.
Multiple VLANs (VLAN trunking)
VLAN trunking is also supported by adding the following tag to the NIC element in the VM template or to the virtual
network template:
VLAN_TAGGED_ID: Specify a range of VLANs to tag, for example: 1,10,30,32.
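For example, a NIC definition using it could look like this (the network and VLAN IDs are illustrative):
NIC = [ NETWORK_ID = 3, VLAN_TAGGED_ID = "1,10,30,32" ]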
4.4.4 Usage
Network Isolation
The driver will be automatically applied to every Virtual Machine deployed in the Host. Only the virtual networks
with the attribute VLAN set to YES will be isolated. There are no other special attributes required.
NAME = "ovswitch_net"
TYPE = "fixed"
BRIDGE = vbr1
VLAN = "YES"
VLAN_ID = 50 # optional
LEASES = ...
Warning: Any user with Network creation/modification permissions may force a custom VLAN ID with the
VLAN_ID parameter in the network template. In that scenario, a user may be able to connect to another network
by choosing the same VLAN ID. Techniques to avoid this are explained in the Tuning & Extending section.
Network Filtering
The first rule that is always applied when using the Open vSwitch drivers is the MAC-spoofing rule, which prevents any
traffic coming out of the VM if the user changes the MAC address.
The firewall directives must be placed in the network section of the Virtual Machine template. These are the possible
attributes:
BLACK_PORTS_TCP = iptables_range: Doesn't permit access to the VM through the specified ports
in the TCP protocol. Superseded by WHITE_PORTS_TCP if defined.
BLACK_PORTS_UDP = iptables_range: Doesn't permit access to the VM through the specified ports
in the UDP protocol. Superseded by WHITE_PORTS_UDP if defined.
ICMP = drop: Blocks ICMP connections to the VM. By default it's set to accept.
iptables_range: a list of ports separated by commas, e.g.: 80,8080. Currently no ranges are supported, e.g.:
5900:6000 is not supported.
Example:
NIC = [ NETWORK_ID = 3, BLACK_PORTS_TCP = "80, 22", ICMP = drop ]
Note that WHITE_PORTS_TCP and BLACK_PORTS_TCP are mutually exclusive. In the event where they're both
defined, the more restrictive will prevail, i.e. WHITE_PORTS_TCP. The same happens with WHITE_PORTS_UDP
and BLACK_PORTS_UDP.
4.4.5 Tuning & Extending
Warning: Remember that any change in the /var/lib/one/remotes directory won't be effective in the
Hosts until you execute, as oneadmin:
oneadmin@frontend $ onehost sync
This way in the next monitoring cycle the updated files will be copied again to the Hosts.
Calculating VLAN ID
The VLAN ID is calculated by adding the network ID to a constant defined in
/var/lib/one/remotes/vnm/OpenNebulaNetwork.rb. You can customize that value to your own
needs:
CONF = {
:start_vlan => 2
}
Restricting Manually the VLAN ID
You can either restrict permissions on Network creation with ACL rules, or you can entirely disable the possibility
to redefine the VLAN_ID by modifying the source code of /var/lib/one/remotes/vnm/ovswitch/OpenvSwitch.rb.
Change these lines:
if nic[:vlan_id]
vlan = nic[:vlan_id]
else
vlan = CONF[:start_vlan] + nic[:network_id].to_i
end
with this one:
vlan = CONF[:start_vlan] + nic[:network_id].to_i
OpenFlow Rules
To modify these rules you have to edit: /var/lib/one/remotes/vnm/ovswitch/OpenvSwitch.rb.
MAC spoofing
These rules prevent any traffic from leaving the port if the MAC address has changed:
in_port=<PORT>,dl_src=<MAC>,priority=40000,actions=normal
in_port=<PORT>,priority=39000,actions=normal
IP hijacking
These rules prevent any traffic from leaving the port for IPv4 addresses not configured for the VM:
in_port=<PORT>,arp,dl_src=<MAC>,priority=45000,actions=drop
in_port=<PORT>,arp,dl_src=<MAC>,nw_src=<IP>,priority=46000,actions=normal
Black ports (one rule per port)
tcp,dl_dst=<MAC>,tp_dst=<PORT>,actions=drop
ICMP Drop
icmp,dl_dst=<MAC>,actions=drop
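To verify the rules actually installed on a host, you can dump the flow table of the bridge with the standard Open vSwitch tooling (the bridge name is illustrative):
$ ovs-ofctl dump-flows ovsbr0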
4.5 VMware Networking
This guide describes how to use the VMware network driver in OpenNebula. This driver optionally provides network
isolation through VLAN tagging. The VLAN id will be the same for every interface in a given network, calculated
by adding a constant to the network id. It may also be forced by specifying a VLAN_ID parameter in the Virtual
Network template.
4.5.1 Requirements
In order to use the dynamic network mode, some extra configuration steps are needed in the ESX hosts:
$ su
# chmod +s /sbin/esxcfg-vswitch
4.5.2 Considerations & Limitations
It should be noted that the drivers will not create/delete/manage VMware virtual switches; these should be created
beforehand by VMware administrators.
Since the dynamic driver will however create VMware port groups, it should be noted that there's a default limit of 56
port groups per switch. Administrators should be aware of these limitations.
4.5.3 Configuration
The vSphere hosts can work in two different networking modes, namely:
pre-defined: The VMware administrator has set up the network for each vSphere host, defining the vSwitches
and port groups for the VMs. This mode is associated with the dummy network driver. To configure this mode
use dummy as the Virtual Network Manager driver parameter when the hosts are created:
$ onehost create host01 -i vmware -v vmware -n dummy
dynamic: In this mode OpenNebula will create on-the-fly port groups (with an optional VLAN_ID) for your
VMs. The VMware administrator has to set up only the vSwitch to be used by OpenNebula. To enable this
driver, use vmware as the VNM driver for the hosts:
$ onehost create host02 -i vmware -v vmware -n vmware
Warning: Dynamic and pre-defined networking modes can be mixed in a datacenter. Just use the desired mode
for each host.
4.5.4 Usage
Using the Pre-defined Network Mode
In this mode the VMware admin has created one or more port groups in the ESX hosts to bridge the VMs. The
port group has to be specified for each Virtual Network in its template through the BRIDGE attribute (check the Virtual
Network usage guide for more info).
The NICs of the VM in this Virtual Network will be attached to the specied port group in the vSphere host. For
example:
NAME = "pre-defined_vmware_net"
TYPE = "fixed"
BRIDGE = "VM Network" # This is the port group
LEASES = ...
Using the Dynamic Network Mode
In this mode the driver will dynamically create a port group with name one-pg-<network_id> in the specified
vSwitch of the target host. In this scenario the vSwitch is specified by the BRIDGE attribute of the Virtual Network
template.
Additionally the port groups can be tagged with a vlan_id. You can set VLAN=YES in the Virtual Network template
to automatically tag the port groups in each ESX host. Optionally the tag can be specified through the VLAN_ID
attribute. For example:
NAME = "dynamic_vmware_net"
TYPE = "fixed"
BRIDGE = "vSwitch0" # In this mode this is the vSwitch name
VLAN = "YES"
VLAN_ID = 50 # optional
LEASES = ...
4.5.5 Tuning & Extending
The pre-defined mode (dummy driver) does not execute any operation in the pre, post and clean steps (see the driver
actions table below for more details on these phases).
The strategy of the dynamic driver is to dynamically create a VMware port group attached to a pre-existing VMware
virtual switch (standard or distributed) for each Virtual Network.
Pre: Creates the VMware port group with name one-pg-<network_id>.
Post: No operation.
Clean: No operation.
Calculating VLAN ID
The VLAN ID is calculated by adding the network ID to a constant defined in
/var/lib/one/remotes/vnm/OpenNebulaNetwork.rb. The administrator may customize that value to
their own needs:
CONF = {
:start_vlan => 2
}
4.6 Configuring Firewalls for VMs
This driver installs iptables rules in the physical host executing the VM. This driver can be used to filter (and enforce)
TCP and UDP ports, and to define a policy for ICMP connections, without any additional modification to the guest
VMs.
4.6.1 Requirements
The package iptables must be installed in the hosts.
4.6.2 Considerations & Limitations
In OpenNebula 3.0, this functionality was provided through a hook, and it wasn't effective after a migration. Since
OpenNebula 3.2 this limitation does not apply.
4.6.3 Configuration
Hosts Configuration
The sudoers file must be configured so oneadmin can execute iptables in the hosts.
OpenNebula Configuration
This Virtual Machine Network Manager driver can be used individually, or combined with the isolation features of
either 802.1Q or ebtables. However it's not currently supported with the ovswitch drivers, which provide their own
filtering mechanism.
To enable firewalling without any network isolation features, use fw as the Virtual Network Manager driver parameter
when the hosts are created with the onehost command:
$ onehost create host01 -i kvm -v kvm -n fw
The firewall driver is automatically enabled when any of the previously mentioned drivers are used; no additional
configuration is required.
Driver Actions
Pre: N/A
Post: Creates appropriate IPTABLES rules in the Host where the VM has been placed.
Clean: Removes the IPTABLES rules created during the Post action.
4.6.4 Usage
The firewall directives must be placed in the network section of the Virtual Machine template. These are the possible
attributes:
WHITE_PORTS_TCP = <iptables_range>: Permits access to the VM only through the specified ports
in the TCP protocol. Supersedes BLACK_PORTS_TCP if defined.
BLACK_PORTS_TCP = <iptables_range>: Doesn't permit access to the VM through the specified
ports in the TCP protocol. Superseded by WHITE_PORTS_TCP if defined.
WHITE_PORTS_UDP = <iptables_range>: Permits access to the VM only through the specified ports
in the UDP protocol. Supersedes BLACK_PORTS_UDP if defined.
BLACK_PORTS_UDP = <iptables_range>: Doesn't permit access to the VM through the specified
ports in the UDP protocol. Superseded by WHITE_PORTS_UDP if defined.
ICMP = drop: Blocks ICMP connections to the VM. By default it's set to accept.
iptables_range: a list of ports separated by commas or ranges separated by colons, e.g.:
22,80,5900:6000
Example:
NIC = [ NETWORK_ID = 3, WHITE_PORTS_TCP = "80, 22", ICMP = drop ]
Note that WHITE_PORTS_TCP and BLACK_PORTS_TCP are mutually exclusive. In the event where they're both
defined, the more restrictive will prevail, i.e. WHITE_PORTS_TCP. The same happens with WHITE_PORTS_UDP
and BLACK_PORTS_UDP.
4.6.5 Tuning & Extending
IPTABLES Rules
This section lists the IPTABLES rules that are created for each possible configuration:
WHITE_PORTS_TCP and WHITE_PORTS_UDP
# Create a new chain for each network interface
-A FORWARD -m physdev --physdev-out <tap_device> -j one-<vm_id>-<net_id>
# Accept already established connections
-A one-<vm_id>-<net_id> -p <protocol> -m state --state ESTABLISHED -j ACCEPT
# Accept the specified <iprange>
-A one-<vm_id>-<net_id> -p <protocol> -m multiport --dports <iprange> -j ACCEPT
# Drop everything else
-A one-<vm_id>-<net_id> -p <protocol> -j DROP
BLACK_PORTS_TCP and BLACK_PORTS_UDP
# Create a new chain for each network interface
-A FORWARD -m physdev --physdev-out <tap_device> -j one-<vm_id>-<net_id>
# Drop traffic directed to the iprange ports
-A one-<vm_id>-<net_id> -p <protocol> -m multiport --dports <iprange> -j DROP
ICMP DROP
# Create a new chain for each network interface
-A FORWARD -m physdev --physdev-out <tap_device> -j one-<vm_id>-<net_id>
# Accept already established ICMP connections
-A one-<vm_id>-<net_id> -p icmp -m state --state ESTABLISHED -j ACCEPT
# Drop new ICMP connections
-A one-<vm_id>-<net_id> -p icmp -j DROP
These rules will be removed once the VM is shut down or destroyed.
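Putting it together, for VM 5 on network 3 with ICMP = drop (illustrative IDs and tap device), the host would hold rules equivalent to:
-A FORWARD -m physdev --physdev-out vnet0 -j one-5-3
-A one-5-3 -p icmp -m state --state ESTABLISHED -j ACCEPT
-A one-5-3 -p icmp -j DROP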
4.7 Virtual Router
This guide describes how to use the Virtual Router in OpenNebula.
4.7.1 Overview
When instantiated in a network, this appliance provides the following services for other Virtual Machines running in
the same network:
Router (masquerade)
Port forwarding
DHCP server
RADVD server
DNS server
A big advantage of using this appliance is that Virtual Machines can be run in the same network without being
contextualized for OpenNebula.
This appliance is controlled via CONTEXT. More information is available in the following sections.
4.7.2 Considerations & Limitations
This is a 64-bit appliance and will run in KVM, Xen and VMware environments. It will run with any network
driver.
Since each virtual router will start a DHCP server and it's not recommended to have more than one DHCP server
per network, it's recommended to use it along with the network isolation drivers if you're going to deploy two or more
router instances in your environment:
Open vSwitch
Ebtables
802.1Q VLAN
4.7.3 Configuration
The appliance is based on Alpine Linux. There's only one user account: root. There is no default password for the
root account, however, it can be specified in the CONTEXT section along with root's public key.
ROOT_PUBKEY: If set, it will be set as root's authorized_keys.
ROOT_PASSWORD: To change the root account password use this attribute. It expects the password in an
encrypted format as returned by openssl passwd -1 and encoded in base64.
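For example, a suitable ROOT_PASSWORD value could be generated on the front-end like this (a sketch; the password is illustrative):
$ openssl passwd -1 mypassword | base64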
4.7.4 Usage
The virtual router can be used in two ways:
DHCP or RADVD Server
Only one interface. Useful if you only want DHCP or RADVD. Of course, enabling RADVD only makes sense if the
private network is IPv6.
To enable this you need to add the following context to the VM:
TARGET = "hdb"
PRIVNET = "$NETWORK[TEMPLATE, NETWORK=\"private_network_name\"]",
TEMPLATE = "$TEMPLATE"
DHCP = "YES|NO"
RADVD = "YES|NO"
If you're going to use a netmask different from 255.255.255.0 you will have to add the following to the private
network's template:
NETWORK_MASK = 255.255.255.254
Full Router
In this case, the Virtual Machine will need two network interfaces: a private and a public one. The public one will
be masqueraded. In this mode you can also configure a DNS server by setting the DNS and optionally the SEARCH
attribute (useful for domain searches in /etc/resolv.conf). This mode also includes all the attributes related to
the previous section, i.e. DHCP and RADVD servers.
This is an example context for the router mode:
TARGET = "hdb"
PRIVNET = "$NETWORK[TEMPLATE, NETWORK=\"private_network\"]",
PUBNET = "$NETWORK[TEMPLATE, NETWORK=\"public_network\"]",
TEMPLATE = "$TEMPLATE"
DHCP = "YES|NO"
RADVD = "YES|NO" # Only useful for an IPv6 private network
NTP_SERVER = "10.0.10.1"
DNS = "8.8.4.4 8.8.8.8"
SEARCH = "local.domain"
FORWARDING = "8080:10.0.10.10:80 10.0.10.10:22"
DNS
This attribute expects a list of DNS servers separated by spaces.
NTP_SERVER
This attribute expects the IP of the NTP server of the cluster. The DHCP server will be configured to serve the NTP
parameter to its leases.
FORWARDING
This attribute expects a list of forwarding rules separated by spaces. Each rule has either 2 or 3 components separated
by :. If only two components are specified, the first is the IP to forward the port to, and the second is the port number.
If there are three components, the first is the port in the router, the second the IP to forward to, and the third the port in
the forwarded Virtual Machine. Examples:
8080:10.0.10.10:80 This will forward port 8080 in the router to port 80 of the VM with IP
10.0.10.10.
10.0.10.10:22 This will forward port 22 in the router to port 22 of the VM with IP 10.0.10.10.
If the public network uses a netmask different from 255.255.255.0, or if the gateway is not the IP's network with one
as the last byte (x.y.z.1), they can be explicitly set by adding the following attributes to the public network's template:
GATEWAY = "192.168.1.100"
NETWORK_MASK = "255.255.254.0"
CHAPTER
FIVE
MONITORING
5.1 Monitoring Overview
This guide provides an overview of the OpenNebula monitoring subsystem. The monitoring subsystem gathers
information about the hosts and the virtual machines, such as the host status, basic performance indicators, as well as
VM status and capacity consumption. This information is collected by executing a set of static probes provided by
OpenNebula. The output of these probes is sent to OpenNebula in two different ways: using a push or a pull paradigm.
Below you can find a brief description of the two models and when to use one or the other.
5.1.1 The UDP-push Model
Warning: Default. This is the default IM for KVM and Xen in OpenNebula >= 4.4.
In this model, each host periodically sends monitoring data via UDP to the frontend, which collects it and processes it
in a dedicated module. This distributed monitoring system resembles the architecture of dedicated monitoring systems,
using a lightweight communication protocol and a push model.
This model is highly scalable and its limit (in terms of number of VMs monitored per second) is bounded by the
performance of the server running oned and the database server.
Please read the UDP-push guide for more information.
When to Use the UDP-push Model
This mode can be used only with Xen and KVM (VMware only supports the SSH-pull mode).
This monitoring model is adequate when:
You are using KVM or Xen (VMware is not supported in this mode)
Your infrastructure has a medium-to-high number of hosts (e.g. more than 50)
You need a highly responsive system
You need frequently updated monitoring information
All your hosts communicate through a secure network (UDP packets are not encrypted and their origin is not
verified)
5.1.2 The Pull Model
When using this mode OpenNebula periodically and actively queries each host and executes the probes via ssh. In
KVM and Xen this means establishing an ssh connection to each host and executing several scripts to retrieve this
information. Note that VMware uses the VI API for this, instead of an ssh connection.
This mode is limited by the number of active connections that can be made concurrently, as hosts are queried
sequentially.
Please read the KVM and Xen SSH-pull guide or the ESX-pull guide for more information.
When to Use the SSH-pull Model
This mode can be used with VMware, Xen and KVM.
This monitoring model is adequate when:
Your infrastructure has a low number of hosts (e.g. 50 or less)
You are communicating with the hosts through an insecure network
You do not need to update the monitoring with a high frequency (e.g. for 50 hosts the monitoring period would
typically be about 5 minutes)
5.1.3 Other Monitoring Systems
OpenNebula can be easily integrated with other monitoring systems. Please read the Information Manager Driver
integration guide for more information.
5.1.4 The Monitor Metrics
The information managed by the monitoring system includes the typical performance and configuration parameters for
the host and VMs, e.g. CPU or network consumption, hostname or CPU model.
These metrics are gathered by specialized programs, called probes, that can be easily added to the system. Just write
your own program or shell script that returns the metric that you are interested in. Please read the Information
Manager Driver integration guide for more information.
5.2 KVM and Xen SSH-pull Monitoring
KVM and Xen can be monitored with this ssh-based monitoring system. The OpenNebula frontend starts a driver
which triggers ssh connections to the hosts, which return the monitoring information of the host and of all the virtual
machines running within.
5.2.1 Requirements
ssh access from the frontends to the hosts as oneadmin without a password has to be possible.
ruby is required in the hosts.
KVM hosts: libvirt must be enabled.
Xen hosts: sudo access to run xl or xm and xentop as oneadmin.
5.2.2 OpenNebula Configuration
Enabling the Drivers
To enable this monitoring system /etc/one/oned.conf must be configured with the following snippets:
KVM:
IM_MAD = [
name = "kvm",
executable = "one_im_ssh",
arguments = "-r 0 -t 15 kvm-probes" ]
Xen 3.x:
IM_MAD = [
name = "xen",
executable = "one_im_ssh",
arguments = "-r 0 -t 15 xen3-probes" ]
Xen 4.x:
IM_MAD = [
name = "xen",
executable = "one_im_ssh",
arguments = "-r 0 -t 15 xen4-probes" ]
The arguments passed to this driver are:
-r: number of retries when monitoring a host
-t: number of threads, i.e. number of hosts monitored at the same time
Monitoring Configuration Parameters
OpenNebula allows you to customize the general behaviour of the whole monitoring subsystem:
MONITORING_INTERVAL: Time in seconds between host and VM monitoring. It must have a value greater
than the manager timer.
HOST_PER_INTERVAL: Number of hosts monitored in each interval.
VM_PER_INTERVAL: Number of VMs monitored in each interval.
Warning: VM_PER_INTERVAL is only relevant in case of host failure when OpenNebula pro-actively monitors
each VM. You need to set VM_INDIVIDUAL_MONITORING to yes in oned.conf.
The information gathered by the probes is also stored in a monitoring table. This table is used by Sunstone to draw
monitoring graphics and can be queried using the OpenNebula API. The size of this table can be controlled with:
HOST_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring information. Use 0 to disable
HOST monitoring recording.
VM_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring information. Use 0 to disable VM
monitoring recording.
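A minimal oned.conf snippet combining these parameters could look like this (the values are illustrative; adjust them to your deployment):
MONITORING_INTERVAL = 60
HOST_PER_INTERVAL = 15
VM_PER_INTERVAL = 5
HOST_MONITORING_EXPIRATION_TIME = 86400
VM_MONITORING_EXPIRATION_TIME = 86400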
5.2.3 Troubleshooting
In order to test the driver, add a host to OpenNebula using onehost, specifying the defined IM driver:
$ onehost create ursa06 --im xen --vm xen --net dummy
Now give it time to monitor the host (this time is determined by the value of MONITORING_INTERVAL in
/etc/one/oned.conf). After one interval, check the output of onehost list, it should look like the following:
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
0 ursa06 - 0 0 / 400 (0%) 0K / 7.7G (0%) on
Host management information is logged to /var/log/one/oned.log. Correct monitoring log lines look like
this:
Fri Nov 22 12:02:26 2013 [InM][D]: Monitoring host ursa06 (0)
Fri Nov 22 12:02:30 2013 [InM][D]: Host ursa06 (0) successfully monitored.
Both lines have the ID of the host being monitored.
If there are problems monitoring the host you will get an err state:
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
0 ursa06 - 0 0 / 400 (0%) 0K / 7.7G (0%) err
The way to get the error message for the host is using the onehost show command, specifying the host id or name:
$ onehost show 0
[...]
MONITORING INFORMATION
ERROR=[
MESSAGE="Error monitoring host 0 : MONITOR FAILURE 0 Could not update remotes",
TIMESTAMP="Nov 22 12:02:30 2013" ]
The log file is also useful as it will give you even more information on the error:
Mon Oct 3 15:26:57 2011 [InM][I]: Monitoring host ursa06 (0)
Mon Oct 3 15:26:57 2011 [InM][I]: Command execution fail: scp -r /var/lib/one/remotes/. ursa06:/var/tmp/one
Mon Oct 3 15:26:57 2011 [InM][I]: ssh: Could not resolve hostname ursa06: nodename nor servname provided, or not known
Mon Oct 3 15:26:57 2011 [InM][I]: lost connection
Mon Oct 3 15:26:57 2011 [InM][I]: ExitCode: 1
Mon Oct 3 15:26:57 2011 [InM][E]: Error monitoring host 0 : MONITOR FAILURE 0 Could not update remotes
In this case the node ursa06 could not be found in the DNS or /etc/hosts.
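A quick fix in that situation is to make the name resolvable on the frontend, e.g. by adding it to /etc/hosts (the IP address is illustrative):
# echo "192.168.1.106 ursa06" >> /etc/hosts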
5.2.4 Tuning & Extending
The probes are specialized programs that obtain the monitor metrics. Probes are defined for each hypervisor, and are
located at /var/lib/one/remotes/im/<hypervisor>-probes.d for Xen and KVM.
You can easily write your own probes or modify existing ones, please see the Information Manager Drivers guide.
Remember to synchronize the monitor probes in the hosts using onehost sync as described in the Managing Hosts
guide.
5.3 KVM and Xen UDP-push Monitoring
KVM and Xen can be monitored with this UDP-based monitoring system.
Monitoring data is sent from each host to the frontend periodically via UDP by an agent. This agent is started by
the initial bootstrap of the monitoring system, which is performed via ssh as in the SSH-pull system.
5.3.1 Requirements
ssh access from the frontends to the hosts as oneadmin without a password has to be possible.
ruby is required in the hosts.
KVM hosts: libvirt must be enabled.
Xen hosts: sudo access to run xl or xm and xentop as oneadmin.
The firewall of the frontend (if enabled) must allow UDP packets incoming from the hosts on port 4124.
5.3.2 Overview
OpenNebula starts a collectd daemon running in the frontend host that listens for UDP connections on port 4124.
In the first monitoring cycle OpenNebula connects to the host using ssh and starts a daemon that will execute
the probe scripts as in the SSH-pull model and send the collected data to the collectd daemon in the frontend every
specific number of seconds (configurable with the -i option of the collectd IM_MAD). This way the monitoring
subsystem doesn't need to make new ssh connections to the hosts when it needs data.
If the agent stops in a specific host, OpenNebula will detect that no monitoring data is received from that host and
will automatically fall back to the SSH-pull model, thus starting the agent again in the host.
5.3.3 OpenNebula Configuration
Enabling the Drivers
To enable this monitoring system /etc/one/oned.conf must be configured with the following snippets:
collectd must be enabled both for KVM and Xen:
IM_MAD = [
name = "collectd",
executable = "collectd",
arguments = "-p 4124 -f 5 -t 50 -i 20" ]
Valid arguments for this driver are:
-a: Address to bind the collectd socket (defaults to 0.0.0.0)
-p: port number
-f: Interval in seconds to flush collected information to OpenNebula (default 5)
-t: Number of threads for the collectd server (default 50)
-i: Time in seconds of the monitoring push cycle. This parameter must be smaller than
MONITORING_INTERVAL (see below), otherwise push monitoring will not be effective.
KVM:
IM_MAD = [
name = "kvm",
executable = "one_im_ssh",
arguments = "-r 3 -t 15 kvm" ]
Xen 3:
IM_MAD = [
name = "xen",
executable = "one_im_ssh",
arguments = "-r 3 -t 15 xen3" ]
Xen 4:
IM_MAD = [
name = "xen",
executable = "one_im_ssh",
arguments = "-r 3 -t 15 xen4" ]
The arguments passed to this driver are:
-r: number of retries when monitoring a host
-t: number of threads, i.e. number of hosts monitored at the same time
Monitoring Configuration Parameters
OpenNebula allows you to customize the general behaviour of the whole monitoring subsystem:
MONITORING_INTERVAL: Time in seconds between host and VM monitoring. It must have a value greater
than the manager timer.
HOST_PER_INTERVAL: Number of hosts monitored in each interval.
Warning: Note that in this case HOST_PER_INTERVAL is only relevant when bootstrapping the monitor agents.
Once the agents are up and running, OpenNebula does not poll the hosts.
5.3.4 Troubleshooting
Healthy Monitoring System
If the UDP-push model is running successfully, it means that it has not fallen back to the SSH-pull model. We can
verify this based on the information logged in oned.log.
Approximately every monitoring push cycle, OpenNebula receives the monitoring data of every Virtual Machine and
of each host, like so:
Mon Nov 18 22:25:00 2013 [InM][D]: Host thost001 (1) successfully monitored.
Mon Nov 18 22:25:01 2013 [VMM][D]: VM 0 successfully monitored: ...
Mon Nov 18 22:25:21 2013 [InM][D]: Host thost001 (1) successfully monitored.
Mon Nov 18 22:25:21 2013 [VMM][D]: VM 0 successfully monitored: ...
Mon Nov 18 22:25:40 2013 [InM][D]: Host thost001 (1) successfully monitored.
Mon Nov 18 22:25:41 2013 [VMM][D]: VM 0 successfully monitored: ...
However, if in oned.log a host is being actively monitored periodically (every MONITORING_INTERVAL
seconds) then the UDP-push monitoring is not working correctly:
Mon Nov 18 22:22:30 2013 [InM][D]: Monitoring host thost087 (87)
Mon Nov 18 22:23:30 2013 [InM][D]: Monitoring host thost087 (87)
Mon Nov 18 22:24:30 2013 [InM][D]: Monitoring host thost087 (87)
If this is the case it's probably because OpenNebula is receiving probes faster than it can process. See the Tuning
section to fix this.
Monitoring Probes
For the troubleshooting of errors produced during the execution of the monitoring probes, please refer to the
troubleshooting section of the SSH-pull guide.
5.3.5 Tuning & Extending
Adjust Monitoring Interval Times
In order to tune your OpenNebula installation with appropriate values of the monitoring parameters you need to adjust
the -i option of the collectd IM_MAD (the monitoring push cycle).
If the system is not working healthily it will be due to the database throughput, since OpenNebula will write the
monitoring information to a database, an amount of ~4KB per VM. If the number of virtual machines is too large and
the monitoring push cycle too low, OpenNebula will not be able to write that amount of data to the database. For
example, 10,000 VMs pushing every 20 seconds amount to roughly 40MB of monitoring records per cycle, about
2MB/s of sustained database writes.
Driver Files
The probes are specialized programs that obtain the monitor metrics. Probes are defined for each hypervisor, and are
located at /var/lib/one/remotes/im/<hypervisor>-probes.d for Xen and KVM.
You can easily write your own probes or modify existing ones, please see the Information Manager Drivers guide.
Remember to synchronize the monitor probes in the hosts using onehost sync as described in the Managing Hosts
guide.
5.4 VMware VI API-pull Monitor
5.4.1 Requirements
VI API access to the ESX hosts.
ESX hosts configured to work with OpenNebula
5.4.2 OpenNebula Configuration
In order to configure VMware you need to:
Enable the VMware monitoring driver in /etc/one/oned.conf by uncommenting the following lines:
IM_MAD = [
name = "vmware",
executable = "one_im_sh",
arguments = "-c -t 15 -r 0 vmware" ]
Make sure that the configuration attributes for VMware drivers are set in /etc/one/vmwarerc, see the
VMware guide
Monitoring Configuration Parameters
OpenNebula allows you to customize the general behaviour of the whole monitoring subsystem:
MONITORING_INTERVAL: Time in seconds between host and VM monitoring. It must have a value greater
than the manager timer.
HOST_PER_INTERVAL: Number of hosts monitored in each interval.
VM_PER_INTERVAL: Number of VMs monitored in each interval.
Warning: VM_PER_INTERVAL is only relevant in case of host failure when OpenNebula pro-actively monitors
each VM. You need to set VM_INDIVIDUAL_MONITORING to yes in oned.conf.
The information gathered by the probes is also stored in a monitoring table. This table is used by Sunstone to draw
monitoring graphics and can be queried using the OpenNebula API. The size of this table can be controlled with:
HOST_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring information. Use 0 to disable
HOST monitoring recording.
VM_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring information. Use 0 to disable VM
monitoring recording.
5.4.3 Troubleshooting
In order to test the driver, add a host to OpenNebula using onehost, specifying the defined IM driver:
$ onehost create esx_node1 --im vmware --vm vmware --net dummy
Now give it time to monitor the host (this time is determined by the value of MONITORING_INTERVAL in
/etc/one/oned.conf). After one interval, check the output of onehost list, it should look like the following:
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
0 esx_node1 - 0 0 / 400 (0%) 0K / 7.7G (0%) on
Host management information is logged to /var/log/one/oned.log. Correct monitoring log lines look like
this:
Fri Nov 22 12:02:26 2013 [InM][D]: Monitoring host esx_node1 (0)
Fri Nov 22 12:02:30 2013 [InM][D]: Host esx_node1 (0) successfully monitored.
Both lines have the ID of the host being monitored.
If there are problems monitoring the host you will get an err state:
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
0 esx_node1 - 0 0 / 400 (0%) 0K / 7.7G (0%) err
The way to get the error message for the host is using onehost show command, specifying the host id or name:
$ onehost show 0
[...]
MONITORING INFORMATION
ERROR=[
MESSAGE="Error monitoring host 0 : MONITOR FAILURE 0 Could not update remotes",
TIMESTAMP="Nov 22 12:02:30 2013" ]
The log file is also useful as it will give you even more information on the error.
5.4.4 Tuning & Extending
The probes are specialized programs that obtain the monitor metrics. VMware probes are obtained by querying the
ESX server through the VI API. The probe is located at /var/lib/one/remotes/im/vmware.d.
You can easily write your own probes or modify existing ones, please see the Information Manager Drivers guide.
CHAPTER
SIX
USERS AND GROUPS
6.1 Users & Groups Overview
OpenNebula includes a complete user & group management system. Users in an OpenNebula installation are classified
in four types:
Administrators, an admin user belongs to an admin group (oneadmin or otherwise) and can perform management
operations
Regular users, that may access most OpenNebula functionality.
Public users, only basic functionality (and public interfaces) are open to public users.
Service users, a service user account is used by the OpenNebula services (i.e. cloud APIs like EC2 or GUIs
like Sunstone) to proxy auth requests.
The resources a user may access in OpenNebula are controlled by a permissions system that resembles the typical
UNIX one. By default, only the owner of a resource (e.g. a VM or an image) can use and manage it. Users can
easily share resources by granting use or manage permissions to other users in their group or to any other user in the
system.
Upon group creation, an associated admin group can also be spawned, with an admin user belonging to both of them. By
default this user will be able to create users in both groups, and manage non-owned resources for the regular group,
through the CLI and/or a special Sunstone view. This group can also be assigned different Resource Providers, in
practice OpenNebula clusters with all the associated resources of said cluster (hosts, datastores and virtual networks).
This allows for the management of virtual datacenters using group functionality.
Along with the users & groups the Auth Subsystem is responsible for the authentication and authorization of users
requests.
Any interface to OpenNebula (CLI, Sunstone, Ruby or Java OCA) communicates with the core using xml-rpc calls
that contain the user's session string, which is authenticated by the OpenNebula core comparing the username and
password with the registered users.
Each operation generates an authorization request that is checked against the registered ACL rules. The core can then
grant permission, or reject the request.
OpenNebula comes with a default set of ACL rules that enables a standard usage. You don't need to manage the ACL
rules unless you need the level of permission customization it offers.
Please proceed to the following guides to learn more:
Managing Users
Managing Groups & vDC
Managing Permissions
Managing ACL Rules
Quota Management
By default, the authentication and authorization are handled by the OpenNebula Core as described above. Optionally,
you can delegate them to an external module; see the External Auth Setup guide for more information.
6.2 Managing Users
OpenNebula supports user accounts and groups. This guide shows how to manage users; groups are explained in their
own guide. To manage user rights, visit the Managing ACL Rules guide.
A user in OpenNebula is defined by a username and password. You don't need to create a new Unix account on the
front-end for each OpenNebula user; they are completely different concepts. OpenNebula users are authenticated using
a session string included in every operation, which is checked by the OpenNebula core.
Each user has a unique ID, and belongs to a group.
After the installation, you will have two administrative accounts, oneadmin and serveradmin; and two default
groups. You can check it using the oneuser list and onegroup list commands.
There are different user types in the OpenNebula system:
Cloud Administrators, the oneadmin account is created the first time OpenNebula is started using the
ONE_AUTH data. oneadmin has enough privileges to perform any operation on any object. Any other
user in the oneadmin group has the same privileges as oneadmin
Infrastructure User accounts may access most of the functionality offered by OpenNebula to manage
resources.
vDC Administrators accounts manage a limited set of resources and users.
vDC Users access a simplified Sunstone view with limited actions to create new VMs, and perform basic life
cycle operations.
Public users can only access OpenNebula through a public API (e.g. OCCI, EC2), hence they can only use a
limited set of functionality and cannot access the xml-rpc API directly (nor any application using it like the CLI
or Sunstone)
User serveradmin is also created the first time OpenNebula is started. Its password is created randomly, and
this account is used by the Sunstone, OCCI and EC2 servers to interact with OpenNebula.
Note: The complete OpenNebula approach to user accounts, groups and vDC is explained in more detail in the
Understanding OpenNebula guide.
6.2.1 Shell Environment
OpenNebula users should have the following environment variables set; you may want to place them in the .bashrc of
the user's Unix account for convenience:
ONE_XMLRPC
URL where the OpenNebula daemon is listening. If it is not set, CLI tools will use the default:
http://localhost:2633/RPC2. See the PORT attribute in the Daemon configuration file for more information.
ONE_AUTH
Needs to point to a file containing just a single line stating username:password. If ONE_AUTH is not defined,
$HOME/.one/one_auth will be used instead. If no auth file is present, OpenNebula cannot work properly, as this is
needed by the core, the CLI, and the cloud components as well.
ONE_POOL_PAGE_SIZE
By default the OpenNebula Cloud API (the CLI and Sunstone make use of it) paginates some pool responses. By default
this size is 2000 but it can be changed with this variable. A numeric value greater than 2 is the pool size. To disable it
you can use a non-numeric value.
$ export ONE_POOL_PAGE_SIZE=5000 # Sets the page size to 5000
$ export ONE_POOL_PAGE_SIZE=disabled # Disables pool pagination
For instance, a user named regularuser may have the following environment:
$ tail ~/.bashrc
ONE_XMLRPC=http://localhost:2633/RPC2
export ONE_XMLRPC
$ cat ~/.one/one_auth
regularuser:password
Note: Please note that the example above is intended for a user interacting with OpenNebula from the front-end, but
you can use it from any other computer. Just set the appropriate hostname and port in the ONE_XMLRPC variable.
An alternative method to specify credentials and OpenNebula endpoint is using command line parameters. Most of
the commands can understand the following parameters:
--user name
User name used to connect to OpenNebula
--password password
Password to authenticate with OpenNebula
--endpoint endpoint
URL of OpenNebula xmlrpc frontend
If user is specified but not password, the user will be prompted for the password. endpoint has the same meaning
and gets the same value as ONE_XMLRPC. For example:
$ onevm list --user my_user --endpoint http://one.frontend.com:2633/RPC2
Password:
[...]
Warning: You should avoid using the --password parameter on a shared machine. Process parameters can be
seen by any user with the command ps, so it is highly insecure.
Shell Environment for Self-Contained Installations
If OpenNebula was installed from sources in self-contained mode (this is not the default, and not recommended),
these two variables must be also set. These are not needed if you installed from packages, or performed a system-wide
installation from sources.
ONE_LOCATION
It must point to the installation <destination_folder>.
PATH
The OpenNebula bin files must be added to the path
$ export PATH=$ONE_LOCATION/bin:$PATH
6.2.2 Adding and Deleting Users
User accounts within the OpenNebula system are managed by oneadmin with the oneuser create and
oneuser delete commands. This section will show you how to create the different account types supported
in OpenNebula.
Administrators
Administrators can be easily added to the system like this:
$ oneuser create otheradmin password
ID: 2
$ oneuser chgrp otheradmin oneadmin
$ oneuser list
ID GROUP NAME AUTH PASSWORD
0 oneadmin oneadmin core 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
1 oneadmin serveradmin server_c 1224ff12545a2e5dfeda4eddacdc682d719c26d5
2 oneadmin otheradmin core 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
$ oneuser show otheradmin
USER 2 INFORMATION
ID : 2
NAME : otheradmin
GROUP : 0
PASSWORD : 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
AUTH_DRIVER : core
ENABLED : Yes
USER TEMPLATE
Regular Users
Simply create the users with the create command:
$ oneuser create regularuser password
ID: 3
The enabled flag can be ignored as it doesn't provide any functionality. It may be used in future releases to temporarily
disable users instead of deleting them.
Public Users
Public users need to define a special authentication method that internally relies on the core auth method. First create
the public user as if it was a regular one:
$ oneuser create publicuser password
ID: 4
and then change its auth method (see below for more info) to the public authentication method.
$ oneuser chauth publicuser public
Server Users
Server user accounts are used mainly as proxy authentication accounts for OpenNebula services. Any account that
uses the server_cipher or server_x509 auth method is a server user. You will never use this account directly. To
create a server user, just create a regular account
$ oneuser create serveruser password
ID: 5
and then change its auth method to server_cipher (for other auth methods please refer to the External Auth
guide):
$ oneuser chauth serveruser server_cipher
6.2.3 Managing Users
User Authentication
Each user has an authentication driver, AUTH_DRIVER. The default driver, core, is a simple user-password match
mechanism. Read the External Auth guide to improve the security of your cloud, enabling SSH or x509 authentication.
User Templates
The USER TEMPLATE section can hold any arbitrary data. You can use the oneuser update command to open
an editor and add, for instance, the following DEPARTMENT and EMAIL attributes:
$ oneuser show 2
USER 2 INFORMATION
ID : 2
NAME : regularuser
GROUP : 1
PASSWORD : 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
AUTH_DRIVER : core
ENABLED : Yes
USER TEMPLATE
DEPARTMENT=IT
EMAIL=user@company.com
These attributes can be later used in the Virtual Machine Contextualization. For example, using contextualization the
user's public ssh key can be automatically installed in the VM:
ssh_key = "$USER[SSH_KEY]"
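For this to work, the user template needs an SSH_KEY attribute; a sketch of how it could be added (the key value is illustrative):
$ oneuser update 2
(in the editor, add a line such as)
SSH_KEY="ssh-rsa AAAAB3NzaC1yc2E... user@company.com"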
6.2.4 Manage your Own User
Regular users can see their account information, and change their password.
For instance, as regularuser you could do the following:
$ oneuser list
[UserPoolInfo] User [2] not authorized to perform action on user.
$ oneuser show
USER 2 INFORMATION
ID : 2
NAME : regularuser
GROUP : 1
PASSWORD : 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
AUTH_DRIVER : core
ENABLED : Yes
USER TEMPLATE
DEPARTMENT=IT
EMAIL=user@company.com
$ oneuser passwd 2 abcdpass
As you can see, any user can find out their ID using the oneuser show command without any arguments.
Regular users can retrieve their quota and user information in the settings section in the top right corner of the main
screen.
6.2.5 Managing Users in Sunstone
All the described functionality is available graphically using Sunstone.
6.3 Managing Groups & vDC
A group in OpenNebula makes it possible to isolate users and resources. A user can see and use the shared resources
from other users.
The group is an authorization boundary for the users, but you can also partition your cloud infrastructure and define
what resources are available to each group. The vDC (Virtual Data Center) concept is not a different entity in
OpenNebula, it is how we call groups that have some resources assigned to them. You can read more about OpenNebula's
approach to vDCs and the cloud from the perspective of different user roles in the Understanding OpenNebula guide.
6.3.1 Adding and Deleting Groups
There are two special groups created by default. The oneadmin group allows any user in it to perform any operation,
allowing different users to act with the same privileges as the oneadmin user. The users group is the default group
where new users are created.
You can use the onegroup command line tool to manage groups in OpenNebula.
To create new groups:
$ onegroup list
ID NAME
0 oneadmin
1 users
$ onegroup create "new group"
ID: 100
The new group has ID 100, to differentiate the special groups from the user-defined ones.
Note: When a new group is created, an ACL rule is also created to provide the default behaviour, allowing users to
create basic resources. You can learn more about ACL rules in this guide; but you don't need any further configuration
to start using the new group.
6.3.2 Adding Users to Groups
Use the oneuser chgrp command to assign users to groups.
$ oneuser chgrp -v regularuser "new group"
USER 1: Group changed
$ onegroup show 100
GROUP 100 INFORMATION
ID : 100
NAME : new group
USERS
ID NAME
1 regularuser
To delete a user from a group, just move it again to the default users group.
6.3.3 Admin Users and Allowed Resources
Upon group creation, a special admin user account can be defined. This admin user will have administrative privileges
only for the new group, not for all the resources in the OpenNebula cloud as the oneadmin group users have.
Another aspect that can be controlled at creation time is the type of resources that group users will be allowed to create.
This can be managed visually in Sunstone, and can also be managed through the CLI. In the latter, details of the group
are passed to the onegroup create command as arguments. This table lists the description of said arguments.
-n, --name <name> (Mandatory, any string): Name for the new group
-u, --admin_user <name> (Optional, any string): Creates an admin user for the group with the given name
-p, --admin_password <pass> (Optional, any string): Password for the admin user of the group
-d, --admin_driver <driver> (Optional, any string): Auth driver for the admin user of the group
-r, --resources <list> (Optional, "+"-separated list): Which resources can be created by group users (VM+IMAGE+TEMPLATE by default)
-o, --admin_resources <list> (Optional, "+"-separated list): Which resources can be created by the admin user (VM+IMAGE+TEMPLATE by default)
An example:
$ onegroup create --name groupA \
--admin_user admin_userA --admin_password somestr \
--resources TEMPLATE+VM --admin_resources TEMPLATE+VM+IMAGE+NET
6.3.4 Managing vDC and Resource Providers
A vDC (Virtual Data Center) is how we call groups that have some resources assigned to them. A resource provider
is an OpenNebula cluster (set of physical hosts and associated datastores and virtual networks) from a particular zone
(an OpenNebula instance). A group can be assigned:
A particular resource provider, for instance cluster 7 of Zone 0
$ onegroup add_provider <group_id> 0 7
All resources from a particular zone (special cluster id ALL)
$ onegroup add_provider <group_id> 0 ALL
To remove resource providers within a group, use the symmetric operation del_provider.
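For example, to detach the cluster added above (same zone and cluster IDs as before):
$ onegroup del_provider <group_id> 0 7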
Note: By default a group doesn't have any resource provider, so users won't be entitled to use any resource until a
resource provider is explicitly added.
When you assign a Resource Provider to a group, users in that group will be able to use the Datastores and Virtual
Networks of that Cluster. The scheduler will also deploy VMs from that group into any of the Cluster Hosts.
If you are familiar with ACL rules, you can take a look at the rules that are created with oneacl list. These
rules are automatically added, and should not be manually edited. They will be removed by the onegroup
del_provider command.
6.3.5 Primary and Secondary Groups
With the commands oneuser addgroup and delgroup the administrator can add or delete secondary groups.
Users assigned to more than one group will see the resources from all their groups, e.g. a user in the groups testing
and production will see VMs from both groups.
The group set with chgrp is the primary group, and resources (Images, VMs, etc) created by a user will belong to
this primary group. Users can change their primary group to any of their secondary groups without the intervention of
an administrator, using chgrp again.
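For example (the group names are illustrative):
$ oneuser addgroup regularuser testing
$ oneuser chgrp regularuser production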
6.3.6 Managing Groups in Sunstone
All the described functionality is available graphically using Sunstone.
6.4 Managing Permissions
Most OpenNebula resources have associated permissions for the owner, the users in their group, and others. For each
one of these groups, there are three rights that can be set: USE, MANAGE and ADMIN. These permissions are very
similar to those of the UNIX file system.
The resources with associated permissions are Templates, VMs, Images and Virtual Networks. The exceptions are
Users, Groups and Hosts.
6.4.1 Managing Permission through the CLI
This is how the permissions look in the terminal:
$ onetemplate show 0
TEMPLATE 0 INFORMATION
ID : 0
NAME : vm-example
USER : oneuser1
GROUP : users
REGISTER TIME : 01/13 05:40:28
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---
[...]
The previous output shows that for the Template 0, the owner user oneuser1 has USE and MANAGE rights. Users
in the group users have USE rights, and users that are not the owner or in the users group don't have any rights
over this Template.
You can check what operations are allowed with each of the USE, MANAGE and ADMIN rights in the xml-rpc
reference documentation. In general these rights are associated with the following operations:
USE: Operations that do not modify the resource like listing it or using it (e.g. using an image or a virtual
network). Typically you will grant USE rights to share your resources with other users of your group or with the
rest of the users.
MANAGE: Operations that modify the resource like stopping a virtual machine, changing the persistent
attribute of an image or removing a lease from a network. Typically you will grant MANAGE rights to users that
will manage your own resources.
ADMIN: Special operations that are typically limited to administrators, like updating the data of a host or
deleting a user group. Typically you will grant ADMIN permissions to those users with an administrator role.
Warning: By default every user can update any permission group (owner, group or other) with the exception of
the admin bit. There are some scenarios where it would be advisable to limit the other set (e.g. OpenNebula Zones
so users cannot break the vDC limits). In these situations the ENABLE_OTHER_PERMISSIONS attribute can be
set to NO in the /etc/one/oned.conf file
Changing Permissions with chmod
The previous permissions can be updated with the chmod command. This command takes an octet as a parameter,
following the octal notation of the Unix chmod command. The octet must be a three-digit base-8 number. Each digit,
with a value between 0 and 7, represents the rights for the owner, group and other, respectively. The rights are
represented by these values:
The USE bit adds 4 to its total (in binary 100)
The MANAGE bit adds 2 to its total (in binary 010)
The ADMIN bit adds 1 to its total (in binary 001)
For example, the octet 640 grants the owner USE and MANAGE (4+2=6), the group USE (4), and others nothing (0).
Let's see some examples:
$ onetemplate show 0
...
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---
$ onetemplate chmod 0 664 -v
VMTEMPLATE 0: Permissions changed
$ onetemplate show 0
...
PERMISSIONS
OWNER : um-
GROUP : um-
OTHER : u--
$ onetemplate chmod 0 644 -v
VMTEMPLATE 0: Permissions changed
$ onetemplate show 0
...
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : u--
$ onetemplate chmod 0 607 -v
VMTEMPLATE 0: Permissions changed
$ onetemplate show 0
...
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : uma
Setting Default Permissions with umask
The default permissions given to newly created resources can be set:
Globally, with the DEFAULT_UMASK attribute in oned.conf
Individually for each User, using the oneuser umask command.
These mask attributes work in a similar way to the Unix umask command. The expected value is a three-digit base-8
number. Each digit is a mask that disables permissions for the owner, group and other, respectively.
This table shows some examples:
umask permissions (octal) permissions
177 600 um- --- ---
137 640 um- u-- ---
113 664 um- um- u--
6.4.2 Managing Permissions in Sunstone
Sunstone offers a convenient way to manage resource permissions. This can be done by selecting resources from a
view (for example the templates view) and clicking on the update properties button. The update dialog lets the
user conveniently set the resource's permissions.
6.5 Accounting Client
The accounting toolset visualizes and reports resource usage data, and allows its integration with chargeback and
billing platforms. The toolset generates accounting reports using the information retrieved from OpenNebula.
This accounting tool addresses the accounting of the virtual resources. It includes resource consumption of the virtual
machines as reported from the hypervisor.
6.5.1 Usage
oneacct - prints accounting information for virtual machines
Usage: oneacct [options]
-s, --start TIME Start date and time to take into account
-e, --end TIME End date and time
-u, --user user User id to filter the results
-g, --group group Group id to filter the results
-H, --host hostname Host id to filter the results
--xpath expression Xpath expression to filter the results. For example: oneacct --xpath HISTORY[ETIME>0]
-j, --json Output in json format
-x, --xml Output in xml format
--csv Write table in csv format
--split Split the output in a table for each VM
-h, --help Show this message
The time can be written as month/day/year hour:minute:second, or any other similar format, e.g.
month/day hour:minute.
To integrate this tool with your billing system you can use the -j, -x or --csv flags to get all the information in an easy,
computer-readable format.
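For instance, the following illustrative command exports one month of accounting data for the user with ID 2 to a CSV file that a billing platform can ingest:

$ oneacct -s 02/01/2012 -e 03/01/2012 -u 2 --csv > usage_february.csv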
6.5.2 Accounting Output
The oneacct command shows individual Virtual Machine history records. This means that for a single VM you may
get several accounting entries, one for each migration or stop/suspend action.
Each entry contains the complete information of the Virtual Machine, including the Virtual Machine monitoring information. By default, only network consumption is reported; see the Tuning & Extending section for more information.
When the results are filtered with the -s and/or -e options, all the history records that were active during that time
interval are shown, but they may start or end outside that interval.
For example, if you have a VM that was running from 01/01/2012 to 05/15/2012, and you request the accounting
information with this command:
$ oneacct -s 02/01/2012 -e 03/01/2012
Showing active history records from Wed Feb 01 00:00:00 +0100 2012 to Thu Mar 01 00:00:00 +0100 2012
VID HOSTNAME REAS START_TIME END_TIME MEMORY CPU NET_RX NET_TX
9 host01 none 01/01 14:03:27 05/15 16:38:05 1024K 2 1.5G 23G
The output shows the complete history record, and the total network consumption. It will not reflect the consumption
made only during the month of February.
Another important thing to pay attention to is that active history records, those with END_TIME -, refresh their monitoring information each time the VM is monitored. Once the VM is shut down, migrated or stopped, the END_TIME
is set and the monitoring information stored is frozen. The final values reflect the total for accumulative attributes, like
NET_RX/TX.
Sample Output
Obtaining all the available accounting information:
$ oneacct
# User 0 oneadmin
VID HOSTNAME REAS START_TIME END_TIME MEMORY CPU NET_RX NET_TX
0 host02 user 06/04 14:55:49 06/04 15:05:02 1024M 1 0K 0K
# User 2 oneuser1
VID HOSTNAME REAS START_TIME END_TIME MEMORY CPU NET_RX NET_TX
1 host01 stop 06/04 14:55:49 06/04 14:56:28 1024M 1 0K 0K
1 host01 user 06/04 14:56:49 06/04 14:58:49 1024M 1 0K 0.6K
1 host02 none 06/04 14:58:49 - 1024M 1 0K 0.1K
2 host02 erro 06/04 14:57:19 06/04 15:03:27 4G 2 0K 0K
3 host01 none 06/04 15:04:47 - 4G 2 0K 0.1K
The columns are:
Column       Meaning
VID          Virtual Machine ID
HOSTNAME     Host name
REASON       VM state change reason:
             none: Normal termination
             erro: The VM ended in error
             stop: Stop/resume request
             user: Migration request
             canc: Cancel request
START_TIME   Start time
END_TIME     End time
MEMORY       Assigned memory. This is the requested memory, not the monitored memory consumption
CPU          Number of CPUs. This is the requested number of Host CPU shares, not the monitored CPU usage
NET_RX       Data received from the network
NET_TX       Data sent to the network
Obtaining the accounting information for a given user
$ oneacct -u 2 --split
# User 2 oneuser1
VID HOSTNAME REAS START_TIME END_TIME MEMORY CPU NET_RX NET_TX
1 host01 stop 06/04 14:55:49 06/04 14:56:28 1024M 1 0K 0K
1 host01 user 06/04 14:56:49 06/04 14:58:49 1024M 1 0K 0.6K
1 host02 none 06/04 14:58:49 - 1024M 1 0K 0.1K
VID HOSTNAME REAS START_TIME END_TIME MEMORY CPU NET_RX NET_TX
2 host02 erro 06/04 14:57:19 06/04 15:03:27 4G 2 0K 0K
VID HOSTNAME REAS START_TIME END_TIME MEMORY CPU NET_RX NET_TX
3 host01 none 06/04 15:04:47 - 4G 2 0K 0.1K
In case you use CSV output (--csv) you will get a header with the name of each column, followed by the data. For
example:
$ oneacct --csv
UID,VID,HOSTNAME,ACTION,REASON,START_TIME,END_TIME,MEMORY,CPU,NET_RX,NET_TX
3,68,esx2,none,none,02/17 11:16:06,-,512M,1,0K,0K
0,0,piscis,none,erro,09/18 15:57:55,09/18 15:57:57,1024M,1,0K,0K
0,0,piscis,shutdown-hard,user,09/18 16:01:55,09/18 16:19:57,1024M,1,0K,0K
0,1,piscis,none,none,09/18 16:20:25,-,1024M,1,2G,388M
0,2,esx1,shutdown-hard,user,09/18 19:27:14,09/19 12:23:45,512M,1,0K,0K
Output Reference
If you execute oneacct with the -x option, you will get an XML output defined by the following xsd:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:element name="HISTORY_RECORDS">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:element ref="HISTORY" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="HISTORY">
<xs:complexType>
<xs:sequence>
<xs:element name="OID" type="xs:integer"/>
<xs:element name="SEQ" type="xs:integer"/>
<xs:element name="HOSTNAME" type="xs:string"/>
<xs:element name="HID" type="xs:integer"/>
<xs:element name="STIME" type="xs:integer"/>
<xs:element name="ETIME" type="xs:integer"/>
<xs:element name="VMMMAD" type="xs:string"/>
<xs:element name="VNMMAD" type="xs:string"/>
<xs:element name="TMMAD" type="xs:string"/>
<xs:element name="DS_ID" type="xs:integer"/>
<xs:element name="PSTIME" type="xs:integer"/>
<xs:element name="PETIME" type="xs:integer"/>
<xs:element name="RSTIME" type="xs:integer"/>
<xs:element name="RETIME" type="xs:integer"/>
<xs:element name="ESTIME" type="xs:integer"/>
<xs:element name="EETIME" type="xs:integer"/>
<!-- REASON values:
NONE = 0 Normal termination
ERROR = 1 The VM ended in error
STOP_RESUME = 2 Stop/resume request
USER = 3 Migration request
CANCEL = 4 Cancel request
-->
<xs:element name="REASON" type="xs:integer"/>
<xs:element name="VM">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="UID" type="xs:integer"/>
<xs:element name="GID" type="xs:integer"/>
<xs:element name="UNAME" type="xs:string"/>
<xs:element name="GNAME" type="xs:string"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="PERMISSIONS" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="OWNER_U" type="xs:integer"/>
<xs:element name="OWNER_M" type="xs:integer"/>
<xs:element name="OWNER_A" type="xs:integer"/>
<xs:element name="GROUP_U" type="xs:integer"/>
<xs:element name="GROUP_M" type="xs:integer"/>
<xs:element name="GROUP_A" type="xs:integer"/>
<xs:element name="OTHER_U" type="xs:integer"/>
<xs:element name="OTHER_M" type="xs:integer"/>
<xs:element name="OTHER_A" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="LAST_POLL" type="xs:integer"/>
<!-- STATE values,
see http://opennebula.org/documentation:documentation:api#actions_for_virtual_machine_management
INIT = 0
PENDING = 1
HOLD = 2
ACTIVE = 3 In this state, the Life Cycle Manager state is relevant
STOPPED = 4
SUSPENDED = 5
DONE = 6
FAILED = 7
POWEROFF = 8
-->
<xs:element name="STATE" type="xs:integer"/>
<!-- LCM_STATE values, this sub-state is relevant only when STATE is
ACTIVE (3)
LCM_INIT = 0
PROLOG = 1
BOOT = 2
RUNNING = 3
MIGRATE = 4
SAVE_STOP = 5
SAVE_SUSPEND = 6
SAVE_MIGRATE = 7
PROLOG_MIGRATE = 8
PROLOG_RESUME = 9
EPILOG_STOP = 10
EPILOG = 11
SHUTDOWN = 12
CANCEL = 13
FAILURE = 14
CLEANUP = 15
UNKNOWN = 16
HOTPLUG = 17
SHUTDOWN_POWEROFF = 18
BOOT_UNKNOWN = 19
BOOT_POWEROFF = 20
BOOT_SUSPENDED = 21
BOOT_STOPPED = 22
-->
<xs:element name="LCM_STATE" type="xs:integer"/>
<xs:element name="RESCHED" type="xs:integer"/>
<xs:element name="STIME" type="xs:integer"/>
<xs:element name="ETIME" type="xs:integer"/>
<xs:element name="DEPLOY_ID" type="xs:string"/>
<!-- MEMORY consumption in kilobytes -->
<xs:element name="MEMORY" type="xs:integer"/>
<!-- Percentage of 1 CPU consumed (two fully consumed CPUs is 200) -->
<xs:element name="CPU" type="xs:integer"/>
<!-- NET_TX: Sent bytes to the network -->
<xs:element name="NET_TX" type="xs:integer"/>
<!-- NET_RX: Received bytes from the network -->
<xs:element name="NET_RX" type="xs:integer"/>
<xs:element name="TEMPLATE" type="xs:anyType"/>
<xs:element name="HISTORY_RECORDS">
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
6.5.3 Tuning & Extending
There are two kinds of monitoring values:
Instantaneous values: For example, VM/CPU or VM/MEMORY show the CPU and memory consumption last reported by
the monitoring probes.
Accumulative values: For example, VM/NET_TX and VM/NET_RX show the total network consumption since
the history record started.
Developers interacting with OpenNebula using the Ruby bindings can use the VirtualMachinePool.accounting method
to retrieve accounting information, filtering and ordering by multiple parameters.
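A minimal sketch of such a script is shown below. The :start_time/:end_time option names and the INFO_ALL filter constant are assumptions modeled on the oneacct -s/-e flags; check the Ruby OCA reference of your installation before relying on them.

#!/usr/bin/env ruby
require 'opennebula'
include OpenNebula

# Connect using the credentials in $ONE_AUTH
client  = Client.new
vm_pool = VirtualMachinePool.new(client)

# Retrieve accounting records for all VMs in a time window
# (option names are assumptions based on the oneacct -s/-e flags)
rc = vm_pool.accounting(VirtualMachinePool::INFO_ALL,
                        :start_time => Time.local(2012, 2, 1).to_i,
                        :end_time   => Time.local(2012, 3, 1).to_i)

if OpenNebula.is_error?(rc)
    STDERR.puts rc.message
else
    # On success, rc holds the HISTORY_RECORDS data
    puts rc.inspect
end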
6.6 Managing ACL Rules
The ACL authorization system enables fine-tuning of the allowed operations for any user, or group of users. Each
operation generates an authorization request that is checked against the registered set of ACL rules. The core then can
grant permission, or reject the request.
This allows administrators to tailor the user roles according to their infrastructure needs. For instance, using ACL
rules you could create a group of users that can see and use existing virtual resources, but not create any new ones. Or
grant permissions to a specific user to manage Virtual Networks for some of the existing groups, but not to perform
any other operation in your cloud. Some examples are provided at the end of this guide.
Please note: ACL rules are an advanced mechanism. For most use cases, you should be able to rely on the built-in
resource permissions and the ACL rules created automatically when a group is created, and when a resource provider
is added.
6.6.1 Understanding ACL Rules
Let's start with an example:
#5 IMAGE+TEMPLATE/@103 USE+MANAGE #0
This rule grants the user with ID 5 the right to perform USE and MANAGE operations over all Images and Templates
in the group with id 103.
The rule is split into four components, separated by spaces:
The User component is composed only of an ID definition.
Resources is composed of a list of '+'-separated resource types, a '/', and an ID definition.
Rights is a list of Operations separated by the '+' character.
Zone is an ID definition of the zones where the rule applies. This last part is optional, and can be ignored unless
OpenNebula is configured in a federation.
The ID definition for User in a rule is written as:
#<id> : for individual IDs
@<id> : for a group ID
*     : for All
The ID definition for a Resource has the same syntax as the ones for Users, but adding:
%<id> : for cluster IDs
Some more examples:
This rule allows all users in group 105 to create new virtual resources:
@105 VM+NET+IMAGE+TEMPLATE/* CREATE
The next one allows all users in the group 106 to use the Virtual Network 47. That means that they can instantiate VM
templates that use this network.
@106 NET/#47 USE
Note: Note the difference between "* NET/#47 USE" vs "* NET/@47 USE": All Users can use the NETWORK with ID 47, vs. All Users can use NETWORKS belonging to the Group whose ID is 47.
The following one allows users in group 106 to deploy VMs in Hosts assigned to the cluster 100:
@106 HOST/%100 MANAGE
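As an additional illustrative example (not part of the original rule set), a rule granting all users in group 105 the right to use the Datastores assigned to cluster 100 would be written as:

@105 DATASTORE/%100 USE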
6.6.2 Managing ACL Rules via Console
The ACL rules are managed using the oneacl command. The oneacl list output looks like this:
$ oneacl list
  ID     USER RES_VHNIUTGDCOZ   RID OPE_UMAC  ZONE
   0      @1      V-NI-T---O-     *     ---c    #0
   1       *      ----------Z     *     u---     *
   2      @1      -H---------     *     -m--    #0
   3      @1      --N----D---     *     u---    #0
   4      #5      --NI-T-----  @104     u---    #0
   5    @106      ---I-------   #31     u---    #0
The rules shown correspond to the following ones:
@1    VM+NET+IMAGE+TEMPLATE+DOCUMENT/*  CREATE  #0
*     ZONE/*                            USE     *
@1    HOST/*                            MANAGE  #0
@1    NET+DATASTORE/*                   USE     #0
#5    NET+IMAGE+TEMPLATE/@104           USE     #0
@106  IMAGE/#31                         USE     #0
The first four were created on bootstrap by OpenNebula, and the last two were created using oneacl:
$ oneacl create "#5 NET+IMAGE+TEMPLATE/@104 USE"
ID: 4
$ oneacl create "@106 IMAGE/#31 USE"
ID: 5
The ID column identifies each rule's ID. This ID is needed to delete rules, using oneacl delete <id>.
The next column is USER, which can be an individual user (#) or group (@) id; or all (*) users.
The Resources column lists the initials of the existing Resource types. Each rule fills in the initials of the resource types it
applies to.
V : VM
H : HOST
N : NET
I : IMAGE
U : USER
T : TEMPLATE
G : GROUP
D : DATASTORE
C : CLUSTER
O : DOCUMENT
Z : ZONE
RID stands for Resource ID; it can be an individual object (#), group (@) or cluster (%) id, or all (*) objects.
The next Operations column lists the allowed operations initials.
U : USE
M : MANAGE
A : ADMIN
C : CREATE
And the last column, Zone, shows the zone(s) where the rule applies. It can be an individual zone id (#), or all (*)
zones.
6.6.3 Managing ACLs via Sunstone
Sunstone offers a very intuitive and easy way of managing ACLs.
Select ACLs in the left-side menu to access a view of the current ACLs defined in OpenNebula:
This view is designed to make it easy to understand what the purpose of each ACL is. You can create new ACLs by clicking on
the New button at the top. A dialog will pop up:
In the creation dialog you can easily define the resources affected by the rule and the permissions that are granted upon
them.
6.6.4 How Permission is Granted or Denied
Note: Visit the XML-RPC API reference documentation for a complete list of the permissions needed by each
OpenNebula command.
For the internal Authorization in OpenNebula, there is an implicit rule:
The oneadmin user, or users in the oneadmin group are authorized to perform any operation.
If the resource is one of type VM, NET, IMAGE, TEMPLATE, or DOCUMENT the object's permissions are checked. For
instance, this is an example of the oneimage show output:
$ oneimage show 2
IMAGE 2 INFORMATION
ID : 2
[...]
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---
The output above shows that the owner of the image has USE and MANAGE rights.
If none of the above conditions are true, then the set of ACL rules is iterated until one of the rules allows the operation.
An important concept about the ACL set is that each rule adds new permissions, and they can't restrict existing ones:
if any rule grants permission, the operation is allowed.
This is important because you have to be aware of the rules that apply to a user and his group. Consider the following
example: if a user #7 is in the group @108, with the following existing rule:
@108 IMAGE/#45 USE+MANAGE
Then the following rule won't have any effect:
#7 IMAGE/#45 USE
6.7 Managing Quotas
This guide will show you how to set the usage quotas for users and groups.
6.7.1 Overview
The quota system tracks user and group usage of system resources, and allows the system administrator to set limits
on the usage of these resources. Quota limits can be set for:
users, to individually limit the usage made by a given user.
groups, to limit the overall usage made by all the users in a given group. This can be of special interest for the
OpenNebula Zones and Virtual Data Center (VDC) components.
6.7.2 Which Resources Can Be Limited?
The quota system allows you to track and limit usage on:
Datastores, to control the amount of storage capacity allocated to each user/group for each datastore.
Compute, to limit the overall memory, CPU or number of VM instances.
Network, to limit the number of IPs a user/group can get from a given network. This is especially interesting for
networks with public IPs, which usually are a limited resource.
Images, to limit how many VM instances from a given user/group can use a given image. You can
take advantage of this quota when the image contains consumable resources (e.g. software licenses).
6.7.3 Defining User/Group Quotas
Usage quotas are set in a traditional template syntax (either plain text or XML). The following table explains the
attributes needed to set each quota:
Datastore Quotas. Attribute name: DATASTORE

DATASTORE Attribute   Description
ID                    ID of the Datastore to set the quota for
SIZE                  Maximum size in MB that can be used in the datastore
IMAGES                Maximum number of images that can be created in the datastore

Compute Quotas. Attribute name: VM

VM Attribute    Description
VMS             Maximum number of VMs that can be created
MEMORY          Maximum memory in MB that can be requested by user/group VMs
CPU             Maximum CPU capacity that can be requested by user/group VMs
VOLATILE_SIZE   Maximum volatile disks size (in MB) that can be requested by user/group VMs

Network Quotas. Attribute name: NETWORK

NETWORK Attribute   Description
ID                  ID of the Network to set the quota for
LEASES              Maximum IPs that can be leased from the Network

Image Quotas. Attribute name: IMAGE

IMAGE Attribute   Description
ID                ID of the Image to set the quota for
RVMS              Maximum VMs that can use this image at the same time
For each quota, there are two special limits:
0 means unlimited
-1 means that the default quota will be used
Warning: Each quota has an associated usage counter named <QUOTA_NAME>_USED. For example,
MEMORY_USED means the total memory used by user/group VMs, and its associated quota is MEMORY.
The following template shows a quota example for a user in plain text. It limits the overall usage in Datastore 1 to
20GB (for an unlimited number of images); the number of VMs that can be created to 4, with an overall maximum memory of
2048 MB and 5 CPUs; the number of leases from network 1 to 4; and image 1 can only be used by 3 VMs at the same time:
DATASTORE=[
ID="1",
IMAGES="0",
SIZE="20480"
]
VM=[
CPU="5",
MEMORY="2048",
VMS="4",
VOLATILE_SIZE="-1"
]
NETWORK=[
ID="1",
LEASES="4"
]
IMAGE=[
ID="1",
RVMS="3"
]
IMAGE=[
ID="2",
RVMS="0"
]
Warning: Note that whenever a network, image, datastore or VM is used, the corresponding quota counters are
created for the user with an unlimited value. This allows tracking the usage of each user/group even when quotas
are not used.
6.7.4 Setting User/Group Quotas
User/group quotas can be easily set up either through the command line interface or Sunstone. Note that you need
MANAGE permissions to set the quota of a user, and ADMIN permissions to set the quota of a group. In this way, by
default, only oneadmin can set quotas for a group, but if you define a group manager (as in a VDC) she can set specific
usage quotas for the users in her group (so distributing resources as required). You can always change this behaviour
by setting the appropriate ACL rules.
To set the quota for a user, e.g. userA, just type:
$ oneuser quota userA
This will open an editor session to edit a quota template (with some tips about the syntax).
Warning: Usage metrics are included for information purposes (e.g. CPU_USED, MEMORY_USED,
LEASES_USED...); you cannot modify them.
Warning: You can add as many resource quotas as needed even if they have not been automatically initialized.
Similarly, you can set the quotas for group A with:
$ onegroup quota groupA
There is a batchquota command that allows you to set the same quotas for several users or groups:
$ oneuser batchquota userA,userB,35
$ onegroup batchquota 100..104
You can also set the user/group quotas in Sunstone through the user/group tab.
6.7.5 Setting Default Quotas
There are two default quota limit templates, one for users and another for groups. This template applies to all
users/groups, unless they have an individual limit set.
Use the oneuser/onegroup defaultquota command.
$ oneuser defaultquota
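The group counterpart works the same way, opening an editor session for the group default quota template:

$ onegroup defaultquota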
6.7.6 Checking User/Group Quotas
Quota limits and usage for each user/group are included as part of their standard information, so they can be easily checked
with the usual commands. Check the following examples:
$ oneuser show uA
USER 2 INFORMATION
ID : 2
NAME : uA
GROUP : gA
PASSWORD : a9993e364706816aba3e25717850c26c9cd0d89d
AUTH_DRIVER : core
ENABLED : Yes
USER TEMPLATE
RESOURCE USAGE & QUOTAS
DATASTORE ID IMAGES (used) IMAGES (limit) SIZE (used) SIZE (limit)
1 1 0 1024 0
VMS MEMORY (used) MEMORY (limit) CPU (used) CPU (limit)
0 1024 0 1 0
NETWORK ID LEASES (used) LEASES (limit)
1 1 0
IMAGE ID RVMS (used) RVMS (limit)
1 0 0
2 0 0
And for the group:
$ onegroup show gA
GROUP 100 INFORMATION
ID : 100
NAME : gA
USERS
ID
2
3
RESOURCE USAGE & QUOTAS
DATASTORE ID IMAGES (used) IMAGES (limit) SIZE (used) SIZE (limit)
1 2 0 2048 0
VMS MEMORY (used) MEMORY (limit) CPU (used) CPU (limit)
0 2048 0 2 0
NETWORK ID LEASES (used) LEASES (limit)
1 1 0
2 1 0
IMAGE ID RVMS (used) RVMS (limit)
1 0 0
2 0 0
5 1 0
6 1 0
This information is also available through Sunstone as part of the user/group information.
CHAPTER
SEVEN
AUTHENTICATION
7.1 External Auth Overview
OpenNebula comes by default with an internal user/password authentication system, see the Users & Groups Subsystem guide for more information. You can enable an external Authentication driver.
7.1.1 Authentication
In the figure to the right of this text you can see three authentication configurations you can customize in OpenNebula.
a) CLI Authentication
You can choose from the following authentication drivers to access OpenNebula from the command line:
Built-in User/Password
SSH Authentication
X509 Authentication
LDAP Authentication
b) Sunstone Authentication
By default, users with the core authentication driver (user/password) can log in to Sunstone. You can enable users
with the x509 authentication driver to log in using an external SSL proxy (e.g. Apache).
Proceed to the Sunstone documentation to configure the x509 access:
Sunstone Authentication Methods
c) Servers Authentication
OpenNebula ships with three servers: Sunstone, EC2 and OCCI. When a user interacts with one of them, the server
authenticates the request and then forwards the requested operation to the OpenNebula daemon.
The forwarded requests are encrypted by default using a Symmetric Key mechanism. The following guide shows how
to strengthen the security of these requests using x509 certificates. This is especially relevant if you are running your
server in a machine other than the frontend.
Cloud Servers Authentication
7.2 SSH Auth
This guide will show you how to enable and use the SSH authentication for the OpenNebula CLI. Using this authentication method, users login to OpenNebula with a token encrypted with their private ssh keys.
7.2.1 Requirements
You don't need to install any additional software.
7.2.2 Considerations & Limitations
With the current release, this authentication method is only valid to interact with OpenNebula using the CLI.
7.2.3 Configuration
OpenNebula Configuration
The Auth MAD and ssh authentication are enabled by default. In case it does not work, make sure that the authentication
method is in the list of enabled methods.
AUTH_MAD = [
    executable = "one_auth_mad",
    authn = "ssh,x509,ldap,server_cipher,server_x509"
]
There is an external plain user/password authentication driver, and existing accounts will keep working as usual.
7.2.4 Usage
Create New Users
This authentication method uses standard ssh RSA keypairs for authentication. Users can create these files if they don't
exist using this command:
newuser@frontend $ ssh-keygen -t rsa
OpenNebula commands look for the files generated at the standard location ($HOME/.ssh/id_rsa) so it is a good
idea not to change the default path. It is also a good idea to protect the private key with a password.
The users requesting a new account have to generate a public key and send it to the administrator. The way to extract
it is the following:
newuser@frontend $ oneuser key
Enter PEM pass phrase:
MIIBCAKCAQEApUO+JISjSf02rFVtDr1yar/34EoUoVETx0n+RqWNav+5wi+gHiPp3e03AfEkXzjDYi8F
voS4a4456f1OUQlQddfyPECn59OeX8Zu4DH3gp1VUuDeeE8WJWyAzdK5hg6F+RdyP1pT26mnyunZB8Xd
bll8seoIAQiOS6tlVfA8FrtwLGmdEETfttS9ukyGxw5vdTplse/fcam+r9AXBR06zjc77x+DbRFbXcgI
1XIdpVrjCFL0fdN53L0aU7kTE9VNEXRxK8sPv1Nfx+FQWpX/HtH8ICs5WREsZGmXPAO/IkrSpMVg5taS
jie9JAQOMesjFIwgTWBUh6cNXuYsQ/5wIwIBIw==
The string written to the console must be sent to the administrator, so they can create the new user in a similar way to
the default user/password authentication users.
The following command will create a new user with username newuser, assuming that the previous public key is
saved in the text file /tmp/pub_key:
oneadmin@frontend $ oneuser create newuser --ssh --read-file /tmp/pub_key
Instead of using the --read-file option, the public key could be specified as the second parameter.
If the administrator has access to the user's private ssh key, they can create new users with the following command:
oneadmin@frontend $ oneuser create newuser --ssh --key /home/newuser/.ssh/id_rsa
Update Existing Users to SSH
You can change the authentication method of an existing user to SSH with the following commands:
oneadmin@frontend $ oneuser chauth <id|name> ssh
oneadmin@frontend $ oneuser passwd <id|name> --ssh --read-file /tmp/pub_key
As with the create command, you can specify the public key as the second parameter, or use the user's private key
with the --key option.
User Login
Users must execute the oneuser login command to generate a login token, and export the new ONE_AUTH environment variable. The command requires the OpenNebula username, and the authentication method (--ssh in this
case).
newuser@frontend $ oneuser login newuser --ssh
export ONE_AUTH=/home/newuser/.one/one_ssh
newuser@frontend $ export ONE_AUTH=/home/newuser/.one/one_ssh
The default ssh key is assumed to be in ~/.ssh/id_rsa; otherwise the path can be specified with the --key option.
The generated token has a default expiration time of 1 hour. You can change that with the --time option.
7.3 x509 Authentication
This guide will show you how to enable and use the x509 certificates authentication with OpenNebula. The x509
certificates can be used in two different ways in OpenNebula.
The first option that is explained in this guide enables us to use certificates with the CLI. In this case the user will generate a login token with his private key, OpenNebula will validate the certificate and decrypt the token to authenticate
the user.
The second option enables us to use certificates with Sunstone and the Public Cloud servers included in OpenNebula.
In this case the authentication is leveraged to Apache or any other SSL capable http proxy that has to be configured by
the administrator. If this certificate is validated the server will encrypt those credentials using a server certificate and
will send the token to OpenNebula.
7.3.1 Requirements
If you want to use the x509 certificates with Sunstone or one of the Public Clouds, you must deploy an SSL capable
http proxy on top of them in order to handle the certificate validation.
7.3.2 Considerations & Limitations
The X509 driver uses the certificate DN as user passwords. The x509 driver will remove any space in the certificate
DN. This may cause problems in the unlikely situation that you are using a CA signing certificate subjects that only
differ in spaces.
7.3.3 Configuration
The following table summarizes the available options for the x509 driver (/etc/one/auth/x509_auth.conf):
VARIABLE     VALUE
:ca_dir      Path to the trusted CA directory. It should contain the trusted CAs for the server; each CA certificate
             should be named <CA_hash>.0
:check_crl   By default, if you place CRL files in the CA directory in the form <CA_hash>.r0, OpenNebula will check
             them. You can enforce CRL checking by defining :check_crl, i.e. authentication will fail if no CRL file
             is found. You can always disable this feature by moving or renaming the .r0 files
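Putting both options together, a minimal /etc/one/auth/x509_auth.conf could look like this (values are illustrative; the certificates path matches the directory used in the steps below):

:ca_dir: "/etc/one/auth/certificates"
:check_crl: true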
Follow these steps to change oneadmin's authentication method to x509:
Warning: You should have another account in the oneadmin group, so you can revert these steps if the process
fails.
Change the oneadmin password to the oneadmin certificate DN.
oneadmin@frontend $ oneuser chauth 0 x509 --x509 --cert /tmp/newcert.pem
Add trusted CA certificates to the certificates directory:
$ openssl x509 -noout -hash -in cacert.pem
78d0bbd8
$ sudo cp cacert.pem /etc/one/auth/certificates/78d0bbd8.0
Create a login for oneadmin using the x509 option. This token has a default expiration time set to 1 hour; you
can change this value using the --time option.
oneadmin@frontend $ oneuser login oneadmin --x509 --cert newcert.pem --key newkey.pem
Enter PEM pass phrase:
export ONE_AUTH=/home/oneadmin/.one/one_x509
Set ONE_AUTH to the x509 login file:
oneadmin@frontend $ export ONE_AUTH=/home/oneadmin/.one/one_x509
7.3.4 Usage
Add and Remove Trusted CA Certificates
You need to copy all trusted CA certificates to the certificates directory, renaming each of them as <CA_hash>.0.
The hash can be obtained with the openssl command:
$ openssl x509 -noout -hash -in cacert.pem
78d0bbd8
$ sudo cp cacert.pem /etc/one/auth/certificates/78d0bbd8.0
To stop trusting a CA, simply remove its certificate from the certificates directory.
This process can be done without restarting OpenNebula; the driver will look for the certificates each time an authentication request is made.
Create New Users
The users requesting a new account have to send their certificate, signed by a trusted CA, to the administrator. The
following command will create a new user with username newuser, assuming that the user's certificate is saved in
the file /tmp/newcert.pem:
oneadmin@frontend $ oneuser create newuser --x509 --cert /tmp/newcert.pem
This command will create a new user whose password contains the subject DN of his certificate. Therefore if the
subject DN is known by the administrator the user can be created as follows:
oneadmin@frontend $ oneuser create newuser --x509 "user_subject_DN"
Update Existing Users to x509 & Multiple DN
You can change the authentication method of an existing user to x509 with the following command:
Using the user certificate:
oneadmin@frontend $ oneuser chauth <id|name> x509 --x509 --cert /tmp/newcert.pem
Using the user certificate subject DN:
oneadmin@frontend $ oneuser chauth <id|name> x509 --x509 "user_subject_DN"
You can also map multiple certificates to the same OpenNebula account. Just add each certificate DN separated with
'|' to the password field.
oneadmin@frontend $ oneuser passwd <id|name> --x509 "/DC=es/O=one/CN=user|/DC=us/O=two/CN=user"
User Login
Users must execute the oneuser login command to generate a login token, and export the new ONE_AUTH environment variable. The command requires the OpenNebula username, and the authentication method (--x509 in this
case).
newuser@frontend $ oneuser login newuser --x509 --cert newcert.pem --key newkey.pem
Enter PEM pass phrase:
export ONE_AUTH=/home/user/.one/one_x509
newuser@frontend $ export ONE_AUTH=/home/user/.one/one_x509
The generated token has a default expiration time of 1 hour. You can change that with the --time option.
7.3.5 Tuning & Extending
The x509 authentication method is just one of the drivers enabled in AUTH_MAD. All drivers are located in
/var/lib/one/remotes/auth.
OpenNebula is configured to use x509 authentication by default. You can customize the enabled drivers in the
AUTH_MAD attribute of oned.conf. More than one authentication method can be defined:
AUTH_MAD = [
    executable = "one_auth_mad",
    authn = "ssh,x509,ldap,server_cipher,server_x509"
]
7.3.6 Enabling x509 auth in Sunstone
Update the /etc/one/sunstone-server.conf :auth parameter to use the x509 auth:
:auth: x509
7.4 LDAP Authentication
The LDAP Authentication addon permits users to have the same credentials as in LDAP, effectively centralizing
authentication. Enabling it will allow any correctly authenticated LDAP user to use OpenNebula.
7.4.1 Prerequisites
Warning: This Addon requires the net/ldap ruby library provided by the net-ldap gem.
This Addon will not install any LDAP server or configure it in any way. It will not create, delete or modify any entry
in the LDAP server it connects to. The only requirements are the ability to connect to an already running LDAP server,
being able to perform a successful ldapbind operation, and having a user able to perform searches of users; therefore no
special attributes or values are required in the LDIF entry of the authenticating user.
7.4.2 Considerations & Limitations
The LDAP auth driver has a bug that does not let it connect to TLS LDAP instances. A patch is available in the bug issue
to fix this. The fix will be applied in future releases.
7.4.3 Configuration
The configuration file for the auth module is located at /etc/one/auth/ldap_auth.conf. This is the default configuration:
server 1:
    # Ldap user able to query, if not set connects as anonymous. For
    # Active Directory append the domain name. Example:
    # Administrator@my.domain.com
    #:user: admin
    #:password: password

    # Ldap authentication method
    :auth_method: :simple

    # Ldap server
    :host: localhost
    :port: 389

    # base hierarchy where to search for users and groups
    :base: dc=domain

    # group the users need to belong to. If not set any user will do
    #:group: cn=cloud,ou=groups,dc=domain

    # field that holds the user name, if not set cn will be used
    :user_field: cn

    # for Active Directory use this user_field instead
    #:user_field: sAMAccountName

# this example server won't be called as it is not in the :order list
server 2:
    :auth_method: :simple
    :host: localhost
    :port: 389
    :base: dc=domain
    #:group: cn=cloud,ou=groups,dc=domain
    :user_field: cn

# List the order the servers are queried
:order:
    - server 1
    #- server 2
The structure is a hash where any key different from :order will contain the configuration of one LDAP server we want
to query. The special key :order holds an array with the order in which we want to query the configured servers. Any server
not listed in :order won't be queried.
VARIABLE       DESCRIPTION
:user          Name of the user that can query LDAP. Do not set it if you can perform queries anonymously
:password      Password for the user defined in :user. Do not set if anonymous access is enabled
:auth_method   Can be set to :simple_tls if an SSL connection is needed
:host          Host name of the LDAP server
:port          Port of the LDAP server
:base          Base leaf where to perform user searches
:group         If set, the users need to belong to this group
:user_field    Field in LDAP that holds the user name
To enable LDAP authentication the described parameters should be configured. OpenNebula must also be configured
to enable external authentication. Uncomment these lines in /etc/one/oned.conf and add ldap and default
(more on this later) as enabled authentication methods.
AUTH_MAD = [
    executable = "one_auth_mad",
    authn = "default,ssh,x509,ldap,server_cipher,server_x509"
]
To be able to use this driver for users that are still not in the user database you must set it to the default driver. To
do this go to the auth drivers directory and copy the directory ldap to default. In system-wide installations you
can do this using this command:
$ cp -R /var/lib/one/remotes/auth/ldap /var/lib/one/remotes/auth/default
7.4.4 User Management
Using the LDAP authentication module, the administrator doesn't need to create users with the oneuser command,
as this will be done automatically. Users should add their credentials to the $ONE_AUTH file (usually
$HOME/.one/one_auth) in this fashion:
<user_dn>:ldap_password
where
<user_dn> the DN of the user in the LDAP service
ldap_password is the password of the user in the LDAP service
DNs With Special Characters
When the user DN or password contains blank spaces, the LDAP driver will escape them so they can be used to create
OpenNebula users. Therefore, users need to set up their $ONE_AUTH file accordingly.
Users can easily create escaped $ONE_AUTH tokens with the command oneuser encode <user>
[<password>], as an example:
$ oneuser encode cn=First Name,dc=institution,dc=country pass word
cn=First%20Name,dc=institution,dc=country:pass%20word
The output of this command should be put in the $ONE_AUTH le.
7.4.5 Active Directory
LDAP Auth drivers are able to connect to Active Directory. You will need:
Active Directory server with support for simple user/password authentication.
User with read permissions in the Active Directory users tree.
You will need to change the following values in the conguration le (/etc/one/auth/ldap_auth.conf):
:user: the Active Directory user with read permissions in the users tree plus the domain. For example, for the user Administrator at domain win.opennebula.org you specify it as
Administrator@win.opennebula.org
:password: password of this user
:host: hostname or IP of the Domain Controller
:base: base DN to search for users. You need to decompose the full domain name and use
each part as a DN component. For example, for win.opennebula.org you get the base DN:
DC=win,DC=opennebula,DC=org
:user_field: set it to sAMAccountName
:group parameter is still not supported for Active Directory, leave it commented.
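Putting these values together, an illustrative ldap_auth.conf for Active Directory could look like this (the Domain Controller host name and credentials are placeholders):

server 1:
    :user: Administrator@win.opennebula.org
    :password: password
    :auth_method: :simple
    :host: dc.win.opennebula.org
    :port: 389
    :base: DC=win,DC=opennebula,DC=org
    :user_field: sAMAccountName

:order:
    - server 1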
7.4.6 Enabling LDAP auth in Sunstone
Update the /etc/one/sunstone-server.conf :auth parameter to use the opennebula driver:
:auth: opennebula
Using this method the credentials provided in the login screen will be sent to the OpenNebula core and the authentication will be delegated to the OpenNebula auth system, using the specified driver for that user. Therefore any
OpenNebula auth driver can be used through this method to authenticate the user (e.g. LDAP).
To automatically encode credentials as explained in the DNs With Special Characters section, also add this parameter to
the Sunstone configuration:
:encode_user_password: true
CHAPTER
EIGHT
SUNSTONE GUI
8.1 OpenNebula Sunstone: The Cloud Operations Center
OpenNebula Sunstone is the OpenNebula Cloud Operations Center, a Graphical User Interface (GUI) intended for
regular users and administrators that simplifies the typical management operations in private and hybrid cloud infrastructures. OpenNebula Sunstone allows you to easily manage all OpenNebula resources and perform typical operations on
them.
OpenNebula Sunstone can be adapted to different user roles. For example, it will only show the resources the users
have access to. Its behaviour can be customized and extended via views.
8.1.1 Requirements
You must have an OpenNebula site properly configured and running to use OpenNebula Sunstone; be sure to check
the OpenNebula Installation and Configuration Guides to set up your private cloud first. This guide also assumes that
you are familiar with the configuration and use of OpenNebula.
OpenNebula Sunstone was installed during the OpenNebula installation. If you followed the installation guide then
you already have all ruby gem requirements. Otherwise, run the install_gems script as root:
# /usr/share/one/install_gems sunstone
The Sunstone Operation Center offers the possibility of starting a VNC session to a Virtual Machine. This is done
by using a VNC websocket-based client (noVNC) on the client side and a VNC proxy translating and redirecting the
connections on the server-side.
Requirements:
Websockets-enabled browser (optional): Firefox and Chrome support websockets. In some versions of Firefox
manual activation is required. If websockets are not enabled, flash emulation will be used.
Installing the python-numpy package is recommended for better VNC performance.
8.1.2 Considerations & Limitations
OpenNebula Sunstone supports Firefox (> 3.5) and Chrome browsers. Internet Explorer, Opera and others are not
supported and may not work well.
8.1.3 Configuration
Cannot connect to OneFlow server
The last two tabs, OneFlow Services and Templates, will show the following message:
Cannot connect to OneFlow server
You need to start the OneFlow component following this guide, or disable these two menu entries in the admin.yaml
and user.yaml sunstone views.
sunstone-server.conf
The Sunstone configuration file can be found at /etc/one/sunstone-server.conf. It uses YAML syntax to define
some options:
Available options are:
Option                   Description
:tmpdir                  Uploaded images will be temporally stored in this folder before being copied to OpenNebula
:one_xmlrpc              OpenNebula daemon host and port
:host                    IP address on which the server will listen on. 0.0.0.0 for everyone. 127.0.0.1 by default.
:port                    Port on which the server will listen. 9869 by default.
:sessions                Method of keeping user sessions. It can be memory or memcache. For servers that spawn more
                         than one process (like Passenger or Unicorn) memcache should be used
:memcache_host           Host where the memcached server resides
:memcache_port           Port of the memcached server
:memcache_namespace      memcache namespace where to store sessions. Useful when the memcached server is used by
                         more services
:debug_level             Log debug level: 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
:auth                    Authentication driver for incoming requests. Possible values are sunstone, opennebula
                         and x509. Check authentication methods for more info
:core_auth               Authentication driver to communicate with OpenNebula core. Possible values are x509 or
                         cipher. Check cloud_auth for more information
:lang                    Default language for the Sunstone interface. This is the default language that will be used if the user
                         has not defined a variable LANG with a different valid value in its user template
:vnc_proxy_port          Base port for the VNC proxy. The proxy will run on this port as long as the Sunstone server does.
                         29876 by default.
:vnc_proxy_support_wss   yes, no, only. If enabled, the proxy will be set up with a certificate and a key to use secure
                         websockets. If set to only, the proxy will only accept encrypted connections; otherwise it will
                         accept both encrypted and unencrypted ones.
:vnc_proxy_cert          Full path to the certificate file for wss connections.
:vnc_proxy_key           Full path to the key file. Not necessary if the key is included in the certificate.
:vnc_proxy_ipv6          Enable IPv6 for noVNC (true or false)
:table_order             Default table order; resources get ordered by ID in asc or desc order.
:marketplace_username    Username credential to connect to the Marketplace.
:marketplace_password    Password to connect to the Marketplace.
:marketplace_url         Endpoint to connect to the Marketplace. If commented, a 503 service unavailable error
                         will be returned to clients.
:oneflow_server          Endpoint to connect to the OneFlow server.
:routes                  List of files containing custom routes to be loaded. Check server plugins for more info.
Warning: In order to access Sunstone from somewhere other than localhost you need to set the server's public IP
in the :host option. Otherwise it will not be reachable from the outside.
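As an illustrative excerpt, a small deployment listening on all interfaces could use values like these (all option names come from the table above; values are examples, not recommendations):

:host: 0.0.0.0
:port: 9869
:one_xmlrpc: http://localhost:2633/RPC2
:sessions: memory
:debug_level: 3
:auth: sunstone
:core_auth: cipher
:vnc_proxy_port: 29876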
Starting Sunstone
To start Sunstone just issue the following command as oneadmin
$ sunstone-server start
You can find the Sunstone server log file in /var/log/one/sunstone.log. Errors are logged in
/var/log/one/sunstone.error.
To stop the Sunstone service:
$ sunstone-server stop
VNC Troubleshooting
There can be multiple reasons that may prevent noVNC from correctly connecting to the machines. Here's a checklist
of common problems:
noVNC requires Python >= 2.5 for the websockets proxy to work. You may also need additional modules such as
python2<version>-numpy.
You can retrieve useful information from /var/log/one/novnc.log
You must have a GRAPHICS section in the VM template enabling VNC, as stated in the documentation. Make
sure the attribute IP is set correctly (0.0.0.0 to allow connections from everywhere); otherwise, no connections
will be allowed from the outside.
Your browser must support websockets, and have them enabled. This is the default in the latest Chrome and Firefox,
but former versions of Firefox (e.g. 3.5) required manual activation. Otherwise Flash emulation will be used.
Make sure there are no firewalls blocking the connections. The proxy will redirect the websocket data from
the VNC proxy port to the VNC port stated in the template of the VM. The value of the proxy port is defined in
sunstone-server.conf.
Make sure that you can connect directly from the Sunstone frontend to the VM using a normal VNC client tool
such as vncviewer.
When using secure websockets, make sure that your certificate and key (if not included in the certificate) are
correctly set in the Sunstone configuration files. Note that your certificate must be valid and trusted for the wss
connection to work. If you are working with a certificate that is not accepted by the browser, you can manually add it to the browser trust-list by visiting https://sunstone.server.address:vnc_proxy_port.
The browser will warn that the certificate is not secure and prompt you to manually trust it.
Make sure that you have not checked the Secure websockets connection option in the Configuration dialog
if your proxy has not been configured to support them. The connection will fail if so.
If your connection is very, very, very slow, there might be a token expiration issue. Please try the manual proxy
launch as described below to check it.
Doesn't work yet? Try launching Sunstone, killing the websockify proxy and relaunching the proxy manually
in a console window with the command that is logged at the beginning of /var/log/one/novnc.log.
You must generate a lock file containing the PID of the python process in /var/lock/one/.novnc.lock.
Leave it running and click on the VNC icon on Sunstone for the same VM again. You should see some output
from the proxy in the console and hopefully the cause of why the connection does not work.
Please contact the user list only when you have gone through the suggestions above, and provide full Sunstone
logs, the errors shown and any relevant information about your infrastructure (whether there are firewalls, etc.).
8.1.4 Tuning & Extending
For more information on how to customize and extend your Sunstone deployment use the following links:
Sunstone Views, different roles different views.
Security & Authentication Methods, improve security with x509 authentication and SSL
Advanced Deployments, improving scalability and isolating the server
8.2 Sunstone Views
Using the new OpenNebula Sunstone Views you will be able to provide a simplified UI aimed at end-users of an
OpenNebula cloud. The OpenNebula Sunstone Views are fully customizable, so you can easily enable or disable
specific information tabs or action buttons. You can define multiple cloud views for different user groups. Each view
defines a set of UI components so each user just accesses and views the parts of the cloud relevant to her role.
8.2.1 Default Views
OpenNebula provides default admin, vdcadmin, user and cloud views that implement four common use cases.
By default, the admin view is only available to the oneadmin group. New users will be included in the users group
and will use the default user view.
Admin View
This view provides full control of the cloud.
VDCAdmin View
This view provides control of all the resources belonging to a Virtual DataCenter (VDC), but with no access to resources outside that VDC. It is basically an Admin view restricted to the physical and virtual resources of the VDC,
with the ability to create new users within the VDC.
User View
In this view users will not be able to manage nor retrieve the hosts and clusters of the cloud. They will be
able to see Datastores and Virtual Networks in order to use them when creating a new Image or Virtual Machine, but they will not be able to create new ones. For more information about this view, please check the
/etc/one/sunstone-views/user.yaml file.
Cloud View
This is a simplified view mainly intended for users that just require a portal where they can provision new virtual machines easily. They just have to select one of the available templates and the operating system that will run in this virtual
machine. For more information about this view, please check the /etc/one/sunstone-views/cloud.yaml
file.
In this scenario the cloud administrator must prepare a set of templates and images and make them available to the
cloud users. Templates must define all the required parameters and just leave the DISK section empty, so the user can
select any of the available images. New virtual machines will be created merging the information provided by the user
(image, vm_name...) and the base template. Thereby, the user doesn't have to know any details of the infrastructure
such as networking or storage. For more information on how to configure this scenario see this guide.
8.2.2 Requirements
OpenNebula Sunstone Views do not require any additional service to run. You may want to review the Sunstone
configuration to deploy advanced setups, to scale the access to the web interface or to use SSL security.
8.2.3 Usage
Sunstone users can configure several options from the configuration tab:
Language: select the language that they want to use for the UI.
Use secure websockets for VNC: Try to connect using secure websockets when starting VNC sessions.
Views: change between the different available views for the given user/group
Display Name: If the user wishes to customize the username that is shown in Sunstone, it is possible to do so by
adding a special parameter named SUNSTONE_DISPLAY_NAME with the desired value. It is worth noting that
Cloud Administrators may want to automate this with a hook on user create in order to fetch the user name from
outside OpenNebula.
These options are saved in the user template. If not defined, the defaults from sunstone-server.conf are taken.
Changing your View
If more than one view is available for this user, she can easily change between them in the settings window, along
with other settings (e.g. language).
Warning: By default users in the oneadmin group have access to all the views; users in the users group can only
use the users view. If you want to expose the cloud view to a given group of users, you have to modify
sunstone-views.yaml. For more information check the configuring access to views section.
Internationalization and Languages
Sunstone supports multiple languages. If you want to contribute a new language, make corrections or complete a
translation, you can visit our:
Transifex project page
Translating through Transifex is easy and quick. All translations should be submitted via Transifex.
Users can update or contribute translations anytime. Prior to every release, normally after the beta release, a call for
translations will be made in the user list. Then the source strings will be updated in Transifex so all the translations
can be updated to the latest OpenNebula version. Translations with an acceptable level of completeness will be added
to the final OpenNebula release.
8.2.4 Advanced Configuration
There are three basic areas that can be tuned to adapt the default behavior to your provisioning needs:
Define views, the set of UI components that will be enabled.
Define the users and groups that may access each view.
Brand your OpenNebula Sunstone portal.
Defining a New OpenNebula Sunstone View or Customizing an Existing One
View definitions are placed in the /etc/one/sunstone-views directory. Each view is defined by a configuration
file, in the form:
<view_name>.yaml
The name of the view is the filename without the yaml extension. The default views are defined by the user.yaml
and admin.yaml files, as shown below:
etc/
    ...
    |-- sunstone-views/
    |   |-- admin.yaml    <--- the admin view
    |   `-- user.yaml
    `-- sunstone-views.yaml
    ...
The content of a view file specifies the tabs available in the view (note: a tab is one of the main sections of the UI, those
in the left-side menu). Each tab can be enabled or disabled by updating the enabled_tabs: attribute. For example,
to disable the Clusters tab, just set the clusters-tab value to false:
enabled_tabs:
    dashboard-tab: true
    system-tab: true
    users-tab: true
    groups-tab: true
    acls-tab: true
    vresources-tab: true
    vms-tab: true
    templates-tab: true
    images-tab: true
    files-tab: true
    infra-tab: true
    clusters-tab: false
    hosts-tab: true
    datastores-tab: true
    vnets-tab: true
    marketplace-tab: true
    oneflow-dashboard: true
    oneflow-services: true
    oneflow-templates: true
Each tab can be tuned by selecting:
The bottom tabs available (panel_tabs: attribute) in the tab; these are the tabs activated when an object is
selected (e.g. the information, or capacity tabs in the Virtual Machines tab).
The columns shown in the main information table (table_columns: attribute).
The action buttons available to the view (actions: attribute).
The attributes in each of the above sections should be self-explanatory. As an example, the following section defines
a simplified datastore tab, without the info panel_tab and with no action buttons:
datastores-tab:
    panel_tabs:
        datastore_info_tab: false
        datastore_image_tab: true
    table_columns:
        - 0     # Checkbox
        - 1     # ID
        - 2     # Owner
        - 3     # Group
        - 4     # Name
        - 5     # Cluster
        #- 6    # Basepath
        #- 7    # TM
        #- 8    # DS
        #- 9    # Type
    actions:
        Datastore.refresh: true
        Datastore.create_dialog: false
        Datastore.addtocluster: false
        Datastore.chown: false
        Datastore.chgrp: false
        Datastore.chmod: false
        Datastore.delete: false
Warning: The easiest way to create a custom view is to copy the admin.yaml file to the new view name, then harden
it as needed.
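For example (the new view name operator is illustrative):

$ cp /etc/one/sunstone-views/admin.yaml /etc/one/sunstone-views/operator.yaml

Then edit the enabled_tabs: and actions: attributes in operator.yaml, and make the view available to the relevant users or groups in /etc/one/sunstone-views.yaml as described in the next section.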
Configuring Access to the Views
Once you have defined and customized the UI views for the different roles, you need to define which user groups or
users may access each view. This information is defined in /etc/one/sunstone-views.yaml.
The views can be defined for:
Each user (users: section), list each user and the set of views available for her.
Each group (groups: section), list the set of views for the group.
The default view, if a user is not listed in the users: section, nor its group in the groups: section, the default
views will be used.
For example, the following enables the user (user.yaml) and the cloud (cloud.yaml) views for helen, and the cloud
(cloud.yaml) view for the group cloud-users. If more than one view is available for a given user, the first one is the default:
...
users:
    helen:
        - cloud
        - user
groups:
    cloud-users:
        - cloud
default:
    - user
A Different Endpoint for Each View
OpenNebula Sunstone views can be adapted to deploy a different endpoint for each kind of user. For example, if you
want an endpoint for the admins and a different one for the cloud users, you just have to deploy a new Sunstone
server (TODO deploy in a different machine link) and set a default view for each Sunstone instance:
# Admin sunstone
cat /etc/one/sunstone-server.conf
...
:host: admin.sunstone.com
...
cat /etc/one/sunstone-views.yaml
...
users:
groups:
default:
- admin
# Users sunstone
cat /etc/one/sunstone-server.conf
...
:host: user.sunstone.com
...
cat /etc/one/sunstone-views.yaml
...
users:
groups:
default:
- user
Branding the Sunstone Portal
You can easily add your logos to the login and main screens by updating the logo: attribute as follows:
The login screen is defined in /etc/one/sunstone-views.yaml.
The logo of the main UI screen is defined for each view in the view file.
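As a minimal sketch (the image path is hypothetical; place the file where your Sunstone instance serves its public
images):
# /etc/one/sunstone-views.yaml -- logo shown on the login screen
logo: images/my-company-logo.png
# /etc/one/sunstone-views/admin.yaml -- logo of the main screen for this view
logo: images/my-company-logo.png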
8.3 Self-service Cloud View
This is a simplified view intended for cloud consumers that just require a portal where they can provision new virtual
machines easily. To create new VMs, they just have to select one of the available templates prepared by the
administrators.
In this scenario the cloud administrator, or the vDC administrator, must prepare a set of templates and images and make
them available to the cloud users. These Templates must be ready to be instantiated, i.e. they define all the mandatory
attributes. Before using them, users can optionally customize the VM capacity and add new network interfaces.
8.3.1 How to Prepare the Templates
When launching a new VM, users are required to select a Template. These templates should be prepared by the cloud
or vDC administrator. Make sure that any Image or Network referenced by the Template can also be used by the users.
Read more about how to prepare resources for end users in the Adding Content to Your Cloud guide.
8.3.2 How to Enable
The cloud view is enabled by default for all users. If you want to disable it, or enable it just for certain groups, proceed
to the Sunstone Views documentation.
Note: Any user can change the current view in the Sunstone settings. Administrators can use this view without any
problem if they find it easier to manage their VMs.
8.4 vDC Admin View
The role of a vDC Admin is to manage all the virtual resources of the vDC, including the creation of new users. When
one of these vDC Admin users accesses Sunstone, they get a limited view of the cloud, but one more complete than what
end users get with the Cloud View.
You can read more about OpenNebula's approach to vDCs and the cloud from the perspective of different user roles
in the Understanding OpenNebula guide.
8.4.1 Manage Users
The vDC Admin can create new user accounts, which will belong to the same vDC group. They can also see the current
resource usage of other users, and set quota limits for each one of them.
8.4.2 Manage Resources
Admins can manage the VMs and Images of other users in the vDC.
8.4.3 Create Machines
To create new Virtual Machines, the vDC Admin must change his current view to the cloud view. This can be done
in the settings menu, accessible from the username button at the top.
The Cloud View is self-explanatory.
8.4.4 Prepare Resources for Other Users
Any user of the Cloud View can save the changes made to a VM back to a new Template. vDC Admins can, for example,
instantiate a clean VM prepared by the cloud administrator, install software needed by other users in their vDC, and
make it available to the rest of the group.
The save operation from the Cloud View will create a new Template and Image. These can be managed by changing back
to the vdcadmin view from the settings.
The admin must change the Group Use permission checkbox for both the new Template and Image.
Alternatively, the new template and image can be assigned to a specific user. This is done by changing the owner.
8.4.5 Manage the Infrastructure
Although vDC admins can't manage the physical infrastructure, they have a limited amount of information about the
storage and the networks assigned to the vDC.
8.5 User Security and Authentication
By default Sunstone works with the core authentication method (user and password), although you can configure any
authentication mechanism supported by OpenNebula. In this guide you will learn how to enable other authentication
methods and how to secure the Sunstone connections through SSL.
8.5.1 Authentication Methods
Authentication is two-folded:
Web client and Sunstone server. Authentication is based on the credentials stored in the OpenNebula database
for the user. Depending on the type of these credentials the authentication method can be: basic, x509 and
opennebula (supporting LDAP or other custom methods).
Sunstone server and OpenNebula core. The requests of a user are forwarded to the core daemon, including the
original user name. Each request is signed with the credentials of a special server user. This authentication
mechanism is based either on symmetric-key cryptography (default) or x509 certificates. Details on how to
configure these methods can be found in the Cloud Authentication guide.
The following sections detail the client-to-Sunstone server authentication methods.
Basic Auth
In the basic mode, username and password are matched to those in OpenNebula's database in order to authorize the
user at the time of login. Rack cookie-based sessions are then used to authenticate and authorize the requests.
To enable this login method, set the :auth: option of /etc/one/sunstone-server.conf to sunstone:
:auth: sunstone
OpenNebula Auth
Using this method, the credentials included in the header will be sent to the OpenNebula core and the authentication
will be delegated to the OpenNebula auth system, using the specified driver for that user. Therefore any OpenNebula
auth driver can be used through this method to authenticate the user (e.g. LDAP). The Sunstone configuration is:
:auth: opennebula
x509 Auth
This method performs the login to OpenNebula based on an x509 certificate DN (Distinguished Name). The DN is
extracted from the certificate and matched to the password value in the user database.
The user password has to be changed by running one of the following commands:
oneuser chauth new_user x509 "/C=ES/O=ONE/OU=DEV/CN=clouduser"
or the same command using a certificate file:
oneuser chauth new_user --x509 --cert /tmp/my_cert.pem
New users with this authentication method should be created as follows:
oneuser create new_user "/C=ES/O=ONE/OU=DEV/CN=clouduser" --driver x509
or using a certificate file:
oneuser create new_user --x509 --cert /tmp/my_cert.pem
To enable this login method, set the :auth: option of /etc/one/sunstone-server.conf to x509:
:auth: x509
The login screen will no longer display the username and password fields, as all the information is fetched from the
user certificate:
Note that OpenNebula will not verify that the user is holding a valid certificate at the time of login: this is expected
to be done by the external container of the Sunstone server (normally Apache), whose job is to tell the user's browser
that the site requires a user certificate and to check that the certificate is consistently signed by the chosen Certificate
Authority (CA).
Warning: The Sunstone x509 auth method only handles the authentication of the user at the time of login. Authentication
of the user certificate is a complementary setup, which can rely on Apache.
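As a sketch of that complementary Apache setup (all file paths are placeholders; adjust them to your deployment),
the virtual host proxying Sunstone could request and verify the client certificate with mod_ssl directives like:
# Hypothetical Apache vhost fragment: require a client certificate signed by your CA
SSLEngine             on
SSLCertificateFile    /etc/ssl/certs/sunstone-server.pem
SSLCertificateKeyFile /etc/ssl/private/sunstone-server.key
SSLCACertificateFile  /etc/ssl/certs/cacert.pem
SSLVerifyClient       require
SSLVerifyDepth        2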
8.5.2 Configuring an SSL Proxy
OpenNebula Sunstone runs natively just on normal HTTP connections. If the extra security provided by SSL is needed,
a proxy can be set up to handle the SSL connection: it forwards the petition to the Sunstone server and takes the
answer back to the client.
This setup needs:
A server certificate for the SSL connections
An HTTP proxy that understands SSL
OpenNebula Sunstone configuration to accept petitions from the proxy
If you want to try out the SSL setup easily, the following lines contain an example that sets a self-signed certificate
to be used by a web server configured to act as an HTTP proxy to a correctly configured OpenNebula Sunstone.
Let's assume the server where the proxy is going to be started is called cloudserver.org. Therefore, the steps
are:
Step 1: Server Certificate (Snakeoil)
We are going to generate a snakeoil certificate. If using an Ubuntu system, follow the next steps (otherwise your
mileage may vary, but not a lot):
Install the ssl-cert package
$ sudo apt-get install ssl-cert
Generate the certificate
$ sudo /usr/sbin/make-ssl-cert generate-default-snakeoil
As we are using lighttpd, we need to append the private key to the certificate to obtain a server certificate valid
for lighttpd:
$ sudo cat /etc/ssl/private/ssl-cert-snakeoil.key /etc/ssl/certs/ssl-cert-snakeoil.pem > /etc/lighttpd/server.pem
Step 2: SSL HTTP Proxy
lighttpd
You will need to edit the /etc/lighttpd/lighttpd.conf configuration file and:
Add the following modules (if not present already)
mod_access
mod_alias
mod_proxy
mod_accesslog
mod_compress
Change the server port to 443 if you are going to run lighttpd as root, or any number above 1024 otherwise:
server.port = 8443
Add the proxy module section:
#### proxy module
## read proxy.txt for more info
proxy.server = ( "" =>
("" =>
(
"host" => "127.0.0.1",
"port" => 9869
)
)
)
#### SSL engine
ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/server.pem"
The host must be the hostname of the computer running the Sunstone server, and the port must be the one the
Sunstone server is running on.
nginx
You will need to configure a new virtual host in nginx. Depending on the operating system and
the method of installation, nginx loads virtual host configurations from either /etc/nginx/conf.d or
/etc/nginx/sites-enabled.
A sample cloudserver.org virtual host is presented next:
#### OpenNebula Sunstone upstream
upstream sunstone {
server 127.0.0.1:9869;
}
#### cloudserver.org HTTP virtual host
server {
listen 80;
server_name cloudserver.org;
### Permanent redirect to HTTPS (optional)
return 301 https://$server_name:8443;
}
#### cloudserver.org HTTPS virtual host
server {
listen 8443;
server_name cloudserver.org;
### SSL Parameters
ssl on;
ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
### Proxy requests to upstream
location / {
proxy_pass http://sunstone;
}
}
The IP address and port number used in upstream must be the ones of the server Sunstone is running on. On typical
installations the nginx master process runs as root, so you don't need to modify the HTTPS port.
Step 3: Sunstone Configuration
Start the Sunstone server using the default values; this way the server will be listening at localhost:9869.
Once the proxy server is started, OpenNebula Sunstone requests using HTTPS URIs can be directed to
https://cloudserver.org:8443. They will then be unencrypted, passed to localhost, port 9869, satisfied
(hopefully), encrypted again and then passed back to the client.
8.6 Cloud Servers Authentication
OpenNebula ships with three servers: Sunstone, EC2 and OCCI. When a user interacts with one of them, the server
authenticates the request and then forwards the requested operation to the OpenNebula daemon.
The forwarded requests between the servers and the core daemon include the original user name, and are signed with
the credentials of a special server user.
This guide explains this request-forwarding mechanism, and how it is secured with a symmetric-key algorithm
or x509 certificates.
8.6.1 Server Users
The Sunstone, EC2 and OCCI services communicate with the core using a server user. OpenNebula creates the
serveradmin account at bootstrap, with the authentication driver server_cipher (symmetric key).
This server user uses a special authentication mechanism that allows the servers to perform an operation on behalf
of another user.
You can strengthen the security of the requests from the servers to the core daemon by changing the server user's driver
to server_x509. This is especially relevant if you are running your server on a machine other than the frontend.
Please note that you can have as many users with a server_* driver as you need. For example, you may want to have
Sunstone configured with a user with the server_x509 driver, and EC2 with server_cipher.
8.6.2 Symmetric Key
Enable
This mechanism is enabled by default; you will have a user named serveradmin with the driver server_cipher.
To use it, you need a user with the driver server_cipher. Enable it in the relevant configuration file in /etc/one:
Sunstone: /etc/one/sunstone-server.conf
EC2: /etc/one/econe.conf
OCCI: /etc/one/occi-server.conf
:core_auth: cipher
Configure
You must update the configuration files in /var/lib/one/.one if you change serveradmin's password, or
create a different user with the server_cipher driver.
$ ls -1 /var/lib/one/.one
ec2_auth
occi_auth
sunstone_auth
$ cat /var/lib/one/.one/sunstone_auth
serveradmin:1612b78a4843647a4b541346f678f9e1b43bbcf9
Warning: The serveradmin password is hashed in the database. You can use the --sha1 flag when issuing the
oneuser passwd command for this user.
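As a minimal sketch of rotating this credential (the new password s3cr3t is a placeholder; update every *_auth file
your deployment actually uses, then restart the servers):
$ oneuser passwd serveradmin --sha1 s3cr3t
$ echo 'serveradmin:s3cr3t' > /var/lib/one/.one/sunstone_auth
$ echo 'serveradmin:s3cr3t' > /var/lib/one/.one/ec2_auth
$ echo 'serveradmin:s3cr3t' > /var/lib/one/.one/occi_auth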
Warning: When Sunstone is running on a different machine than oned, you should use an SSL connection. This
can be achieved with an SSL proxy like stunnel or apache/nginx acting as a proxy. After securing the OpenNebula
XML-RPC connection, configure Sunstone to use HTTPS with the proxy port:
:one_xmlrpc: https://frontend:2634/RPC2
8.6.3 x509 Encryption
Enable
To enable it, change the authentication driver of the serveradmin user, or create a new user with the driver
server_x509:
$ oneuser chauth serveradmin server_x509
$ oneuser passwd serveradmin --x509 --cert usercert.pem
The serveradmin account should look like:
$ oneuser list
ID GROUP NAME AUTH PASSWORD
0 oneadmin oneadmin core c24783ba96a35464632a624d9f829136edc0175e
1 oneadmin serveradmin server_x /C=ES/O=ONE/OU=DEV/CN=server
You need to edit /etc/one/auth/server_x509_auth.conf and uncomment all the fields. The defaults
should work:
# User to be used for x509 server authentication
:srv_user: serveradmin
# Path to the certificate used by the OpenNebula Services
# Certificates must be in PEM format
:one_cert: "/etc/one/auth/cert.pem"
:one_key: "/etc/one/auth/pk.pem"
Copy the certificate and the private key to the paths set in :one_cert: and :one_key:, or simply update the
paths.
Then edit the relevant configuration file in /etc/one:
Sunstone: /etc/one/sunstone-server.conf
EC2: /etc/one/econe.conf
OCCI: /etc/one/occi-server.conf
:core_auth: x509
Configure
To trust the serveradmin certificate (/etc/one/auth/cert.pem if you used the default path), the CA's certificate
must be added to the ca_dir defined in /etc/one/auth/x509_auth.conf. See the x509 Authentication
guide for more information.
$ openssl x509 -noout -hash -in cacert.pem
78d0bbd8
$ sudo cp cacert.pem /etc/one/auth/certificates/78d0bbd8.0
8.6.4 Tuning & Extending
Files
You can find the drivers in these paths:
/var/lib/one/remotes/auth/server_cipher/authenticate
/var/lib/one/remotes/auth/server_x509/authenticate
Authentication Session String
OpenNebula users with the driver server_cipher or server_x509 use a special authentication session string (the first
parameter of the XML-RPC calls). A regular authentication token is of the form:
username:secret
Whereas a user with a server_* driver must use this token format:
username:target_username:secret
The core daemon understands a request with this authentication session token as "perform this operation on
behalf of target_username". The secret part of the token is signed with one of the two mechanisms explained
below.
CHAPTER
NINE
OTHER SUBSYSTEMS
9.1 MySQL Backend
The MySQL backend was introduced in OpenNebula 2.0 as an alternative to the SQLite backend available in previous
releases.
Either of them can be used seamlessly by the upper layers and ecosystem tools; these high-level components do not
need to be modified or configured.
The two backends cannot coexist: you will have to decide which one is going to be used while planning your
OpenNebula installation.
9.1.1 Building OpenNebula with MySQL Support
This section is only relevant if you are building OpenNebula from source. If you downloaded our compiled packages,
you can skip to Installation.
Requirements
An installation of the MySQL server database is required. For an Ubuntu distribution, the packages to install are:
libmysql++-dev
libxml2-dev
Also, you will need a working MySQL server install. For Ubuntu again, you can install the MySQL server with:
mysql-server-5.1
Compilation
To compile OpenNebula from source with MySQL support, you need to pass the following option to scons:
$ scons mysql=yes
Afterwards, installation proceeds normally; configuration needs to take into account the MySQL server details, and for
users of OpenNebula the DB backend is fully transparent.
9.1.2 Installation
First of all, you need a working MySQL server.
Of course, you can use any instance you have already deployed; OpenNebula can connect to machines other
than localhost.
You can also configure two different OpenNebula installations to use the same MySQL server. In this case, you have
to make sure that they use different database names.
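For instance, a sketch of the DB sections two frontends could use against one shared server (the host
db.example.org and the database names are illustrative):
# oned.conf on frontend A
DB = [ backend = "mysql", server = "db.example.org", port = 0,
       user = "oneadmin", passwd = "oneadmin", db_name = "opennebula_a" ]
# oned.conf on frontend B
DB = [ backend = "mysql", server = "db.example.org", port = 0,
       user = "oneadmin", passwd = "oneadmin", db_name = "opennebula_b" ]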
Configuring MySQL
In order to let OpenNebula use your existing MySQL database server, you need to add a new user and grant it privileges
on one new database. This new database doesn't need to exist; OpenNebula will create it the first time you run it.
Assuming you are going to use the default values, log in to your MySQL server and issue the following commands:
$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor. [...]
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'oneadmin';
Query OK, 0 rows affected (0.00 sec)
Warning: Remember to choose different values, at least for the password.
Warning: GRANT ALL PRIVILEGES ON <db_name>.* TO '<user>' IDENTIFIED BY '<passwd>'
Visit the MySQL documentation to learn how to manage accounts.
Configuring OpenNebula
Before you run OpenNebula, you need to set in oned.conf the connection details and the database you have granted
privileges on.
# Sample configuration for MySQL
DB = [ backend = "mysql",
server = "localhost",
port = 0,
user = "oneadmin",
passwd = "oneadmin",
db_name = "opennebula" ]
Fields:
server: hostname or IP address of the machine running the MySQL server
port: port for the connection to the server. If set to 0, the default port is used.
user: MySQL user name
passwd: MySQL password
db_name: name of the MySQL database OpenNebula will use
9.1.3 Using OpenNebula with MySQL
After this installation and configuration process you can use OpenNebula as usual.
CHAPTER
TEN
REFERENCES
10.1 ONED Configuration
The OpenNebula daemon oned manages the cluster nodes, virtual networks, virtual machines, users, groups and
storage datastores. The configuration file for the daemon is called oned.conf and it is placed inside the /etc/one
directory. This reference document describes the format and all the options that can be specified in oned.conf.
10.1.1 Daemon Configuration Attributes
MANAGER_TIMER: Time in seconds the core uses to evaluate periodical functions. MONITORING_INTERVAL
cannot have a smaller value than MANAGER_TIMER.
MONITORING_INTERVAL: Time in seconds between each monitoring cycle.
MONITORING_THREADS: Max. number of threads used to process monitor messages.
HOST_PER_INTERVAL: Number of hosts monitored in each interval.
HOST_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring information. Use 0 to
disable HOST monitoring recording.
VM_INDIVIDUAL_MONITORING: VM monitoring information is obtained along with the host information.
For some custom monitor drivers you may need to activate the individual VM monitoring process.
VM_PER_INTERVAL: Number of VMs monitored in each interval.
VM_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring information. Use 0 to
disable VM monitoring recording.
SCRIPTS_REMOTE_DIR: Remote path to store the monitoring and VM management scripts.
PORT: Port where oned will listen for XML-RPC calls.
DB: Vector of configuration attributes for the database backend.
backend: Set to sqlite or mysql. Please visit the MySQL configuration guide for more information.
server (MySQL only): Host name or IP address of the MySQL server.
user (MySQL only): MySQL user's login ID.
passwd (MySQL only): MySQL user's password.
db_name (MySQL only): MySQL database name.
VNC_BASE_PORT: VNC ports for VMs can be automatically set to VNC_BASE_PORT + VMID. Refer to the
VM template reference for further information.
VM_SUBMIT_ON_HOLD: Forces VMs to be created in the hold state instead of pending. Values: YES or NO.
LOG: Configures the logging system
SYSTEM: Can be either file (default) or syslog.
DEBUG_LEVEL: Sets the verbosity of the log messages. Possible values are:
DEBUG_LEVEL Meaning
0 ERROR
1 WARNING
2 INFO
3 DEBUG
Example of this section:
#*******************************************************************************
# Daemon configuration attributes
#*******************************************************************************
LOG = [
system = "file",
debug_level = 3
]
#MANAGER_TIMER = 30
MONITORING_INTERVAL = 60
MONITORING_THREADS = 50
#HOST_PER_INTERVAL = 15
#HOST_MONITORING_EXPIRATION_TIME = 43200
#VM_INDIVIDUAL_MONITORING = "no"
#VM_PER_INTERVAL = 5
#VM_MONITORING_EXPIRATION_TIME = 14400
SCRIPTS_REMOTE_DIR=/var/tmp/one
PORT = 2633
DB = [ backend = "sqlite" ]
# Sample configuration for MySQL
# DB = [ backend = "mysql",
# server = "localhost",
# port = 0,
# user = "oneadmin",
# passwd = "oneadmin",
# db_name = "opennebula" ]
VNC_BASE_PORT = 5900
#VM_SUBMIT_ON_HOLD = "NO"
10.1.2 Federation Configuration Attributes
These attributes control the federation capabilities of oned. Operation in a federated setup requires a special DB
configuration.
FEDERATION: Federation attributes.
MODE: Operation mode of this oned.
STANDALONE: not federated. This is the default operational mode.
MASTER: this oned is the master zone of the federation.
SLAVE: this oned is a slave zone.
ZONE_ID: The zone ID, as returned by the onezone command.
MASTER_ONED: The XML-RPC endpoint of the master oned, e.g. http://master.one.org:2633/RPC2
#*******************************************************************************
# Federation configuration attributes
#*******************************************************************************
FEDERATION = [
MODE = "STANDALONE",
ZONE_ID = 0,
MASTER_ONED = ""
]
10.1.3 XML-RPC Server Configuration
MAX_CONN: Maximum number of simultaneous TCP connections the server will maintain
MAX_CONN_BACKLOG: Maximum number of TCP connections the operating system will accept on the server's
behalf without the server accepting them from the operating system
KEEPALIVE_TIMEOUT: Maximum time in seconds that the server allows a connection to be open between
RPCs
KEEPALIVE_MAX_CONN: Maximum number of RPCs that the server will execute on a single connection
TIMEOUT: Maximum time in seconds the server will wait for the client to do anything while processing an RPC
RPC_LOG: Create a separate log file for XML-RPC requests, in /var/log/one/one_xmlrpc.log
MESSAGE_SIZE: Buffer size in bytes for XML-RPC responses. Only relevant for federation slave zones.
#*******************************************************************************
# XML-RPC server configuration
#*******************************************************************************
#MAX_CONN = 15
#MAX_CONN_BACKLOG = 15
#KEEPALIVE_TIMEOUT = 15
#KEEPALIVE_MAX_CONN = 30
#TIMEOUT = 15
#RPC_LOG = NO
#MESSAGE_SIZE = 1073741824
Warning: This functionality is only available when compiled with xmlrpc-c libraries >= 1.32. Currently only the
packages distributed by OpenNebula are linked with this library.
10.1.4 Virtual Networks
NETWORK_SIZE: Default size for virtual networks
MAC_PREFIX: Default MAC prefix used to generate virtual network MAC addresses
Sample configuration:
#*******************************************************************************
# Physical Networks configuration
#*******************************************************************************
NETWORK_SIZE = 254
MAC_PREFIX = "02:00"
10.1.5 Datastores
The Storage Subsystem allows users to set up images, which can be operating systems or data, to be used in Virtual
Machines easily. These images can be used by several Virtual Machines simultaneously, and also shared with other
users.
Here you can configure the default values for the Datastore and Image templates. There is more information about
the template syntax here.
DATASTORE_LOCATION: Path for Datastores in the hosts. It is the same for all the hosts in the cluster.
DATASTORE_LOCATION is only for the hosts, not the front-end. It defaults to /var/lib/one/datastores (or
$ONE_LOCATION/var/datastores in self-contained mode).
DATASTORE_BASE_PATH: This is the base path for the SOURCE attribute of the images registered in a
Datastore. This is a default value that can be changed when the datastore is created.
DATASTORE_CAPACITY_CHECK: Checks that there is enough capacity before creating a new image. Defaults
to Yes.
DEFAULT_IMAGE_TYPE: Default value for the TYPE field when it is omitted in a template. Accepted values are
OS, CDROM, DATABLOCK.
DEFAULT_DEVICE_PREFIX: Default value for the DEV_PREFIX field when it is omitted in a template. The
missing DEV_PREFIX attribute is filled in when Images are created, so changing this prefix won't affect existing
Images. It can be set to:
Prefix Device type
hd IDE
sd SCSI
xvd XEN Virtual Disk
vd KVM virtual disk
DEFAULT_CDROM_DEVICE_PREFIX: Same as above but for CDROM devices.
More information on the image repository can be found in the Managing Virtual Machine Images guide.
Sample configuration:
#*******************************************************************************
# Image Repository Configuration
#*******************************************************************************
#DATASTORE_LOCATION = /var/lib/one/datastores
#DATASTORE_BASE_PATH = /var/lib/one/datastores
DATASTORE_CAPACITY_CHECK = "yes"
DEFAULT_IMAGE_TYPE = "OS"
DEFAULT_DEVICE_PREFIX = "hd"
DEFAULT_CDROM_DEVICE_PREFIX = "hd"
10.1.6 Information Collector
This driver CANNOT BE ASSIGNED TO A HOST, and needs to be used with the KVM or Xen drivers. Options that can
be set:
-a: Address to bind the collectd socket (defaults to 0.0.0.0)
-p: UDP port to listen for monitor information (default: 4124)
-f: Interval in seconds to flush collected information (default: 5)
-t: Number of threads for the server (default: 50)
-i: Time in seconds of the monitoring push cycle. This parameter must be smaller than
MONITORING_INTERVAL, otherwise push monitoring will not be effective.
Sample configuration:
IM_MAD = [
name = "collectd",
executable = "collectd",
arguments = "-p 4124 -f 5 -t 50 -i 20" ]
10.1.7 Information Drivers
The information drivers are used to gather information from the cluster nodes, and they depend on the virtualizer you
are using. You can define more than one information manager, but make sure each has a different name. To define one,
the following needs to be set:
name: name for this information driver.
executable: path of the information driver executable; can be an absolute path or relative to
/usr/lib/one/mads/
arguments: for the driver executable, usually a probe configuration file; can be an absolute path or relative to
/etc/one/.
For more information on configuring the information and monitoring system and hints to extend it, please check the
information driver configuration guide.
Sample configuration:
#-------------------------------------------------------------------------------
# KVM Information Driver Manager Configuration
# -r number of retries when monitoring a host
# -t number of threads, i.e. number of hosts monitored at the same time
#-------------------------------------------------------------------------------
IM_MAD = [
name = "kvm",
executable = "one_im_ssh",
arguments = "-r 0 -t 15 kvm" ]
#-------------------------------------------------------------------------------
10.1.8 Virtualization Drivers
The virtualization drivers are used to create, control and monitor VMs on the hosts. You can define more than one
virtualization driver (e.g. you have different virtualizers in several hosts), but make sure they have different names. To
define one, the following needs to be set:
name: name of the virtualization driver.
executable: path of the virtualization driver executable; can be an absolute path or relative to
/usr/lib/one/mads/
arguments: for the driver executable
type: driver type; supported drivers: xen, kvm or xml
default: default values and configuration parameters for the driver; can be an absolute path or relative to
/etc/one/
For more information on configuring and setting up the virtualizer please check the guide that suits you:
Xen Adaptor
KVM Adaptor
VMware Adaptor
Sample configuration:
#-------------------------------------------------------------------------------
# Virtualization Driver Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
name = "kvm",
executable = "one_vmm_ssh",
arguments = "-t 15 -r 0 kvm",
default = "vmm_ssh/vmm_ssh_kvm.conf",
type = "kvm" ]
10.1.9 Transfer Driver
The transfer drivers are used to transfer, clone, remove and create VM images. The default TM_MAD driver includes
plugins for all supported storage modes. You may need to modify the TM_MAD to add custom plugins.
executable: path of the transfer driver executable; can be an absolute path or relative to
/usr/lib/one/mads/
arguments: for the driver executable:
-t: number of threads, i.e. number of transfers made at the same time
-d: list of transfer drivers separated by commas; if not defined, all the available drivers will be enabled
For more information on configuring different storage alternatives please check the storage configuration guide.
Sample configuration:
#-------------------------------------------------------------------------------
# Transfer Manager Driver Configuration
#-------------------------------------------------------------------------------
TM_MAD = [
executable = "one_tm",
arguments = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,vmfs,ceph" ]
The configuration for each driver is defined in the TM_MAD_CONF section. These values are used when creating a
new datastore and should not be modified, since they define the datastore behaviour.
name: name of the transfer driver, listed in the -d option of the TM_MAD section
ln_target: determines how persistent images will be cloned when a new VM is instantiated.
NONE: The image will be linked, and no more storage capacity will be used
SELF: The image will be cloned in the Images datastore
SYSTEM: The image will be cloned in the System datastore
clone_target: determines how non-persistent images will be cloned when a new VM is instantiated.
NONE: The image will be linked, and no more storage capacity will be used
SELF: The image will be cloned in the Images datastore
SYSTEM: The image will be cloned in the System datastore
shared: determines whether the storage holding the system datastore is shared among the different hosts. Valid
values: yes or no.
Sample configuration:
TM_MAD_CONF = [
name = "lvm",
ln_target = "NONE",
clone_target= "SELF",
shared = "yes"
]
TM_MAD_CONF = [
name = "shared",
ln_target = "NONE",
clone_target= "SYSTEM",
shared = "yes"
]
10.1.10 Datastore Driver
The Datastore Driver defines a set of scripts to manage the storage backend.
executable: path of the datastore driver executable; can be an absolute path or relative to
/usr/lib/one/mads/
arguments: for the driver executable
-t: number of threads, i.e. number of repo operations at the same time
-d: datastore mads separated by commas
Sample configuration:
DATASTORE_MAD = [
executable = "one_datastore",
arguments = "-t 15 -d dummy,fs,vmfs,lvm,ceph"
]
For more information on this Driver and how to customize it, please visit its reference guide.
10.1.11 Hook System
Hooks in OpenNebula are programs (usually scripts) whose execution is triggered by a change of state in Virtual
Machines or Hosts. The hooks can be executed either locally or remotely in the node where the VM or Host is
running. To configure the Hook System, the following needs to be set in the OpenNebula configuration file:
executable: path of the hook driver executable; can be an absolute path or relative to /usr/lib/one/mads/
arguments: for the driver executable; can be an absolute path or relative to /etc/one/
Sample configuration:
HM_MAD = [
executable = "one_hm" ]
Virtual Machine Hooks (VM_HOOK) are defined by:
name: for the hook; useful to track the hook (OPTIONAL).
on: when the hook should be executed,
CREATE, when the VM is created (onevm create)
PROLOG, when the VM is in the prolog state
RUNNING, after the VM is successfully booted
UNKNOWN, when the VM is in the unknown state
SHUTDOWN, after the VM is shut down
STOP, after the VM is stopped (including VM image transfers)
DONE, after the VM is deleted or shut down
FAILED, when the VM enters the failed state
CUSTOM, a user-defined combination of STATE and LCM_STATE values to trigger the hook
command: path can be absolute or relative to /usr/share/one/hooks
arguments: for the hook. You can access VM information with $:
$ID, the ID of the virtual machine
$TEMPLATE, the VM template, in XML and base64 encoded
PREV_STATE, the previous STATE of the Virtual Machine
PREV_LCM_STATE, the previous LCM STATE of the Virtual Machine
remote: values,
YES, the hook is executed in the host where the VM was allocated
NO, the hook is executed in the OpenNebula server (default)
Host Hooks (HOST_HOOK) are defined by:
name: for the hook; useful to track the hook (OPTIONAL)
on: when the hook should be executed,
CREATE, when the Host is created (onehost create)
ERROR, when the Host enters the error state
DISABLE, when the Host is disabled
command: path can be absolute or relative to /usr/share/one/hooks
arguments: for the hook. You can use the following Host information:
$ID, the ID of the host
$TEMPLATE, the Host template, in XML and base64 encoded
remote: values,
YES, the hook is executed in the host
NO, the hook is executed in the OpenNebula server (default)
Sample configuration:
VM_HOOK = [
name = "on_failure_recreate",
on = "FAILED",
command = "/usr/bin/env onevm delete --recreate",
arguments = "$ID" ]
VM_HOOK = [
name = "advanced_hook",
on = "CUSTOM",
state = "ACTIVE",
lcm_state = "BOOT_UNKNOWN",
command = "log.rb",
arguments = "$ID $PREV_STATE $PREV_LCM_STATE" ]
10.1.12 Auth Manager Configuration
AUTH_MAD: The driver that will be used to authenticate and authorize OpenNebula requests. If not defined,
OpenNebula will use the built-in auth policies
executable: path of the auth driver executable; can be an absolute path or relative to /usr/lib/one/mads/
authn: list of authentication modules separated by commas; if not defined, all the available modules will
be enabled
authz: list of authorization modules separated by commas
SESSION_EXPIRATION_TIME: Time in seconds to keep an authenticated token as valid. During this time,
the driver is not used. Use 0 to disable session caching
ENABLE_OTHER_PERMISSIONS: Whether or not to enable the permissions for other. Users in the
oneadmin group will still be able to change these permissions. Values: YES or NO
DEFAULT_UMASK: Similar to the Unix umask; sets the default resource permissions. Its format must be 3 octal
digits. For example, a umask of 137 will set the new object's permissions to 640 (u=rw-, g=r--, o=---)
Sample configuration:
AUTH_MAD = [
executable = "one_auth_mad",
authn = "ssh,x509,ldap,server_cipher,server_x509"
]
SESSION_EXPIRATION_TIME = 900
#ENABLE_OTHER_PERMISSIONS = "YES"
DEFAULT_UMASK = 177
10.1.13 Restricted Attributes Configuration
VM_RESTRICTED_ATTR: Virtual Machine attribute to be restricted for users outside the oneadmin group
IMAGE_RESTRICTED_ATTR: Image attribute to be restricted for users outside the oneadmin group
Sample configuration:
VM_RESTRICTED_ATTR = "CONTEXT/FILES"
VM_RESTRICTED_ATTR = "NIC/MAC"
VM_RESTRICTED_ATTR = "NIC/VLAN_ID"
VM_RESTRICTED_ATTR = "NIC/BRIDGE"
#VM_RESTRICTED_ATTR = "RANK"
#VM_RESTRICTED_ATTR = "SCHED_RANK"
#VM_RESTRICTED_ATTR = "REQUIREMENTS"
#VM_RESTRICTED_ATTR = "SCHED_REQUIREMENTS"
IMAGE_RESTRICTED_ATTR = "SOURCE"
10.1.14 Inherited Attributes Configuration
The following attributes will be copied from the resource template to the instantiated VMs. More than one attribute
can be defined.
INHERIT_IMAGE_ATTR: Attribute to be copied from the Image template to each VM/DISK.
INHERIT_DATASTORE_ATTR: Attribute to be copied from the Datastore template to each VM/DISK.
INHERIT_VNET_ATTR: Attribute to be copied from the Network template to each VM/NIC.
Sample configuration:
#INHERIT_IMAGE_ATTR = "EXAMPLE"
#INHERIT_IMAGE_ATTR = "SECOND_EXAMPLE"
#INHERIT_DATASTORE_ATTR = "COLOR"
#INHERIT_VNET_ATTR = "BANDWIDTH_THROTTLING"
INHERIT_DATASTORE_ATTR = "CEPH_HOST"
INHERIT_DATASTORE_ATTR = "CEPH_SECRET"
INHERIT_DATASTORE_ATTR = "CEPH_USER"
INHERIT_VNET_ATTR = "VLAN_TAGGED_ID"
10.1.15 OneGate Configuration
ONEGATE_ENDPOINT: Endpoint where OneGate will be listening. Optional.
Sample configuration:
ONEGATE_ENDPOINT = "http://192.168.0.5:5030"
10.2 Scheduler
The Scheduler module is in charge of the assignment between pending Virtual Machines and known Hosts. OpenNebula's
architecture defines this module as a separate process that can be started independently of oned. The
OpenNebula scheduling framework is designed in a generic way, so it is highly modifiable and can be easily replaced
by third-party developments.
10.2.1 The Match-making Scheduler
OpenNebula comes with a match-making scheduler (mm_sched) that implements the Rank Scheduling Policy. The
goal of this policy is to prioritize those resources more suitable for the VM.
The match-making algorithm works as follows:
Each disk of a running VM consumes storage from an Image Datastore. The VMs that require more storage
than is currently available are filtered out, and will remain in the pending state.
Those hosts that do not meet the VM requirements (see the SCHED_REQUIREMENTS attribute) or do not have
enough resources (available CPU and memory) to run the VM are filtered out (see below for more information).
The same happens for System Datastores: the ones that do not meet the DS requirements (see the
SCHED_DS_REQUIREMENTS attribute) or do not have enough free storage are filtered out.
The SCHED_RANK and SCHED_DS_RANK expressions are evaluated upon the Host and Datastore list using
the information gathered by the monitor drivers. Any variable reported by the monitor driver (or manually set
in the Host or Datastore template) can be included in the rank expressions.
Those resources with a higher rank are used first to allocate VMs.
This scheduler algorithm easily allows the implementation of several placement heuristics (see below) depending on
the RANK expressions used.
Configuring the Scheduling Policies
The policy used to place a VM can be configured in two places (see the sketch after this list):
For each VM, as defined by the SCHED_RANK and SCHED_DS_RANK attributes in the VM template.
Globally for all the VMs, in the sched.conf file.
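As a sketch of the per-VM option (the requirement and rank expressions are illustrative, not mandatory values),
a VM template could include:
SCHED_REQUIREMENTS = "HYPERVISOR = \"kvm\""
SCHED_RANK         = "FREE_CPU"    # prefer the least loaded suitable host
SCHED_DS_RANK      = "FREE_MB"     # prefer the emptiest suitable system datastore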
Re-Scheduling Virtual Machines
When a VM is in the running state it can be rescheduled. By issuing the onevm resched command, the VM's
rescheduling flag is set. In a subsequent scheduling interval, the VM will be considered for rescheduling if:
There is a suitable host for the VM
The VM is not already running in it
This feature can be used by other components to trigger a rescheduling action when certain conditions are met.
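For example, to flag VM 23 (an arbitrary ID) for rescheduling:
$ onevm resched 23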
Scheduling VM Actions
Users can schedule one or more VM actions to be executed at a certain date and time. The onevm command's schedule
option will add a new SCHED_ACTION attribute to the Virtual Machine's editable template. Visit the VM guide for
more information.
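As a sketch (the VM ID and date are illustrative):
$ onevm shutdown 0 --schedule "05/25 17:45"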
10.2.2 Configuration
The behavior of the scheduler can be tuned to adapt it to your infrastructure with the following configuration parameters,
defined in /etc/one/sched.conf:
MESSAGE_SIZE: Buffer size in bytes for XML-RPC responses.
ONED_PORT: Port to connect to the OpenNebula daemon oned (Default: 2633)
SCHED_INTERVAL: Seconds between two scheduling actions (Default: 30)
MAX_VM: Maximum number of Virtual Machines scheduled in each scheduling action (Default: 5000). Use 0
to schedule all pending VMs each time.
MAX_DISPATCH: Maximum number of Virtual Machines actually dispatched to a host in each scheduling
action (Default: 30)
MAX_HOST: Maximum number of Virtual Machines dispatched to a given host in each scheduling action
(Default: 1)
LIVE_RESCHEDS: Perform live (1) or cold (0) migrations when rescheduling a VM
DEFAULT_SCHED: Definition of the default scheduling algorithm.
RANK: Arithmetic expression to rank suitable hosts based on their attributes.
POLICY: A predefined policy; it can be set to:
POLICY DESCRIPTION
0 Packing: Minimize the number of hosts in use by packing the VMs in the hosts to reduce VM
fragmentation
1 Striping: Maximize resources available to the VMs by spreading the VMs across the hosts
2 Load-aware: Maximize resources available to the VMs by using those nodes with less load
3 Custom: Use a custom RANK
4 Fixed: Hosts will be ranked according to the PRIORITY attribute found in the Host or Cluster
template
DEFAULT_DS_SCHED: Definition of the default storage scheduling algorithm.
RANK: Arithmetic expression to rank suitable datastores based on their attributes.
POLICY: A predefined policy; it can be set to:
POLICY DESCRIPTION
0 Packing: Tries to optimize storage usage by selecting the DS with less free space
1 Striping: Tries to optimize I/O by distributing the VMs across datastores
2 Custom: Use a custom RANK
3 Fixed: Datastores will be ranked according to the PRIORITY attribute found in the Datastore
template
The optimal values of the scheduler parameters depend on the hypervisor, storage subsystem and number of physical
hosts. The values can be derived by finding out the maximum number of VMs that can be started in your setup without
getting hypervisor-related errors.
Sample Configuration:
MESSAGE_SIZE = 1073741824
ONED_PORT = 2633
SCHED_INTERVAL = 30
MAX_VM = 5000
MAX_DISPATCH = 30
MAX_HOST = 1
LIVE_RESCHEDS = 0
DEFAULT_SCHED = [
policy = 3,
rank = "- (RUNNING_VMS * 50 + FREE_CPU)"
]
DEFAULT_DS_SCHED = [
policy = 1
]
Pre-defined Placement Policies
The following list describes the predefined policies (DEFAULT_SCHED) that can be configured through the
sched.conf file.
Packing Policy
Target: Minimize the number of cluster nodes in use
Heuristic: Pack the VMs in the cluster nodes to reduce VM fragmentation
Implementation: Use those nodes with more VMs running first
RANK = RUNNING_VMS
Striping Policy
Target: Maximize the resources available to VMs in a node
Heuristic: Spread the VMs across the cluster nodes
Implementation: Use those nodes with fewer VMs running first
RANK = "- RUNNING_VMS"
Load-aware Policy
Target: Maximize the resources available to VMs in a node
Heuristic: Use those nodes with less load
Implementation: Use those nodes with more FREE_CPU first
RANK = FREE_CPU
Fixed Policy
Target: Sort the hosts manually
Heuristic: Use the PRIORITY attribute
Implementation: Use those nodes with higher PRIORITY first
RANK = PRIORITY
Pre-defined Storage Policies
The following list describes the predefined storage policies (DEFAULT_DS_SCHED) that can be configured through
the sched.conf file.
Packing Policy
Tries to optimize storage usage by selecting the DS with less free space
Target: Minimize the number of system datastores in use
Heuristic: Pack the VMs in the system datastores to reduce VM fragmentation
Implementation: Use those datastores with less free space first
RANK = "- FREE_MB"
Striping Policy
Target: Maximize the I/O available to VMs
Heuristic: Spread the VMs across the system datastores
Implementation: Use those datastores with more free space first
RANK = "FREE_MB"
Fixed Policy
Target: Sort the datastores manually
Heuristic: Use the PRIORITY attribute
Implementation: Use those datastores with higher PRIORITY first
RANK = PRIORITY
10.2.3 Limiting the Resources Exposed by a Host
Prior to assigning a VM to a Host, the available capacity is checked to ensure that the VM fits in the host. The capacity
is obtained by the monitor probes. You may alter this behaviour by reserving an amount of capacity (memory and
cpu). You can reserve this capacity:
Cluster-wise, by updating the cluster template (e.g. onecluster update). All the hosts of the cluster will
reserve the same amount of capacity.
Host-wise, by updating the host template (e.g. onehost update). This value will override those defined at
cluster level.
In particular, the following capacity attributes can be reserved (see the sketch below):
RESERVED_CPU, in percentage. It will be subtracted from the TOTAL CPU.
RESERVED_MEM, in KB. It will be subtracted from the TOTAL MEM.
Note: These values can be negative; in that case you will actually be increasing the overall capacity, thus overcommitting
host capacity.
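For instance, a minimal sketch (host ID and amounts are illustrative) reserving one full CPU and 2 GB of memory
on host 3, by adding these attributes to its template:
$ onehost update 3
RESERVED_CPU = "100"       # percentage, subtracted from TOTAL CPU
RESERVED_MEM = "2097152"   # KB, subtracted from TOTAL MEM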
10.3 Logging & Debugging
OpenNebula provides logs for many resources. It supports two logging systems: file-based logging and syslog
logging.
In the case of file-based logging, OpenNebula keeps separate log files for each active component, all of them stored in
/var/log/one. To help users and administrators find and solve problems, they can also access some of the error
messages from the CLI or the Sunstone GUI.
With syslog the logging strategy is almost identical, except that the log messages change their format slightly,
following syslog conventions.
10.3.1 Configuring the Logging System
The logging system can be changed in /etc/one/oned.conf, specifically under the LOG section. Two parameters
can be changed: SYSTEM, which is either syslog or file (default), and DEBUG_LEVEL, which sets the logging
verbosity.
For the scheduler, the logging system can be changed in the exact same way. In this case the configuration is in
/etc/one/sched.conf.
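For example, to switch oned to syslog at debug verbosity, following the LOG section format shown earlier:
LOG = [
  system      = "syslog",
  debug_level = 3
]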
10.3.2 Log Resources
There are different log resources corresponding to different OpenNebula components:
ONE Daemon: The core component of OpenNebula dumps all its logging information into
/var/log/one/oned.log. Its verbosity is regulated by DEBUG_LEVEL in /etc/one/oned.conf.
By default the one start-up scripts will back up the last oned.log file using the current time, e.g.
oned.log.20121011151807. Alternatively, this resource can be logged to the syslog.
Scheduler: All the scheduler information is collected in the /var/log/one/sched.log file. This resource can also
be logged to the syslog.
Virtual Machines: The information specific to a VM will be dumped in the log file
/var/log/one/<vmid>.log. All VMs controlled by OpenNebula have their own folder,
/var/lib/one/vms/<VID> (or go to the syslog if enabled). You can find the following information in
it:
Deployment description files: Stored in deployment.<EXECUTION>, where <EXECUTION> is the
sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the
second and so on).
Transfer description files: Stored in transfer.<EXECUTION>.<OPERATION>, where
<EXECUTION> is the sequence number in the execution history of the VM and <OPERATION> is the stage
where the script was used, e.g. transfer.0.prolog, transfer.0.epilog, or transfer.1.cleanup.
Drivers: Each driver can have its ONE_MAD_DEBUG variable activated in its RC file. If so, error information
will be dumped to /var/log/one/name-of-the-driver-executable.log; log information of
the drivers is in oned.log.
10.3.3 Logging Format
The anatomy of an OpenNebula message for a file-based logging system is the following:
date [module][log_level]: message body
In the case of syslog it follows the standard:
date hostname process[pid]: [module][log_level]: message body
Where module is any of the internal OpenNebula components (VMM, ReM, TM, etc.) and log_level is a single
character indicating the log level: I for info, D for debug, etc.
For the syslog, OpenNebula will also log the Virtual Machine events like this:
date hostname process[pid]: [VM id][module][log_level]: message body
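As an illustration, the same informational message in both systems could look like this (the hostname and pid in
the syslog line are hypothetical):
Tue Jul 19 17:17:22 2011 [InM][I]: Monitoring host host01 (1)
Jul 19 17:17:22 frontend oned[2453]: [InM][I]: Monitoring host host01 (1)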
10.3.4 Virtual Machine Errors
Virtual Machine errors can be checked by the owner or an administrator using the onevm show output:
$ onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID : 0
NAME : one-0
USER : oneadmin
GROUP : oneadmin
STATE : FAILED
LCM_STATE : LCM_INIT
START TIME : 07/19 17:44:20
END TIME : 07/19 17:44:31
DEPLOY ID : -
VIRTUAL MACHINE MONITORING
NET_TX : 0
NET_RX : 0
USED MEMORY : 0
USED CPU : 0
VIRTUAL MACHINE TEMPLATE
182 Chapter 10. References
OpenNebula 4.6 Administration Guide, Release 4.6
CONTEXT=[
FILES=/tmp/some_file,
TARGET=hdb ]
CPU=0.1
ERROR=[
MESSAGE="Error excuting image transfer script: Error copying /tmp/some_file to /var/lib/one/0/images/isofiles",
TIMESTAMP="Tue Jul 19 17:44:31 2011" ]
MEMORY=64
NAME=one-0
VMID=0
VIRTUAL MACHINE HISTORY
SEQ HOSTNAME REASON START TIME PTIME
0 host01 erro 07/19 17:44:31 00 00:00:00 00 00:00:00
Here the error tells us that it could not copy a file; most probably the file does not exist.
Alternatively, you can also check the log files for the VM at /var/log/one/<vmid>.log.
10.3.5 Host Errors
Host errors can be checked by executing the onehost show command:
$ onehost show 1
HOST 1 INFORMATION
ID : 1
NAME : host01
STATE : ERROR
IM_MAD : im_kvm
VM_MAD : vmm_kvm
TM_MAD : tm_shared
HOST SHARES
MAX MEM : 0
USED MEM (REAL) : 0
USED MEM (ALLOCATED) : 0
MAX CPU : 0
USED CPU (REAL) : 0
USED CPU (ALLOCATED) : 0
RUNNING VMS : 0
MONITORING INFORMATION
ERROR=[
MESSAGE="Error monitoring host 1 : MONITOR FAILURE 1 Could not update remotes",
TIMESTAMP="Tue Jul 19 17:17:22 2011" ]
The error message appears in the ERROR value of the monitoring. To get more information you can check
/var/log/one/oned.log. For example, for this error we get in the log file:
Tue Jul 19 17:17:22 2011 [InM][I]: Monitoring host host01 (1)
Tue Jul 19 17:17:22 2011 [InM][I]: Command execution fail: scp -r /var/lib/one/remotes/. host01:/var/tmp/one
Tue Jul 19 17:17:22 2011 [InM][I]: ssh: Could not resolve hostname host01: nodename nor servname provided, or not known
Tue Jul 19 17:17:22 2011 [InM][I]: lost connection
Tue Jul 19 17:17:22 2011 [InM][I]: ExitCode: 1
Tue Jul 19 17:17:22 2011 [InM][E]: Error monitoring host 1 : MONITOR FAILURE 1 Could not update remotes
From the execution output we notice that the host name is not known; most likely the host name was misspelled.
10.4 Onedb Tool
This guide describes the onedb CLI tool. It can be used to get information from an OpenNebula database, upgrade
it, or fix inconsistency problems.
10.4.1 Connection Parameters
The command onedb can connect to any SQLite or MySQL database. Visit the onedb man page for a complete
reference. These are two examples for the default databases:
$ onedb <command> -v --sqlite /var/lib/one/one.db
$ onedb <command> -v -S localhost -u oneadmin -p oneadmin -d opennebula
10.4.2 onedb fsck
Checks the consistency of the DB, and fixes the problems found. For example, if the machine where OpenNebula is
running crashes, or loses connectivity with the database, you may have a wrong number of VMs running in a Host,
or incorrect usage quotas for some users.
$ onedb fsck --sqlite /var/lib/one/one.db
Sqlite database backup stored in /var/lib/one/one.db.bck
Use onedb restore or copy the file back to restore the DB.
Host 0 RUNNING_VMS has 12 is 11
Host 0 CPU_USAGE has 1200 is 1100
Host 0 MEM_USAGE has 1572864 is 1441792
Image 0 RUNNING_VMS has 6 is 5
User 2 quotas: CPU_USED has 12 is 11.0
User 2 quotas: MEMORY_USED has 1536 is 1408
User 2 quotas: VMS_USED has 12 is 11
User 2 quotas: Image 0 RVMS has 6 is 5
Group 1 quotas: CPU_USED has 12 is 11.0
Group 1 quotas: MEMORY_USED has 1536 is 1408
Group 1 quotas: VMS_USED has 12 is 11
Group 1 quotas: Image 0 RVMS has 6 is 5
Total errors found: 12
10.4.3 onedb version
Prints the current DB version.
$ onedb version --sqlite /var/lib/one/one.db
3.8.0
Use the -v flag to see the complete version and comment.
$ onedb version -v --sqlite /var/lib/one/one.db
Version: 3.8.0
Timestamp: 10/19 16:04:17
Comment: Database migrated from 3.7.80 to 3.8.0 (OpenNebula 3.8.0) by onedb command.
If the MySQL database password contains special characters, such as @ or #, the onedb command will fail to connect
to it.
The workaround is to temporarily change oneadmin's password to an ASCII string. The SET PASSWORD statement
can be used for this:
$ mysql -u oneadmin -p
mysql> SET PASSWORD = PASSWORD('newpass');
10.4.4 onedb history
Each time the DB is upgraded, the process is logged. You can use the history command to retrieve the upgrade
history.
$ onedb history -S localhost -u oneadmin -p oneadmin -d opennebula
Version: 3.0.0
Timestamp: 10/07 12:40:49
Comment: OpenNebula 3.0.0 daemon bootstrap
...
Version: 3.7.80
Timestamp: 10/08 17:36:15
Comment: Database migrated from 3.6.0 to 3.7.80 (OpenNebula 3.7.80) by onedb command.
Version: 3.8.0
Timestamp: 10/19 16:04:17
Comment: Database migrated from 3.7.80 to 3.8.0 (OpenNebula 3.8.0) by onedb command.
10.4.5 onedb upgrade
The upgrade process is fully documented in the Upgrading from Previous Versions guide.
10.4.6 onedb backup
Dumps the OpenNebula DB to a file.
$ onedb backup --sqlite /var/lib/one/one.db /tmp/my_backup.db
Sqlite database backup stored in /tmp/my_backup.db
Use onedb restore or copy the file back to restore the DB.
10.4.7 onedb restore
Restores the DB from a backup file. Please note that this tool will only restore backups generated from the same
backend, i.e. you cannot back up a SQLite database and then try to populate a MySQL one.
10.5 Datastore Configuration
Datastores can be parametrized with several attributes. The following list gives the meaning of the different
parameters for all the datastores.
RESTRICTED_DIRS: Paths not allowed for image importing
SAFE_DIRS: Paths allowed for image importing
NO_DECOMPRESS: Do not decompress downloaded images
LIMIT_TRANSFER_BW: Maximum bandwidth used to download images. By default it is expressed in
bytes/second, but you can use k, m and g for kilo/mega/gigabytes.
BRIDGE_LIST: List of hosts used for image actions. Used as a round-robin list.
POOL_NAME: Name of the Ceph pool to use
VG_NAME: Volume group to use
BASE_IQN: iSCSI base identifier
STAGING_DIR: Temporary directory where images are downloaded
DS_TMP_DIR: Temporary directory where images are downloaded
CEPH_HOST: Space-separated list of Ceph monitors
CEPH_SECRET: A generated UUID for a libvirt secret
LIMIT_MB: Limit, in MB, of the storage that OpenNebula will use for this datastore
BASE_PATH: The base path is generated automatically from the value in oned.conf. If the value is changed in
oned.conf, it won't apply to existing datastores; the administrator can change it manually for each datastore
with this attribute. Please note that BASE_PATH will only be used for new Images, the SOURCE of existing
Images will still use the previous BASE_PATH.
Not all of these parameters are meaningful for every datastore. Here is the matrix of parameters accepted by each one:
Parameter          ceph  fs   iscsi  lvm  vmfs
RESTRICTED_DIRS    yes   yes  yes    yes  yes
SAFE_DIRS          yes   yes  yes    yes  yes
NO_DECOMPRESS      yes   yes  yes    yes  yes
LIMIT_TRANSFER_BW  yes   yes  yes    yes  yes
BRIDGE_LIST        yes   -    -      -    yes
POOL_NAME          yes   -    -      -    -
VG_NAME            -     -    yes    yes  -
BASE_IQN           -     -    yes    -    -
STAGING_DIR        yes   -    -      -    -
DS_TMP_DIR         -     -    -      -    yes
CEPH_HOST          yes   -    -      -    -
CEPH_SECRET        yes   -    -      -    -
LIMIT_MB           yes   yes  yes    yes  yes
BASE_PATH          yes   yes  yes    yes  yes
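To illustrate how these parameters fit together, the following is a minimal sketch of a Ceph datastore template. The DS_MAD, TM_MAD and DISK_TYPE attributes come from the Ceph datastore guide rather than the list above, and all values are examples:
NAME = "cephds"
DS_MAD = ceph
TM_MAD = ceph
DISK_TYPE = RBD
POOL_NAME = one
BRIDGE_LIST = "cephfrontend"
CEPH_HOST = "host1 host2:port2"
CEPH_SECRET = "6f88b54b-5dae-41fe-a43e-b2763f601cfc"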
OpenNebula 4.6 User Guide
Release 4.6
OpenNebula Project
April 28, 2014
CONTENTS
1 Virtual Resource Management 1
1.1 Introduction to Private Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Managing Virtual Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Managing Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Creating Virtual Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.5 Managing Virtual Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2 Virtual Machine Setup 43
2.1 Contextualization Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2 Adding Content to Your Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.3 Basic Contextualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.4 Advanced Contextualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.5 Windows Contextualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.6 Cloud-init . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3 OpenNebula Marketplace 55
3.1 Interacting with the OpenNebula Marketplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.2 How to Create Apps for the Marketplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4 References 63
4.1 Virtual Machine Definition File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.2 Image Definition Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.3 Virtual Network Definition File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.4 Command Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
CHAPTER
ONE
VIRTUAL RESOURCE MANAGEMENT
1.1 Introduction to Private Cloud Computing
The aim of a Private Cloud is not to expose to the world a cloud interface to sell capacity over the Internet, but
to provide local cloud users and administrators with a flexible and agile private infrastructure to run virtualized
service workloads within the administrative domain. OpenNebula virtual infrastructure interfaces expose
user and administrator functionality for virtualization, networking, image and physical resource configuration,
management, monitoring and accounting. This guide briefly describes how OpenNebula operates to build a Cloud
infrastructure. After reading this guide you may be interested in reading the guide describing how a hybrid cloud
operates and the guide describing how a public cloud operates.
1.1.1 The User View
An OpenNebula Private Cloud provides infrastructure users with an elastic platform for fast delivery and scalability
of services to meet dynamic demands of service end-users. Services are hosted in VMs, and then submitted,
monitored and controlled in the Cloud by using Sunstone or any of the OpenNebula interfaces:
Command Line Interface (CLI)
XML-RPC API
OpenNebula Ruby and Java Cloud APIs
Let's run through a sample session to illustrate the functionality provided by the OpenNebula CLI for Private Cloud
Computing. The first thing to do is check the hosts in the physical cluster:
$ onehost list
ID NAME RVM TCPU FCPU ACPU TMEM FMEM AMEM STAT
0 host01 0 800 800 800 16G 16G 16G on
1 host02 0 800 800 800 16G 16G 16G on
We can then register an image in OpenNebula, by using oneimage. We are going to build an image template to
register the image file we had previously placed in the /home/cloud/images directory.
NAME = Ubuntu
PATH = /home/cloud/images/ubuntu-desktop/disk.0
PUBLIC = YES
DESCRIPTION = "Ubuntu 10.04 desktop for students."
$ oneimage create ubuntu.oneimg
ID: 0
$ oneimage list
ID USER GROUP NAME SIZE TYPE REGTIME PUB PER STAT RVMS
1 oneadmin oneadmin Ubuntu 10G OS 09/29 07:24:35 Yes No rdy 0
This image is now ready to be used in a virtual machine. We need to define a virtual machine template to be submitted
using the onetemplate command.
NAME = my_vm
CPU = 1
MEMORY = 2056
DISK = [ IMAGE_ID = 0 ]
DISK = [ type = swap,
size = 1024 ]
NIC = [ NETWORK_ID = 0 ]
Once we have tailored the requirements to our needs (especially the CPU and MEMORY fields), ensuring that the VM fits
into at least one of the hosts, let's submit the VM (assuming you are currently in your home folder):
$ onetemplate create vm
ID: 0
$ onetemplate list
ID USER GROUP NAME REGTIME PUB
0 oneadmin oneadmin my_vm 09/29 07:28:41 No
The listed template is just a VM denition. To execute an instance, we can use the onetemplate command again:
$ onetemplate instantiate 1
VM ID: 0
This should come back with an ID, that we can use to identify the VM for monitoring and controlling, this time
through the use of the onevm command:
$ onevm list
ID USER GROUP NAME STAT CPU MEM HOSTNAME TIME
0 oneadmin oneadmin one-0 runn 0 0K host01 00 00:00:06
The STAT field tells the state of the virtual machine. If it shows the runn state, the virtual machine is up and running.
Depending on how we set up the image, we may know its IP address. If that is the case we can try now to log
into the VM.
To perform a migration, we use yet again the onevm command. Let's move the VM (with VID=0) to host02
(HID=1):
$ onevm migrate --live 0 1
This will move the VM from host01 to host02. The onevm list shows something like the following:
$ onevm list
ID USER GROUP NAME STAT CPU MEM HOSTNAME TIME
0 oneadmin oneadmin one-0 runn 0 0K host02 00 00:00:48
You can also reproduce this sample session using the graphical interface provided by Sunstone, which simplifies the
typical management operations.
1.1.2 Next Steps
You can now read the different guides describing how to define and manage virtual resources on your OpenNebula
cloud:
Virtual Networks
Virtual Machine Images
Virtual Machine Templates
Virtual Machine Instances
You can also install OneFlow to allow users and administrators to define, execute and manage multi-tiered applications
composed of interconnected Virtual Machines with auto-scaling.
1.2 Managing Virtual Networks
A host is connected to one or more networks that are available to the virtual machines through the corresponding
bridges. OpenNebula allows the creation of Virtual Networks by mapping them on top of the physical ones.
1.2.1 Overview
In this guide you'll learn how to define and use virtual networks. For the sake of simplicity the following examples
assume that the hosts are attached to two physical networks:
A private network, through the virtual bridge vbr0
A network with Internet connectivity, through vbr1
This guide uses the CLI command onevnet, but you can also manage your virtual networks using Sunstone. Select
the Network tab, and there you will be able to create and manage your virtual networks in a user friendly way.
1.2.2 Adding, Deleting and Updating Virtual Networks
A virtual network is defined by two sets of options:
The underlying networking parameters, e.g. BRIDGE, VLAN or PHY_DEV. These attributes depend on the
networking technology (drivers) used by the hosts. Please refer to the specific networking guide.
A set of Leases. A lease defines a MAC - IP pair, related as MAC = MAC_PREFIX:IP. For IPv6 networks the
only relevant part is the MAC address (see below).
Depending on how the lease set is defined, the networks are:
Fixed. A limited (possibly disjoint) set of leases, e.g: 10.0.0.1, 10.0.0.40 and 10.0.0.34
Ranged. A continuous set of leases (i.e., a network range), e.g: 10.0.0.0/24
Please refer to the Virtual Network template reference guide for more information. The onevnet command is used
to create a VNet from that template.
IPv4 Networks
IPv4 leases can be defined in several ways:
Ranged. The range can be defined with:
A network address in CIDR format, e.g. NETWORK_ADDRESS=10.0.0.0/24.
A network address and a net mask, e.g. NETWORK_ADDRESS=10.0.0.0,
NETWORK_MASK=255.255.255.0.
A network address and a size, e.g. NETWORK_ADDRESS=10.0.0.0, NETWORK_SIZE=C.
An arbitrary IP range, e.g. IP_START=10.0.0.1, IP_END=10.0.0.254.
Fixed. Each lease can be defined by:
An IP address, e.g. LEASE=[IP=10.0.0.1]
An IP address and a MAC to override the default MAC generation (MAC=PREFIX:IP), e.g.
LEASE=[IP=10.0.0.1, MAC=e8:9d:87:8d:11:22]
As an example, we will create two new VNets, Blue and Red. Let's assume we have two files, blue.net and
red.net.
The blue.net file:
NAME = "Blue LAN"
TYPE = FIXED
# We have to bind this network to vbr1 for Internet Access
BRIDGE = vbr1
LEASES = [IP=130.10.0.1]
LEASES = [IP=130.10.0.2, MAC=50:20:20:20:20:21]
LEASES = [IP=130.10.0.3]
LEASES = [IP=130.10.0.4]
# Custom Attributes to be used in Context
GATEWAY = 130.10.0.1
DNS = 130.10.0.1
LOAD_BALANCER = 130.10.0.4
And the red.net file:
NAME = "Red LAN"
TYPE = RANGED
# Now we'll use the host private network (physical)
BRIDGE = vbr0
NETWORK_SIZE = C
NETWORK_ADDRESS = 192.168.0.0
# Custom Attributes to be used in Context
GATEWAY = 192.168.0.1
DNS = 192.168.0.1
LOAD_BALANCER = 192.168.0.3
Once the files have been created, we can create the VNets executing:
$ onevnet create blue.net
ID: 0
$ onevnet create red.net
ID: 1
Also, onevnet can be used to query OpenNebula about available VNets:
$ onevnet list
ID USER GROUP NAME CLUSTER TYPE BRIDGE LEASES
0 oneadmin oneadmin Blue LAN - F vbr1 0
1 oneadmin oneadmin Red LAN - R vbr0 0
In the output above, USER is the owner of the network and LEASES the number of IP-MACs assigned to a VM from
this network.
The following attributes can be changed after creating the network: VLAN_ID, BRIDGE, VLAN and PHY_DEV. To
update the network run onevnet update <id>.
To delete a virtual network just use onevnet delete. For example, to delete the previous networks:
$ onevnet delete 2
$ onevnet delete "Red LAN"
You can also check the IPs leased in a network with the onevnet show command.
Check the onevnet command help or the reference guide for more options to list the virtual networks.
IPv6 Networks
OpenNebula can generate three IPv6 addresses associated to each lease:
Link local - fe80::/64, always generated for each lease as IP6_LINK
Unique local address (ULA) - fd00::/8, generated if a local site prefix (SITE_PREFIX) is provided as part of the
network template. The address is associated to the lease as IP6_SITE
Global unicast address - if a global routing prefix (GLOBAL_PREFIX) is provided in the network template;
available in the lease as IP6_GLOBAL
For all the previous addresses the lower 64 bits are populated with a 64-bit interface identifier in modified EUI-64
format. You do not need to define both SITE_PREFIX and GLOBAL_PREFIX, just the ones for the IPv6 addresses
needed by your VMs.
The IPv6 lease set can be generated as follows depending on the network type:
Ranged. You will define a range of MAC addresses (that will be used to generate the EUI-64 host ID in the
guest) with the first MAC and a size, e.g. MAC_START=e8:9d:87:8d:11:22 NETWORK_SIZE=254.
Fixed. Just set the MACs for the network hosts as: LEASE=[MAC=e8:9d:87:8d:11:22]
LEASE=[MAC=88:53:2e:08:7f:a0]
For example, the following template defines a ranged IPv6 network:
NAME = "Red LAN 6"
TYPE = RANGED
BRIDGE = vbr0
MAC_START = 02:00:c0:a8:00:01
NETWORK_SIZE = C
SITE_PREFIX = "fd12:33a:df34:1a::"
GLOBAL_PREFIX = "2004:a128::"
The IP leases are then in the form:
LEASE=[ MAC="02:00:c0:a8:00:01", IP="192.168.0.1", IP6_LINK="fe80::400:c0ff:fea8:1", IP6_SITE="fd12:33a:df34:1a:400:c0ff:fea8:1", IP6_GLOBAL="2004:a128:0:32:400:c0ff:fea8:1", USED="1", VID="4" ]
Note that IPv4 addresses are generated from the MAC address in case you need to configure IPv4 and IPv6 addresses
for the network.
1.2.3 Managing Virtual Networks
Adding and Removing Leases
You can add and remove leases to existing FIXED virtual networks (see the template file reference for more info on
the network types). To do so, use the onevnet addleases and onevnet rmleases commands.
The new lease can be added specifying its IP and, optionally, its MAC. If the lease already exists, the action will fail.
$ onevnet addleases 0 130.10.0.10
$ onevnet addleases 0 130.10.0.11 50:20:20:20:20:31
$
$ onevnet addleases 0 130.10.0.1
[VirtualNetworkAddLeases] Error modifiying network leases. Error inserting lease,
IP 130.10.0.1 already exists
To remove existing leases from the network, they must be free (i.e., not used by any VM).
$ onevnet rmleases 0 130.10.0.3
Hold and Release Leases
Leases can be temporarily be marked on hold state. These leases are reserved, they are part of the network, but they
will not be assigned to any VM.
To do so, use the onevnet hold and onevnet release commands. You see the list of leases on hold with the onevnet
show command.
$ onevnet hold "Blue LAN" 130.10.0.1
$ onevnet hold 0 130.10.0.4
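To put a held lease back into the pool, release it with the same addressing (a sketch mirroring the hold commands above):
$ onevnet release 0 130.10.0.4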
Lease Management in Sunstone
If you are using the Sunstone GUI, you can easily add, remove, hold and release leases from the extended
information dialog of a Virtual Network. You can open this dialog by clicking the desired element in the Virtual
Network table.
Update the Virtual Network Template
The TEMPLATE section can hold any arbitrary data. You can use the onevnet update command to open an editor
and edit or add new template attributes. These attributes can be later used in the Virtual Machine Contextualization.
For example:
dns = "$NETWORK[DNS, NETWORK_ID=3]"
Publishing Virtual Networks
The users can share their virtual networks with other users in their group, or with all the users in OpenNebula. See the
Managing Permissions documentation for more information.
Let's see a quick example. To share the virtual network 0 with users in the group, the USE right bit for GROUP must
be set with the chmod command:
$ onevnet show 0
...
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
$ onevnet chmod 0 640
$ onevnet show 0
...
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---
The following command allows users in the same group to USE and MANAGE the virtual network, and the rest of the
users to USE it:
$ onevnet chmod 0 664
$ onevnet show 0
...
PERMISSIONS
OWNER : um-
GROUP : um-
OTHER : u--
The commands onevnet publish and onevnet unpublish are still present for compatibility with previous
versions. These commands set/unset the GROUP USE bit.
1.2.4 Getting a Lease
A lease from a virtual network can be obtained by simply specifying the virtual network name in the NIC attribute.
For example, to define a VM with two network interfaces, one connected to Red LAN and the other connected to Blue
LAN, just include in the template:
NIC = [ NETWORK_ID = 0 ]
NIC = [ NETWORK = "Red LAN" ]
Networks can be referenced in a NIC in two different ways, see the Simplified Virtual Machine Definition File documentation
for more information:
NETWORK_ID, using its ID as returned by the create operation
NETWORK, using its name. In this case the name refers to one of the virtual networks owned by the user
(names can not be repeated for the same user). If you want to refer to a NETWORK of another user you can
specify that with NETWORK_UID (by the uid of the user) or NETWORK_UNAME (by the name of the user).
You can also request a specific address just by adding the IP attribute to the NIC (or the MAC attribute, especially in IPv6):
NIC = [ NETWORK_ID = 1, IP = 192.168.0.3 ]
When the VM is submitted, OpenNebula will look for available IPs in the Blue LAN and Red LAN virtual networks.
The leases on hold will be skipped. If successful, the onevm show command should return information about the
machine, including network information.
$ onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID : 0
NAME : server
USER : oneadmin
GROUP : oneadmin
STATE : PENDING
LCM_STATE : LCM_INIT
START TIME : 12/13 06:59:07
END TIME : -
DEPLOY ID : -
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
VIRTUAL MACHINE MONITORING
NET_TX : 0
NET_RX : 0
USED MEMORY : 0
USED CPU : 0
VIRTUAL MACHINE TEMPLATE
NAME=server
NIC=[
BRIDGE=vbr1,
IP=130.10.0.2,
MAC=02:00:87:8d:11:25,
IP6_LINK=fe80::400:87ff:fe8d:1125,
NETWORK="Blue LAN",
NETWORK_ID=0,
VLAN=NO ]
NIC=[
BRIDGE=vbr0,
IP=192.168.0.2,
IP6_LINK=fe80::400:c0ff:fea8:2,
MAC=00:03:c0:a8:00:02,
NETWORK="Red LAN",
NETWORK_ID=1,
VLAN=NO ]
VMID=0
Warning: Note that if OpenNebula is not able to obtain a lease from a network the submission will fail.
Now we can query OpenNebula with onevnet show to find out about given leases and other VNet information:
$ onevnet list
ID USER GROUP NAME CLUSTER TYPE BRIDGE LEASES
0 oneadmin oneadmin Blue LAN - F vbr1 3
1 oneadmin oneadmin Red LAN - R vbr0 3
Note that there are two LEASES on hold, and one LEASE used in each network:
$ onevnet show 1
VIRTUAL NETWORK 1 INFORMATION
ID : 1
NAME : Red LAN
USER : oneadmin
GROUP : oneadmin
TYPE : RANGED
BRIDGE : vbr0
VLAN : No
PHYSICAL DEVICE:
VLAN ID :
USED LEASES : 3
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
VIRTUAL NETWORK TEMPLATE
DNS=192.168.0.1
GATEWAY=192.168.0.1
LOAD_BALANCER=192.168.0.3
NETWORK_MASK=255.255.255.0
RANGE
IP_START : 192.168.0.1
IP_END : 192.168.0.254
LEASES ON HOLD
LEASE=[ MAC="02:00:c0:a8:00:01", IP="192.168.0.1", IP6_LINK="fe80::400:c0ff:fea8:1", USED="1", VID="-1" ]
LEASE=[ MAC="02:00:c0:a8:00:03", IP="192.168.0.3", IP6_LINK="fe80::400:c0ff:fea8:3", USED="1", VID="-1" ]
USED LEASES
LEASE=[ MAC="02:00:c0:a8:00:02", IP="192.168.0.2", IP6_LINK="fe80::400:c0ff:fea8:2", USED="1", VID="4" ]
Warning: IP 192.168.0.2 is in use by Virtual Machine 4
Apply Firewall Rules to VMs
You can apply firewall rules on your VMs, to filter TCP and UDP ports, and to define a policy for ICMP connections.
Read more about this feature here.
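As a sketch of what these rules look like, the filtering attributes are set per NIC in the VM template; the attribute names below are described in the referenced guide, and the values are examples:
NIC = [ NETWORK = "Blue LAN",
        WHITE_PORTS_TCP = "22,80",
        ICMP = drop ]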
Using the Leases within the Virtual Machine
Hypervisors can attach a specific MAC address to a virtual network interface, but Virtual Machines need to obtain an
IP address.
In order to configure the IP inside the guest, you need to use one of the two available methods:
Instantiate a Virtual Router inside each Virtual Network. The Virtual Router appliance contains a DHCP server
that knows the IP assigned to each VM.
Contextualize the VM. Please visit the contextualization guide to learn how to configure your Virtual Machines
to automatically obtain an IP derived from the MAC.
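For instance, a minimal sketch of the second method: enabling network contextualization in the VM template, so the context packages inside the guest configure the IP from the lease data:
CONTEXT = [ NETWORK = "YES" ]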
1.3 Managing Images
The Storage system allows OpenNebula administrators and users to easily set up images, which can be operating systems
or data, to be used in Virtual Machines. These images can be used by several Virtual Machines simultaneously,
and also shared with other users.
If you want to customize the Storage in your system, visit the Storage subsystem guide.
1.3.1 Image Types
There are six different types of images. Using the command oneimage chtype, you can change the type of an
existing Image.
OS: An OS image contains a working operating system. Every VM template must define one DISK referring to
an image of this type.
CDROM: These images are read-only data. Only one image of this type can be used in each VM template. These
types of images are not cloned when using shared storage.
DATABLOCK: A datablock image is storage for data, which can be accessed and modified from different
Virtual Machines. These images can be created from previously existing data, or as an empty drive.
KERNEL: A plain file to be used as kernel (VM attribute OS/KERNEL_DS). Note that KERNEL file images
can be registered only in File Datastores.
RAMDISK: A plain file to be used as ramdisk (VM attribute OS/INITRD_DS). Note that RAMDISK file
images can be registered only in File Datastores.
CONTEXT: A plain file to be included in the context CD-ROM (VM attribute CONTEXT/FILES_DS). Note
that CONTEXT file images can be registered only in File Datastores.
The Virtual Machines can use as many datablocks as needed. Refer to the VM template documentation for further
information.
Warning: Note that some of the operations described below do not apply to KERNEL, RAMDISK and CONTEXT
images, in particular: clone and persistent.
1.3.2 Image Life-cycle
Short state  State      Meaning
lock         LOCKED     The image file is being copied or created in the Datastore.
rdy          READY      Image ready to be used.
used         USED       Non-persistent Image used by at least one VM. It can still be used by other VMs.
used         USED_PERS  Persistent Image in use by a VM. It cannot be used by new VMs.
disa         DISABLED   Image disabled by the owner, it cannot be used by new VMs.
err          ERROR      Error state, an FS operation failed. See the Image information with oneimage show for an error message.
dele         DELETE     The image is being deleted from the Datastore.
This is the state diagram for persistent images:
And the following one is the state diagram for non-persistent images:
1.3.3 Managing Images
Users can manage their images using the command line interface command oneimage. The complete reference is
here.
You can also manage your images using Sunstone. Select the Images tab, and there you will be able to create, enable,
disable, delete your images and even manage their persistence and publicity in a user friendly way. From Sunstone
3.4, you can also upload images directly from the web UI.
Create Images
Warning: For VMware images, please also read the VMware Drivers guide.
The three disk image types can be created from an existing file, but for datablock images you can also specify a size and
filesystem type and let OpenNebula create an empty image in the datastore.
If you want to create an OS image, you need to prepare a contextualized virtual machine, and extract its disk.
Please first read the documentation about the MAC to IP mechanism in the virtual network management documentation,
and how to use contextualization here.
Once you have a disk you want to upload, you need to create a new image template, and submit it using the oneimage
create command.
The complete reference for the image template is here. This is what a sample template looks like:
$ cat ubuntu_img.one
NAME = "Ubuntu"
PATH = /home/cloud/images/ubuntu-desktop/disk.0
TYPE = OS
DESCRIPTION = "Ubuntu 10.04 desktop for students."
You need to choose the Datastore where to register the new Image. To know the available datastores, use the
onedatastore list command. In this case, only the default one is listed:
$ onedatastore list
ID NAME CLUSTER IMAGES TYPE TM
1 default - 1 fs shared
To submit the template, you just have to issue the command:
$ oneimage create ubuntu_img.one --datastore default
ID: 0
You can also create images using just parameters in the oneimage create call. The parameters to generate the
image are as follows:
Parameter              Description
--name name            Name of the new image
--description desc     Description for the new Image
--type type            Type of the new Image: OS, CDROM, DATABLOCK or FILE
--persistent           Tells if the image will be persistent
--prefix prefix        Device prefix for the disk (e.g. hd, sd, xvd or vd)
--target target        Device the disk will be attached to
--path path            Path of the image file
--driver driver        Driver to use for the image (raw, qcow2, tap:aio:...)
--disk_type disk_type  Type of the image (BLOCK, CDROM or FILE)
--source source        Source to be used. Useful for non file-based images
--size size            Size in MB. Used for DATABLOCK type
--fstype fstype        Type of file system to be built: ext2, ext3, ext4, ntfs, reiserfs, jfs, swap, qcow2
To create the previous example image you can do it like this:
$ oneimage create --datastore default --name Ubuntu --path /home/cloud/images/ubuntu-desktop/disk.0 \
--description "Ubuntu 10.04 desktop for students."
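An empty datablock can likewise be created directly from the CLI by giving a size and filesystem type instead of a path; a sketch, with an example name and values:
$ oneimage create --datastore default --name empty-data --type DATABLOCK --size 1024 --fstype ext3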
Warning: You can use gz compressed image files (i.e. as specified in path) when registering them in OpenNebula.
Uploading Images from Sunstone
Image file upload to the server via the client browser is possible with the help of a vendor library. The process is as
follows:
Step 1: The client uploads the whole image to the server, to a temporary file in the tmpdir folder specified in the
configuration.
Step 2: OpenNebula registers an image setting the PATH to that temporary file.
Step 3: OpenNebula copies the image to the datastore.
Step 4: The temporary file is deleted and the request returns successfully to the user (a message pops up indicating
that the image was uploaded correctly).
Note that when file sizes become big (normally over 1GB), and depending on your hardware, it may take a long time to
complete the copying in step 3. Since the upload request needs to stay pending until copying is successful (so it can
delete the temp file safely), there might be Ajax timeouts and/or lack of response from the server. This may cause
errors, or trigger re-uploads (which reinitiate the loading progress bar).
As of Firefox 11 and previous versions, uploads seem to be limited to 2GB. Chrome seems to work well with images
over 4GB.
Clone Images
Existing images can be cloned to a new one. This is useful to make a backup of an Image before you modify it, or to
get a private persistent copy of an image shared by another user.
To clone an image, execute
$ oneimage clone Ubuntu new_image
Listing Available Images
You can use the oneimage list command to check the available images in the repository.
$ oneimage list
ID USER GROUP NAME DATASTORE SIZE TYPE PER STAT RVMS
0 oneuser1 users Ubuntu default 8M OS No rdy 0
To get complete information about an image, use oneimage show, or list images continuously with oneimage
top.
Publishing Images
The users can share their images with other users in their group, or with all the users in OpenNebula. See the Managing
Permissions documentation for more information.
Let's see a quick example. To share the image 0 with users in the group, the USE right bit for GROUP must be set
with the chmod command:
$ oneimage show 0
...
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
$ oneimage chmod 0 640
$ oneimage show 0
...
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---
The following command allows users in the same group to USE and MANAGE the image, and the rest of the users to USE
it:
$ oneimage chmod 0 664
$ oneimage show 0
...
PERMISSIONS
OWNER : um-
GROUP : um-
OTHER : u--
The commands oneimage publish and oneimage unpublish are still present for compatibility with previ-
ous versions. These commands set/unset the GROUP USE bit.
Making Images Persistent
Use the oneimage persistent and oneimage nonpersistent commands to make your images persistent
or not.
A persistent image saves back to the datastore the changes made inside the VM after it is shut down. More specifically,
the changes are correctly preserved only if the VM is ended with the onevm shutdown or onevm shutdown
--hard commands. Note that depending on the Datastore type a persistent image can be a link to the original image,
so any modification is directly made on the image.
$ oneimage list
ID USER GROUP NAME DATASTORE SIZE TYPE PER STAT RVMS
0 oneadmin oneadmin Ubuntu default 10G OS No rdy 0
$ oneimage persistent Ubuntu
$ oneimage list
ID USER GROUP NAME DATASTORE SIZE TYPE PER STAT RVMS
0 oneadmin oneadmin Ubuntu default 10G OS Yes rdy 0
$ oneimage nonpersistent 0
$ oneimage list
ID USER GROUP NAME DATASTORE SIZE TYPE PER STAT RVMS
0 oneadmin oneadmin Ubuntu default 10G OS No rdy 0
Warning: When images are public (GROUP or OTHER USE bit set) they are always cloned, and persistent
images are never cloned. Therefore, an image cannot be public and persistent at the same time. To manage a public
image that won't be cloned, unpublish it first and make it persistent.
1.3.4 How to Use Images in Virtual Machines
This is a simple example of how to specify images as virtual machine disks. Please visit the virtual machine user guide
and the virtual machine template documentation for a more thorough explanation.
Assuming you have an OS image called Ubuntu desktop with ID 1, you can use it in your virtual machine template as
a DISK. When this machine is deployed, the first disk will be taken from the image repository.
Images can be referenced in a DISK in two different ways:
IMAGE_ID, using its ID as returned by the create operation
IMAGE, using its name. In this case the name refers to one of the images owned by the user (names can not
be repeated for the same user). If you want to refer to an IMAGE of another user you can specify that with
IMAGE_UID (by the uid of the user) or IMAGE_UNAME (by the name of the user).
CPU = 1
MEMORY = 3.08
DISK = [ IMAGE_ID = 1 ]
DISK = [ type = swap,
size = 1024 ]
NIC = [ NETWORK_ID = 1 ]
NIC = [ NETWORK_ID = 0 ]
# FEATURES=[ acpi="no" ]
GRAPHICS = [
type = "vnc",
listen = "1.2.3.4",
port = "5902" ]
CONTEXT = [
files = "/home/cloud/images/ubuntu-desktop/init.sh" ]
Save Changes
Once the VM is deployed you can snapshot a disk, i.e. save the changes made to the disk as a new image. There are
two types of disk snapshots in OpenNebula:
Deferred snapshots (disk-snapshot), changes to a disk will be saved as a new Image in the associated datastore
when the VM is shutdown.
Hot snapshots (hot disk-snapshot), just as the deferred snapshots, but the disk is copied to the datastore the
moment the operation is triggered. Therefore, you must guarantee that the disk is in a consistent state during the
save_as operation (e.g. by unmounting the disk from the VM).
To save a disk, use the onevm disk-snapshot command. This command takes three arguments: the VM name
(or ID), the disk ID to save, and the name of the new image to register. Optionally, the --live argument can be added
to not defer the disk-snapshot operation.
To know the ID of the disk you want to save, just take a look at the onevm show output for your VM; you are
interested in the ID column in the VM DISKS section.
$ onevm show 11
VIRTUAL MACHINE 11 INFORMATION
ID : 11
NAME : ttylinux-11
USER : ruben
GROUP : oneadmin
STATE : PENDING
LCM_STATE : LCM_INIT
RESCHED : No
START TIME : 03/08 22:24:57
END TIME : -
DEPLOY ID : -
VIRTUAL MACHINE MONITORING
USED MEMORY : 0K
USED CPU : 0
NET_TX : 0K
NET_RX : 0K
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
VM DISKS
ID TARGET IMAGE TYPE SAVE SAVE_AS
0 hda ttylinux file NO -
1 hdb raw - 100M fs NO -
VM NICS
ID NETWORK VLAN BRIDGE IP MAC
0 net_172 no vbr0 172.16.0.201 02:00:ac:10:00:c9
fe80::400:acff:fe10:c9
VIRTUAL MACHINE TEMPLATE
CPU="1"
GRAPHICS=[
LISTEN="0.0.0.0",
PORT="5911",
TYPE="vnc" ]
MEMORY="512"
OS=[
ARCH="x86_64" ]
TEMPLATE_ID="0"
VCPU="1"
The IDs are assigned in the same order the disks were defined in the VM template.
The next command will register a new image called SO upgraded, that will be ready as soon as the VM is shut down.
Till then the image will be locked, so you cannot use it.
$ onevm disk-snapshot ttylinux-11 0 "SO upgraded"
This command copies disk 1 to the datastore as an image named Backup of DB volume; the image will be available once the
image copy ends:
$ onevm disk-snapshot --live ttylinux-11 1 "Backup of DB volume"
1.3.5 How to Use File Images in Virtual Machines
KERNEL and RAMDISK
KERNEL and RAMDISK type Images can be used in the OS/KERNEL_DS and OS/INITRD_DS attributes of the
VM template. See the complete reference for more information.
Example:
OS = [ KERNEL_DS = "$FILE[IMAGE=kernel3.6]",
INITRD_DS = "$FILE[IMAGE_ID=23]",
ROOT = "sda1",
KERNEL_CMD = "ro xencons=tty console=tty1" ]
CONTEXT
The contextualization cdrom can include CONTEXT type Images. Visit the complete reference for more information.
CONTEXT = [
FILES_DS = "$FILE[IMAGE_ID=34] $FILE[IMAGE=kernel]",
]
1.4 Creating Virtual Machines
In OpenNebula the Virtual Machines are defined with Template files. This guide explains how to describe the Virtual
Machine you want to run, and how users typically interact with the system.
The Template Repository system allows OpenNebula administrators and users to register Virtual Machine definitions
in the system, to be instantiated later as Virtual Machine instances. These Templates can be instantiated several times,
and also shared with other users.
1.4.1 Virtual Machine Model
A Virtual Machine within the OpenNebula system consists of:
A capacity in terms of memory and CPU
A set of NICs attached to one or more virtual networks
A set of disk images
A state file (optional) or recovery file, that contains the memory image of a running VM plus some hypervisor
specific information.
The above items, plus some additional VM attributes like the OS kernel and context information to be used inside the
VM, are specified in a template file.
1.4.2 Defining a VM in 3 Steps
Virtual Machines are defined in an OpenNebula Template. Templates are stored in a repository to easily browse and
instantiate VMs from them. To create a new Template you have to define 3 things:
Capacity & Name, how big will the VM be?
Attribute  Description                                           Mandatory  Default
NAME       Name that the VM will get for description purposes.   Yes        one-<vmid>
MEMORY     Amount of RAM required for the VM, in Megabytes.      Yes
CPU        CPU ratio (e.g. half a physical CPU is 0.5).          Yes
VCPU       Number of virtual cpus.                               No         1
Disks. Each disk is defined with a DISK attribute. A VM can use three types of disk:
Use a persistent Image: changes to the disk image will persist after the VM is shutdown.
Use a non-persistent Image: images are cloned, changes to the image will be lost.
Volatile disks are created on the fly on the target host. After the VM is shutdown the disk is disposed of.
Persistent and Clone Disks
Attribute Description Mandatory Default
IMAGE_ID and IMAGE The ID or Name of the image in the datastore Yes
IMAGE_UID Select the IMAGE of a given user by her ID No self
IMAGE_UNAME Select the IMAGE of a given user by her NAME No self
Volatile
Attribute  Description                                                           Mandatory  Default
TYPE       Type of the disk: swap, fs. The swap type will set the label to       Yes
           swap so it is easier to mount, and the context packages will
           automatically mount it.
SIZE       Size in MB                                                            Yes
FORMAT     Filesystem for fs images: ext2, ext3, etc. raw will not format the    Yes
           image. For VMs to run on vmfs or vmware shared configurations,
           the valid values are: vmdk_thin, vmdk_zeroedthick,
           vmdk_eagerzeroedthick
Network Interfaces. Each network interface of a VM is defined with the NIC attribute.
Attribute                Description                                      Mandatory  Default
NETWORK_ID and NETWORK   The ID or Name of the virtual network            Yes
NETWORK_UID              Select the NETWORK of a given user by her ID     No         self
NETWORK_UNAME            Select the NETWORK of a given user by her NAME   No         self
The following example shows a VM Template file with a couple of disks and a network interface; a VNC section
was also added.
NAME = test-vm
MEMORY = 128
CPU = 1
DISK = [ IMAGE = "Arch Linux" ]
DISK = [ TYPE = swap,
SIZE = 1024 ]
NIC = [ NETWORK = "Public", NETWORK_UNAME="oneadmin" ]
GRAPHICS = [
TYPE = "vnc",
LISTEN = "0.0.0.0"]
Simple templates can also be created using the command line instead of creating a template file. The parameters to do
this for onetemplate are:
Parameter              Description
--name name            Name for the VM
--cpu cpu              CPU percentage reserved for the VM (1=100% one CPU)
--vcpu vcpu            Number of virtualized CPUs
--arch arch            Architecture of the VM, e.g.: i386 or x86_64
--memory memory        Memory amount given to the VM
--disk disk0,disk1     Disks to attach. To use a disk owned by another user use user[disk]
--nic vnet0,vnet1      Networks to attach. To use a network owned by another user use user[network]
--raw string           Raw string to add to the template. Not to be confused with the RAW attribute. If you want to
                       provide more than one element, just include a newline inside quotes, instead of using more than
                       one --raw option
--vnc                  Add a VNC server to the VM
--ssh [file]           Add an ssh public key to the context. If the file is omitted then the user variable
                       SSH_PUBLIC_KEY will be used.
--net_context          Add network contextualization parameters
--context line1,line2  Lines to add to the context section
--boot device          Select boot device (hd, fd, cdrom or network)
A similar template as the previous example can be created with the following command:
$ onetemplate create --name test-vm --memory 128 --cpu 1 --disk "Arch Linux" --nic Public
Warning: You may want to add VNC access, input hw or change the default targets of the disks. Check the VM
definition file for a complete reference.
Warning: OpenNebula Templates are designed to be hypervisor-agnostic, but there are additional attributes that
are supported for each hypervisor. Check the Xen, KVM and VMware configuration guides for more details.
Warning: Volatile disks can not be saved as a new image. Pre-register a DataBlock image if you need to attach arbitrary
volumes to the VM.
1.4.3 Managing Templates
Users can manage the Template Repository using the command onetemplate, or the graphical interface Sunstone.
For each user, the actual list of available templates is determined by the ownership and permissions of the templates.
Listing Available Templates
You can use the onetemplate list command to check the available Templates in the system.
$ onetemplate list a
ID USER GROUP NAME REGTIME
0 oneadmin oneadmin template-0 09/27 09:37:00
1 oneuser users template-1 09/27 09:37:19
2 oneadmin oneadmin Ubuntu_server 09/27 09:37:42
To get complete information about a Template, use onetemplate show.
Here is a view of the Templates tab in Sunstone.
Adding and Deleting Templates
Using onetemplate create, users can create new Templates for private or shared use. The onetemplate
delete command allows the Template owner -or the OpenNebula administrator- to delete it from the repository.
For instance, if the previous example template is written in the vm-example.txt file:
$ onetemplate create vm-example.txt
ID: 6
You can also clone an existing Template, with the onetemplate clone command:
$ onetemplate clone 6 new_template
ID: 7
Via Sunstone, you can easily add templates using the provided wizards (or by copy/pasting a template file) and delete
them by clicking on the delete button.
Updating a Template
It is possible to update a template by using the onetemplate update command. This will launch the editor defined in the
EDITOR environment variable and let you edit the template.
$ onetemplate update 3
Publishing Templates
The users can share their Templates with other users in their group, or with all the users in OpenNebula. See the
Managing Permissions documentation for more information.
Let's see a quick example. To share the Template 0 with users in the group, the USE right bit for GROUP must be set
with the chmod command:
$ onetemplate show 0
...
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
$ onetemplate chmod 0 640
$ onetemplate show 0
...
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---
The following command allows users in the same group to USE and MANAGE the Template, and the rest of the users
to USE it:
$ onetemplate chmod 0 664
$ onetemplate show 0
...
PERMISSIONS
OWNER : um-
GROUP : um-
OTHER : u--
The commands onetemplate publish and onetemplate unpublish are still present for compatibility
with previous versions. These commands set/unset the GROUP USE bit.
1.4.4 Instantiating Templates
The onetemplate instantiate command accepts a Template ID or name, and creates a VM instance from the
given template (you can define the number of instances using the --multiple num_of_instances option; see the example below).
$ onetemplate instantiate 6
VM ID: 0
$ onevm list
ID USER GROUP NAME STAT CPU MEM HOSTNAME TIME
0 oneuser1 users one-0 pend 0 0K 00 00:00:16
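The --multiple option mentioned above creates several instances in a single call; for example, a sketch that starts three VMs from the same Template:
$ onetemplate instantiate 6 --multiple 3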
You can also merge another template with the one being instantiated. The new attributes will be added, or will replace
the ones from the source template. This can be more convenient than cloning an existing template and updating it.
$ cat /tmp/file
MEMORY = 512
COMMENT = "This is a bigger instance"
$ onetemplate instantiate 6 /tmp/file
VM ID: 1
The same options used to create new templates can be merged with an existing one. See the above table, or
execute onetemplate instantiate --help for a complete reference.
$ onetemplate instantiate 6 --cpu 2 --memory 1024
VM ID: 2
Merge Use Case
The template merge functionality, combined with the restricted attributes, can be used to allow users some degree of
customization for predefined templates.
Let's say the administrator wants to provide base templates that the users can customize, but with some restrictions.
Having the following restricted attributes in oned.conf:
VM_RESTRICTED_ATTR = "CPU"
VM_RESTRICTED_ATTR = "VCPU"
VM_RESTRICTED_ATTR = "NIC"
And the following template:
CPU = "1"
VCPU = "1"
MEMORY = "512"
DISK=[
IMAGE_ID = "0" ]
NIC=[
NETWORK_ID = "0" ]
Users can instantiate it customizing anything except the CPU, VCPU and NIC. To create a VM with different memory
and disks:
$ onetemplate instantiate 0 --memory 1G --disk "Ubuntu 12.10"
Warning: The merged attributes replace the existing ones. To add a new disk, the current one must also be
added.
$ onetemplate instantiate 0 --disk 0,"Ubuntu 12.10"
1.4.5 Deployment
The OpenNebula Scheduler will automatically deploy the VMs in one of the available Hosts, if they meet the requirements.
The deployment can be forced by an administrator using the onevm deploy command.
Use onevm shutdown to shut down a running VM.
Continue to the Managing Virtual Machine Instances Guide to learn more about the VM Life Cycle, and the available
operations that can be performed.
1.5 Managing Virtual Machines
This guide follows the Creating Virtual Machines guide. Once a Template is instantiated to a Virtual Machine, there
are a number of operations that can be performed using the onevm command.
1.5.1 Virtual Machine Life-cycle
The life-cycle of a Virtual Machine within OpenNebula includes the following stages:
Warning: Note that this is a simplified version. If you are a developer you may want to take a look at the complete
diagram referenced in the xml-rpc api page:
Short state  State       Meaning
pend         Pending     By default a VM starts in the pending state, waiting for a resource to run on. It will stay in
                         this state until the scheduler decides to deploy it, or the user deploys it using the onevm
                         deploy command.
hold         Hold        The owner has held the VM and it will not be scheduled until it is released. It can be,
                         however, deployed manually.
prol         Prolog      The system is transferring the VM files (disk images and the recovery file) to the host in
                         which the virtual machine will be running.
boot         Boot        OpenNebula is waiting for the hypervisor to create the VM.
runn         Running     The VM is running (note that this stage includes the internal virtualized machine booting
                         and shutting down phases). In this state, the virtualization driver will periodically monitor it.
migr         Migrate     The VM is migrating from one resource to another. This can be a live migration or a cold
                         migration (the VM is saved and VM files are transferred to the new resource).
hotp         Hotplug     A disk attach/detach or nic attach/detach operation is in progress.
snap         Snapshot    A system snapshot is being taken.
save         Save        The system is saving the VM files after a migration, stop or suspend operation.
epil         Epilog      In this phase the system cleans up the Host used to virtualize the VM, and additionally disk
                         images to be saved are copied back to the system datastore.
shut         Shutdown    OpenNebula has sent the VM the shutdown ACPI signal, and is waiting for it to complete
                         the shutdown process. If after a timeout period the VM does not disappear, OpenNebula will
                         assume that the guest OS ignored the ACPI signal and the VM state will be changed to
                         running, instead of done.
stop         Stopped     The VM is stopped. VM state has been saved and it has been transferred back along with the
                         disk images to the system datastore.
susp         Suspended   Same as stopped, but the files are left in the host to later resume the VM there (i.e. there is
                         no need to re-schedule the VM).
poff         PowerOff    Same as suspended, but no checkpoint file is generated. Note that the files are left in the host
                         to later boot the VM there.
unde         Undeployed  The VM is shut down. The VM disks are transferred to the system datastore. The VM can be
                         resumed later.
fail         Failed      The VM failed.
unkn         Unknown     The VM couldn't be reached; it is in an unknown state.
done         Done        The VM is done. VMs in this state won't be shown with onevm list but are kept in the
                         database for accounting purposes. You can still get their information with the onevm show
                         command.
1.5.2 Managing Virtual Machines
The following sections show the basics of the onevm command with simple usage examples. A complete reference
for these commands can be found here.
Create and List Existing VMs
Warning: Read the Creating Virtual Machines guide for more information on how to manage and instantiate VM
Templates.
Warning: Read the complete reference for Virtual Machine templates.
Assuming we have a VM Template registered called vm-example with ID 6, we can instantiate the VM by issuing:
$ onetemplate list
ID USER GROUP NAME REGTIME
6 oneadmin oneadmin vm_example 09/28 06:44:07
$ onetemplate instantiate vm-example --name my_vm
VM ID: 0
Afterwards, the VM can be listed with the onevm list command. You can also use the onevm top command to
list VMs continuously.
$ onevm list
ID USER GROUP NAME STAT CPU MEM HOSTNAME TIME
0 oneadmin oneadmin my_vm pend 0 0K 00 00:00:03
After a Scheduling cycle, the VM will be automatically deployed. But the deployment can also be forced by oneadmin
using onevm deploy:
$ onehost list
ID NAME RVM TCPU FCPU ACPU TMEM FMEM AMEM STAT
2 testbed 0 800 800 800 16G 16G 16G on
$ onevm deploy 0 2
$ onevm list
ID USER GROUP NAME STAT CPU MEM HOSTNAME TIME
0 oneadmin oneadmin my_vm runn 0 0K testbed 00 00:02:40
and details about it can be obtained with show:
$ onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID : 0
NAME : my_vm
USER : oneadmin
GROUP : oneadmin
STATE : ACTIVE
LCM_STATE : RUNNING
START TIME : 04/14 09:00:24
END TIME : -
DEPLOY ID : one-0
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
VIRTUAL MACHINE MONITORING
NET_TX : 13.05
NET_RX : 0
USED MEMORY : 512
USED CPU : 0
VIRTUAL MACHINE TEMPLATE
...
VIRTUAL MACHINE HISTORY
SEQ HOSTNAME REASON START TIME PTIME
0 testbed none 09/28 06:48:18 00 00:07:23 00 00:00:00
Terminating VM Instances...
You can terminate a running instance with the following operations (either as onevm commands or through Sunstone):
shutdown: Gracefully shuts down a running VM, sending the ACPI signal. Once the VM is shut down the
host is cleaned, and persistent and deferred-snapshot disks will be moved to the associated datastore. If after a
given time the VM is still running (e.g. the guest is ignoring ACPI signals), OpenNebula will return the VM to the
RUNNING state.
shutdown --hard: Same as above but the VM is immediately destroyed. Use this action instead of
shutdown when the VM doesn't have ACPI support.
If you need to terminate an instance in any state use:
delete: The VM is immediately destroyed no matter its state. Hosts are cleaned as needed but no images are
moved to the repository, leaving them in an error state. Think of delete as kill -9 for a process; it should only
be used when the VM is not responding to other actions.
All the above operations free the resources used by the VM.
Pausing VM Instances...
There are two different ways to temporarily stop the execution of a VM: short and long term pauses. A short term
pause keeps all the VM resources allocated to the host, so the VM can quickly resume its operation in the same host. Use the
following onevm commands or Sunstone actions:
suspend: the VM state is saved in the running Host. When a suspended VM is resumed, it is immediately
deployed in the same Host by restoring its saved state.
poweroff: Gracefully powers off a running VM by sending the ACPI signal. It is similar to suspend but
without saving the VM state. When the VM is resumed it will boot immediately in the same Host.
poweroff --hard: Same as above but the VM is immediately powered off. Use this action when the VM
doesn't have ACPI support.
You can also plan a long term pause. The Host resources used by the VM are freed and the Host is cleaned. Any
needed disk is saved in the system datastore. The following actions are useful if you want to preserve network and
storage allocations (e.g. IPs, persistent disk images):
undeploy: Gracefully shuts down a running VM, sending the ACPI signal. The Virtual Machine disks are
transferred back to the system datastore. When an undeployed VM is resumed, it is moved to the pending
state, and the scheduler will choose where to re-deploy it.
undeploy --hard: Same as above but the running VM is immediately destroyed.
stop: Same as undeploy but also the VM state is saved to later resume it.
When the VM is successfully paused you can resume its execution with:
resume: Resumes the execution of VMs in the stopped, suspended, undeployed and poweroff states.
Resetting VM Instances...
There are two ways of resetting a VM: in-host and full reset. The first one does not free any resources and resets a
RUNNING VM instance at the hypervisor level:
reboot: Gracefully reboots a running VM, sending the ACPI signal.
reboot --hard: Performs a hard reboot.
A VM instance can be reset in any state with:
delete --recreate: Deletes the VM as described above, but instead of disposing of it the VM is moved
again to the PENDING state. As with the delete operation, this action should be used when the VM is not responding to
other actions. Try undeploy or undeploy --hard first.
Delaying VM Instances...
The deployment of a PENDING VM (e.g. after creating or resuming it) can be delayed with:
hold: Sets the VM to hold state. The scheduler will not deploy VMs in the hold state. Please note that VMs
can be created directly on hold, using onetemplate instantiate --hold or onevm create --hold.
Then you can resume it with:
release: Releases a VM from hold state, setting it to pending. Note that you can automatically release a VM
by scheduling the operation as explained below.
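A quick sketch of the pair in action, assuming VM 0 is pending:
$ onevm hold 0
$ onevm release 0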
Life-Cycle Operations for Administrators
There are some onevm command operations meant for the cloud administrators:
Scheduling:
resched: Sets the reschedule flag for the VM. The Scheduler will migrate (or migrate live, depending on the
Scheduler configuration) the VM in the next monitoring cycle to a Host that better matches the requirements
and rank restrictions. Read more in the Scheduler documentation.
unresched: Clears the reschedule flag for the VM, canceling the rescheduling operation.
Deployment:
deploy: Starts an existing VM in a specic Host.
migrate --live: The Virtual Machine is transferred between Hosts with no noticeable downtime. This
action requires shared file system storage.
migrate: The VM gets stopped and resumed in the target host.
Note: By default, the above operations do not check the target host capacity. You can use the -e (--enforce) option to
be sure that the host capacity is not overcommitted.
Troubleshooting:
boot: Forces the hypervisor boot action of a VM stuck in UNKNOWN or BOOT state.
recover: If the VM is stuck in any other state (or the boot operation does not work), you can recover the
VM by simulating the failure or success of the missing action. You have to check the VM state on the host to
decide if the missing action was successful or not.
Disk Snapshotting
You can take a snapshot of a VM disk to preserve or backup its state at a given point of time. There are two types of
disk snapshots in OpenNebula:
Deferred snapshots, changes to a disk will be saved as a new Image in the associated datastore when the VM
is shutdown. The new image will be locked until the VM is properly shut down and the disk transferred from the host
to the datastore.
Live snapshots, just as the deferred snapshots, but the disk is copied to the datastore the moment the operation
is triggered. Therefore, you must guarantee that the disk is in a consistent state during the copy operation (e.g.
by unmounting the disk from the VM). While the disk is copied to the datastore the VM will be in the HOTPLUG
state.
The onevm disk-snapshot command can be run while the VM is RUNNING, POWEROFF or SUSPENDED.
See the Image guide for specic examples of the disk-snapshot command.
Disk Hotplugging
New disks can be hot-plugged to running VMs with the onevm disk-attach and disk-detach commands. For
example, to attach to a running VM the Image named storage:
$ onevm disk-attach one-5 --image storage
To detach a disk from a running VM, find the disk ID of the Image you want to detach using the onevm show
command, and then simply execute onevm disk-detach vm_id disk_id:
$ onevm show one-5
...
DISK=[
DISK_ID="1",
...
]
...
$ onevm disk-detach one-5 1
NIC Hotplugging
You can also hotplug network interfaces to a RUNNING VM. Simply specify the network that the new interface
should be attached to, for example:
$ onevm show 2
VIRTUAL MACHINE 2 INFORMATION
ID : 2
NAME : centos-server
USER : ruben
GROUP : oneadmin
STATE : ACTIVE
LCM_STATE : RUNNING
RESCHED : No
HOST : cloud01
...
VM NICS
ID NETWORK VLAN BRIDGE IP MAC
0 net_172 no vbr0 172.16.0.201 02:00:ac:10:0
VIRTUAL MACHINE HISTORY
SEQ HOST REASON START TIME PROLOG_TIME
0 cloud01 none 03/07 11:37:40 0d 00h02m14s 0d 00h00m00s
...
$ onevm attachnic 2 --network net_172
After the operation you should see two NICs 0 and 1:
$ onevm show 2
VIRTUAL MACHINE 2 INFORMATION
ID : 2
NAME : centos-server
USER : ruben
GROUP : oneadmin
...
VM NICS
ID NETWORK VLAN BRIDGE IP MAC
0 net_172 no vbr0 172.16.0.201 02:00:ac:10:00:c9
fe80::400:acff:fe10:c9
1 net_172 no vbr0 172.16.0.202 02:00:ac:10:00:ca
fe80::400:acff:fe10:ca
...
Also, you can detach a NIC by its ID. If you want to detach interface 1 (MAC=02:00:ac:10:00:ca), just:
> onevm detachnic 2 1
Snapshotting
You can create, delete and restore snapshots for running VMs. A snapshot will contain the current disks and memory
state.
Warning: The snapshots will only be available during the RUNNING state. If the state changes (stop, migrate,
etc...) the snapshots will be lost.
$ onevm snapshot-create 4 "just in case"
$ onevm show 4
...
SNAPSHOTS
ID TIME NAME HYPERVISOR_ID
0 02/21 16:05 just in case onesnap-0
$ onevm snapshot-revert 4 0 --verbose
VM 4: snapshot reverted
Please take into consideration the following limitations:
The snapshots are lost if any life-cycle operation is performed, e.g. a suspend, migrate, delete request.
KVM: Snapshots are only available if all the VM disks use the qcow2 driver.
VMware: the snapshots will persist in the hypervisor after any life-cycle operation is performed, but they will
not be available to be used with OpenNebula.
Xen: does not support snapshotting
Resizing a VM
You may resize the capacity assigned to a Virtual Machine in terms of the virtual CPUs, memory and CPU allocated.
VM resizing can be done when the VM is not ACTIVE, and so in any of the following states: PENDING, HOLD,
FAILED and especially POWEROFF.
If you have created a Virtual Machine and you need more resources, the following procedure is recommended:
Perform any operation needed to prepare your Virtual Machine for shutting down, e.g. you may want to manually
stop some services.
Poweroff the Virtual Machine
Resize the VM
Resume the Virtual Machine using the new capacity
Note that using this procedure the VM will preserve any resource assigned by OpenNebula (e.g. IP leases).
The following is an example of the previous procedure from the command line (the Sunstone equivalent is
straightforward):
> onevm poweroff web_vm
> onevm resize web_vm --memory 2G --vcpu 2
> onevm resume web_vm
The same procedure can be followed from Sunstone.
Scheduling Actions
Most of the onevm commands accept the --schedule option, allowing users to delay the actions until the given date
and time.
Here is a usage example:
$ onevm suspend 0 --schedule "09/20"
VM 0: suspend scheduled at 2013-09-20 00:00:00 +0200
$ onevm resume 0 --schedule "09/23 14:15"
VM 0: resume scheduled at 2013-09-23 14:15:00 +0200
$ onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID : 0
NAME : one-0
[...]
SCHEDULED ACTIONS
ID ACTION SCHEDULED DONE MESSAGE
0 suspend 09/20 00:00 -
1 resume 09/23 14:15 -
These actions can be deleted or edited using the onevm update command. The time attributes use Unix time internally.
$ onevm update 0
SCHED_ACTION=[
ACTION="suspend",
ID="0",
TIME="1379628000" ]
SCHED_ACTION=[
ACTION="resume",
ID="1",
TIME="1379938500" ]
These are the commands that can be scheduled:
shutdown
shutdown --hard
undeploy
undeploy --hard
hold
release
stop
suspend
resume
boot
delete
delete-recreate
reboot
reboot --hard
poweroff
poweroff --hard
snapshot-create
User Defined Data
Custom tags can be associated with a VM to store metadata related to this specific VM instance. To add custom attributes
simply use the onevm update command.
$ onevm show 0
...
VIRTUAL MACHINE TEMPLATE
...
VMID="0"
$ onevm update 0
ROOT_GENERATED_PASSWORD="1234"
~
~
$ onevm show 0
...
VIRTUAL MACHINE TEMPLATE
...
VMID="0"
USER TEMPLATE
ROOT_GENERATED_PASSWORD="1234"
Manage VM Permissions
OpenNebula comes with an advanced ACL rules permission mechanism intended for administrators, but each VM
object also has implicit permissions that can be managed by the VM owner. To share a VM instance with other users,
allowing them to list and show its information, use the onevm chmod command:
$ onevm show 0
...
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
$ onevm chmod 0 640
$ onevm show 0
...
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---
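Each octal digit encodes the USE (4), MANAGE (2) and ADMIN (1) rights for the owner, group and others
respectively, so 640 grants use and manage to the owner and use to the group, as shown above.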
Administrators can also change the VM's group and owner with the chgrp and chown commands.
1.5.3 Sunstone
You can manage your virtual machines using the onevm command or Sunstone.
In Sunstone, you can easily instantiate currently defined templates by clicking New on the Virtual Machines tab, and
manage the life cycle of the new instances.
Using the noVNC Console
In order to use this feature, make sure that:
The VM template has a GRAPHICS section defined, with its TYPE attribute set to VNC (see the example below).
The specified VNC port on the host on which the VM is deployed is accessible from the Sunstone server host.
The VM is in the RUNNING state.
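For reference, a minimal GRAPHICS section in the VM template that satisfies the first requirement would be (see the
I/O Devices section of the template reference for all sub-attributes):
GRAPHICS = [
TYPE = "vnc",
LISTEN = "0.0.0.0" ]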
If the VM supports VNC and is running, then the VNC icon on the Virtual Machines view should be visible and
clickable:
When clicking the VNC icon, the process of starting a session begins:
A request is made and, if a VNC session is possible, the Sunstone server will add the VM Host to the list of allowed
VNC session targets and create a random token associated with it.
The server responds with the session token, then a noVNC dialog pops up.
The VNC console embedded in this dialog will try to connect to the proxy either using websockets (default)
or emulating them using Flash. Only connections providing the right token will be successful. Websockets
are supported from Firefox 4.0 (manual activation required in this version) and Chrome. The token expires and
cannot be reused.
In order to close the VNC session just close the console dialog.
Note: From Sunstone 3.8, a single instance of the VNC proxy is launched when Sunstone server starts. This instance
will listen on a single port and proxy all connections from there.
1.5.4 Information for Developers and Integrators
Although the default way to create a VM instance is to register a Template and then instantiate it, VMs can be
created directly from a template file using the onevm create command.
When a VM reaches the done state, it disappears from the onevm list output, but the VM is still in the
database and can be retrieved with the onevm show command.
OpenNebula comes with an accounting tool that reports resource usage data (see the example after this list).
The monitoring information, shown with nice graphs in Sunstone, can be retrieved using the XML-RPC methods
one.vm.monitoring and one.vmpool.monitoring.
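As a minimal illustration of the accounting tool, the records can be listed from the CLI (the command accepts
filtering options; see oneacct --help, as the exact flags may vary by version):
$ oneacct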
CHAPTER
TWO
VIRTUAL MACHINE SETUP
2.1 Contextualization Overview
OpenNebula provides different methods to pass information to a newly created Virtual Machine. This information can
be the network configuration of the VM, user credentials, init scripts and free-form data.
Basic Contextualization: If you only want to configure networking and root SSH keys, read this guide.
Advanced Contextualization: For additional topics in contextualization, like adding custom init scripts and
variables, also read this guide.
Cloud-init: To know how to use the cloud-init functionality with OpenNebula check this guide.
Windows Contextualization: Contextualization guide specific to Windows guests, from provisioning to
contextualization.
2.2 Adding Content to Your Cloud
Once you have set up your OpenNebula cloud, the infrastructure (clusters, hosts, virtual networks and
datastores) will be ready, but you need to add content to it for your users. This basically means two different things:
Add base disk images with OS installations of your choice, including any software package of interest.
Define virtual servers in the form of VM Templates. We recommend that VM definitions are made by the
admins, as they may require fine or advanced tuning. For example you may want to define a LAMP server with
the capacity to be instantiated in a remote AWS cloud.
When you have basic virtual server definitions, the users of your cloud can use them to easily provision VMs, adjusting
basic parameters like capacity or network connectivity.
There are three basic methods to bootstrap the contents of your cloud, namely:
External Images. If you already have disk images in any supported format (raw, qcow2, vmdk...) you can just
add them to a datastore. Alternatively you can use any virtualization tool (e.g. virt-manager) to install an image
and then add it to an OpenNebula datastore.
Install within OpenNebula. You can also use OpenNebula to prepare the images for your cloud. The process
will be as follows (see the CLI sketch after this list):
Add the installation medium to an OpenNebula datastore. Usually it will be an OS installation CD-ROM/DVD.
Create a DATABLOCK image of the desired capacity to install the OS. Once created, change its type to OS
and make it persistent.
Create a new template using the previous two images. Make sure to set the OS/BOOT parameter to cdrom
and enable the VNC console.
Instantiate the template and install the OS and any additional software
Once you are done, shutdown the VM
Use the OpenNebula Marketplace. Go to the marketplace tab in Sunstone, and simply pick a disk image with
the OS and Hypervisor of your choice.
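As a hedged sketch, the installation-medium and datablock steps of the second method could be done from the CLI
like this (the names, size, filesystem type and datastore are illustrative; check oneimage --help for the exact
options in your version):
$ oneimage create --name "centos-dvd" --path /tmp/CentOS-6.5.iso --type CDROM --datastore default
$ oneimage create --name "centos-disk" --type DATABLOCK --size 10240 --fstype raw --datastore default
$ oneimage chtype "centos-disk" OS
$ oneimage persistent "centos-disk"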
Once the images are ready, just create VM templates with the relevant configuration attributes, including default
capacity, networking or any other preset needed by your infrastructure.
You are done; make sure that your cloud users can access the images and templates you have just created.
2.3 Basic Contextualization
This guide shows how to automatically configure networking in the initialization process of the VM. Below are the
instructions to contextualize your images to configure the network. For more in-depth information, and for how to use
contextualization for other purposes, head to the Advanced Contextualization guide.
2.3.1 Preparing the Virtual Machine Image
To enable Virtual Machine images to use the contextualization information written by OpenNebula, we need to add
to them a series of scripts that will trigger the contextualization.
You can use the images available in the Marketplace, which are already prepared, or prepare your own images. To make
your life easier you can use a couple of Linux packages that do the work for you.
The contextualization package will also mount any partition labeled swap as swap. OpenNebula sets this label for
volatile swap disks.
Start an image (or finish its installation)
Install the context packages with one of these methods:
Install from our repositories: package one-context in Ubuntu/Debian or opennebula-context in
CentOS/RedHat. Instructions to add the repository are in the installation guide.
Download and install the package for your distribution:
DEB: Compatible with Ubuntu 11.10 to 14.04 and Debian Squeeze
RPM: Compatible with CentOS and RHEL 6.x
Shutdown the VM
2.3.2 Preparing the Template
We also need to add the gateway information to the Virtual Networks that need it. This is an example of a Virtual
Network with gateway information:
NAME=public
NETWORK_ADDRESS=80.0.0.0
NETWORK_MASK=255.255.255.0
GATEWAY=80.0.0.1
DNS="8.8.8.8 8.8.4.4"
And then in the VM template contextualization we set NETWORK to yes:
CONTEXT=[
NETWORK=YES ]
When the template is instantiated, those parameters for eth0 are automatically set in the VM as:
CONTEXT=[
DISK_ID="0",
ETH0_DNS="8.8.8.8 8.8.4.4",
ETH0_GATEWAY="80.0.0.1",
ETH0_IP="80.0.0.2",
ETH0_MASK="255.255.255.0",
ETH0_NETWORK="80.0.0.0",
NETWORK="YES",
TARGET="hda" ]
If you add more than one interface to a Virtual Machine, you will end up with the same parameters for ETH1,
ETH2, etc.
You can also add the SSH_PUBLIC_KEY parameter to the context to add an SSH public key to the authorized_keys
file of root.
CONTEXT=[
SSH_PUBLIC_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+vPFFwem49zcepQxsyO51YMSpuywwt6GazgpJe9vQzw3BA97tFrU5zABDLV6GHnI0/ARqsXRX1mWGwOlZkVBl4yhGSK9xSnzBPXqmKdb4TluVgV5u7R5ZjmVGjCYyYVaK7BtIEx3ZQGMbLQ6Av3IFND+EEzf04NeSJYcg9LA3lKIueLHNED1x/6e7uoNW2/VvNhKK5Ajt56yupRS9mnWTjZUM9cTvlhp/Ss1T10iQ51XEVTQfS2VM2y0ZLdfY5nivIIvj5ooGLaYfv8L4VY57zTKBafyWyRZk1PugMdGHxycEh8ek8VZ3wUgltnK+US3rYUTkX9jj+Km/VGhDRehp user@host"
]
If you want to know the contextualization options in more depth, head to the Advanced Contextualization guide.
2.4 Advanced Contextualization
There are two contextualization mechanisms available in OpenNebula: the automatic IP assignment, and a more
generic way to pass arbitrary files and configuration parameters. You can use either of them individually, or both.
You can use ready-made packages that install context scripts and prepare the udev configuration in your appliances. This
is described in the Contextualization Packages for VM Images section.
2.4.1 Automatic IP Assignment
With OpenNebula you can derive the IP address assigned to the VM from the MAC address using the
MAC_PREFIX:IP rule. In order to achieve this we provide context scripts for Debian, Ubuntu, CentOS and
openSUSE based systems. These scripts can be easily adapted for other distributions; check dev.opennebula.org.
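For example, with a MAC prefix of 02:00, a NIC with MAC address 02:00:c0:a8:00:05 maps to the IP 192.168.0.5,
since the last four bytes of the MAC (c0:a8:00:05) are the hexadecimal encoding of the IPv4 address.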
To configure the Virtual Machine follow these steps:
Warning: These actions are to configure the VM; the commands refer to the VM's root file system.
Copy the script $ONE_SRC_CODE_PATH/share/scripts/vmcontext.sh into the /etc/init.d directory
in the VM root file system.
Execute the script at boot time before starting any network service; usually runlevel 2 should work.
$ ln /etc/init.d/vmcontext.sh /etc/rc2.d/S01vmcontext.sh
Having done so, whenever the VM boots it will execute this script, which in turn will scan the available
network interfaces, extract their MAC addresses, make the MAC-to-IP conversion and construct an
/etc/network/interfaces file that ensures the correct IP assignment to the corresponding interface.
2.4.2 Generic Contextualization
The method we provide to pass configuration parameters to a newly started virtual machine is an ISO image
(following the OVF recommendation). This method is network agnostic, so it can also be used to configure network
interfaces. In the VM description file you can specify the contents of the ISO file (files and directories), tell
OpenNebula the device where the ISO image will be accessible, and specify the configuration parameters that will be
written to a file for later use inside the virtual machine.
In this example we see a Virtual Machine with two associated disks. The Disk Image holds the filesystem the
Operating System will run from. The ISO image holds the contextualization for that VM:
context.sh: file that contains configuration variables, filled by OpenNebula with the parameters specified in
the VM description file
init.sh: script called by the VM at start that will configure specific services for this VM instance
certificates: directory that contains certificates for some service
service.conf: service configuration
Warning: This is just an example of what a contextualization image may look like. Only context.sh is
included by default. You have to specify the values that will be written inside context.sh and the files that will
be included in the image.
Warning: To prevent regular users from copying system/secure files, the FILES attribute within CONTEXT is only
allowed to OpenNebula users in the oneadmin group. FILES_DS can be used to include arbitrary files from
Files Datastores.
Defining Context
In the VM description file you can tell OpenNebula to create a contextualization image and fill it with values using
the CONTEXT parameter. For example:
CONTEXT = [
hostname = "MAINHOST",
ip_private = "$NIC[IP, NETWORK=\"public net\"]",
dns = "$NETWORK[DNS, NETWORK_ID=0]",
root_pass = "$IMAGE[ROOT_PASS, IMAGE_ID=3]",
ip_gen = "10.0.0.$VMID",
files_ds = "$FILE[IMAGE=\"certificate\"] $FILE[IMAGE=\"server_license\"]"
]
Variables inside the CONTEXT section will be added to the context.sh file inside the contextualization image. These
variables can be specified in three different ways:
Hardcoded variables
hostname = "MAINHOST"
Using template variables
$<template_variable>: any single value variable of the VM template, like for example:
ip_gen = "10.0.0.$VMID"
$<template_variable>[<attribute>]: Any single value contained in a multiple value variable in the VM
template, like for example:
ip_private = $NIC[IP]
$<template_variable>[<attribute>, <attribute2>=<value2>]: Any single value contained in a
multiple value variable in the VM template, setting one attribute to discern between multiple variables called the same
way, like for example:
ip_public = "$NIC[IP, NETWORK=\"Public\"]"
You can use any of the attributes defined in the variable, NIC in the previous example.
Using Virtual Network template variables
$NETWORK[<vnet_attribute>, <NETWORK_ID|NETWORK>=<vnet_id|vnet_name>]: Any single
value variable in the Virtual Network template, like for example:
dns = "$NETWORK[DNS, NETWORK_ID=3]"
Using Image template variables
$IMAGE[<image_attribute>, <IMAGE_ID|IMAGE>=<img_id|img_name>]: Any single value vari-
able in the Image template, like for example:
root = "$IMAGE[ROOT_PASS, IMAGE_ID=0]"
Note that the image MUST be in use by one of the DISKs defined in the template. The image_attribute can be TEMPLATE to include the whole image template in XML (base64 encoded).
Using User template variables
$USER[<user_attribute>]: Any single value variable in the user (owner of the VM) template, like for example:
ssh_key = "$USER[SSH_KEY]"
The user_attribute can be TEMPLATE to include the whole user template in XML (base64 encoded).
Pre-defined variables; apart from those defined in the template you can use:
$UID, the uid of the VM owner
$UNAME, the VM owner user name
$GID, the id of the VM group
$GNAME, the VM group name
$TEMPLATE, the whole template in XML format and encoded in base64
The file generated will be something like this:
# Context variables generated by OpenNebula
hostname="MAINHOST"
ip_private="192.168.0.5"
dns="192.168.4.9"
ip_gen="10.0.0.85"
files_ds="/home/cloud/var/datastores/2/3fae86a862b7539b41de350e8fa56100 /home/cloud/var/datastores/2/40bf97b973c864ac52ef461f90b67211"
target="sdb"
root="13.0"
Some of the variables have special meanings, but none of them are mandatory:
Attribute: Description
files_ds: Files that will be included in the contextualization image. Each file must be stored in a FILE_DS
Datastore and must be of type CONTEXT.
target: Device where the contextualization image will be available to the VM instance. Please note that the
proper device mapping may depend on the guest OS, e.g. Ubuntu VMs should use hd* as the target device.
files: Files and directories that will be included in the contextualization image. Specified as absolute
paths; by default this can be used only by oneadmin.
init_scripts: If you want the VM to execute a script that is not called init.sh (or if you want to call more than
one script), this list contains the scripts to run and their order. E.g. "init.sh users.sh mysql.sh" will force the
VM to execute init.sh, then users.sh and lastly mysql.sh at boot time.
TOKEN: YES to create a token.txt file for OneGate monitoring.
NETWORK: YES to automatically fill the networking parameters for each NIC; used by the Contextualization
packages.
SET_HOSTNAME: This parameter value will be the hostname of the VM.
DNS_HOSTNAME: YES to set the VM hostname to the reverse DNS name (from the first IP).
Warning: A default target attribute is generated automatically by OpenNebula, based on the default device prefix
set in oned.conf.
Contextualization Packages for VM Images
The VM should be prepared to use the contextualization image. First of all it needs to mount the contextualization
image somewhere at boot time. A script that executes after boot will also be useful to make use of the information
provided.
The context.sh file is compatible with bash syntax, so you can easily source it inside a shell script to get the
variables that it contains.
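A minimal sketch of such a script (the mount point is illustrative; the packages described below mount the context
image from /dev/cdrom):
#!/bin/bash
# mount the context CD-ROM and source the variables it carries
mount -t iso9660 /dev/cdrom /mnt
. /mnt/context.sh
# use any of the context variables, e.g. install the SSH key passed
# through the CONTEXT section, if present
if [ -n "$SSH_PUBLIC_KEY" ]; then
    mkdir -p /root/.ssh
    echo "$SSH_PUBLIC_KEY" >> /root/.ssh/authorized_keys
fi
umount /mnt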
Contextualization packages are available for several distributions, so you can prepare them to work with OpenNebula
without much effort. These are the changes they make to your VM:
Disables udev net and cd persistent rules
Deletes udev net and cd persistent rules
Unconfigures the network
Adds OpenNebula contextualization scripts to startup
Warning: These packages are destructive. The networking configuration will be deleted. Make sure to use
this script on copies of your images.
Instructions on how to install the contextualization packages are located in the contextualization overview
documentation.
After the installation of these packages, the images will configure the network on start using the MAC address
generated by OpenNebula. They will also try to mount the CD-ROM context image from /dev/cdrom, and if init.sh
is found it will be executed.
Network Configuration
These packages also install a generic network configuration script that gets the network information, and also the
root SSH key, from the contextualization parameters. This way we don't have to supply an init.sh script to do this
work. The parameters that these scripts use are as follows:
Attribute: Description
<DEV>_MAC: MAC address of the interface
<DEV>_IP: IP assigned to the interface
<DEV>_NETWORK: Interface network
<DEV>_MASK: Interface net mask
<DEV>_GATEWAY: Interface gateway
<DEV>_DNS: DNS servers for the network
<DEV>_SEARCH_DOMAIN: DNS domain search path
<DEV>_IPV6: Global IPv6 assigned to the interface
<DEV>_GATEWAY6: IPv6 gateway for this interface
<DEV>_CONTEXT_FORCE_IPV4: Configure IPv4 even if IPv6 values are present
DNS: main DNS server for the machine
SSH_PUBLIC_KEY: public SSH key added to root's authorized_keys
We can have the networks defined with those parameters and use them to configure the interfaces. Given these two
networks (excerpt):
Public:
NAME = public
TYPE = RANGED
NETWORK_ADDRESS = 130.10.0.0
NETWORK_MASK = 255.255.255.0
GATEWAY = 130.10.0.1
DNS = "8.8.8.8 8.8.4.4"
Private:
NAME = private
TYPE = RANGED
NETWORK_ADDRESS = 10.0.0.0
NETWORK_MASK = 255.255.0.0
We can configure both networks by adding this context to the VM template:
CONTEXT=[
NETWORK="YES",
SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
NIC=[
NETWORK="public" ]
NIC=[
NETWORK="private" ]
Please note that SSH_PUBLIC_KEY was added as a user attribute; this way the templates can be generic.
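The attribute can be added to your user template, for instance with the oneuser CLI (a sketch; the key string is
abbreviated and the user ID is a placeholder):
$ oneuser update <your_user_id>
SSH_PUBLIC_KEY="ssh-rsa AAAA... user@host"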
When this template is instantiated, the context section will contain all the relevant networking attributes:
CONTEXT=[
DISK_ID="0",
ETH0_DNS="8.8.8.8 8.8.4.4",
ETH0_GATEWAY="130.10.0.1",
ETH0_IP="130.10.0.1",
ETH0_MASK="255.255.255.0",
ETH0_NETWORK="130.10.0.0",
ETH1_IP="10.0.0.1",
ETH1_MASK="255.255.0.0",
ETH1_NETWORK="10.0.0.0",
NETWORK="YES",
SSH_PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+vPFFwem49zcepQxsyO51YMSpuywwt6GazgpJe9vQzw3BA97tFrU5zABDLV6GHnI0/ARqsXRX1mWGwOlZkVBl4yhGSK9xSnzBPXqmKdb4TluVgV5u7R5ZjmVGjCYyYVaK7BtIEx3ZQGMbLQ6Av3IFND+EEzf04NeSJYcg9LA3lKIueLHNED1x/6e7uoNW2/VvNhKK5Ajt56yupRS9mnWTjZUM9cTvlhp/Ss1T10iQ51XEVTQfS2VM2y0ZLdfY5nivIIvj5ooGLaYfv8L4VY57zTKBafyWyRZk1PugMdGHxycEh8ek8VZ3wUgltnK+US3rYUTkX9jj+Km/VGhDRehp user@host"
TARGET="hda" ]
2.4.3 Generating Custom Contextualization Packages
Network configuration is done by a script located in /etc/one-context.d/00-network. Any file located in that
directory will be executed on start, in alphabetical order. This way we can add any script to configure or start
processes on boot. For example, we can have a script that populates the authorized_keys file using a variable in
context.sh. Remember that those variables are exported to the environment and are easily accessible by the scripts:
#!/bin/bash
echo "$SSH_PUBLIC_KEY" > /root/.ssh/authorized_keys
OpenNebula source code comes with the scripts and files needed to generate contextualization packages. This way
you can also generate custom packages, tweaking the scripts that will go inside your images or adding new scripts that
will perform other duties.
The files are located in share/scripts/context-packages:
base: files that will be in all the packages. Right now it contains empty udev rules and the init script that will
be executed on startup.
base_<type>: files specific to Linux distributions. It contains the contextualization scripts for the network
and comes in rpm and deb flavors. You can add your own contextualization scripts here and they will be added
to the package when you run the generation script.
generate.sh: the script that generates the packages.
postinstall: this script will be executed after the package installation and will clean the network and udev
configuration. It will also add the init script to the services started on boot.
To generate the packages you will need:
Ruby >= 1.8.7
gem fpm
dpkg utils for deb package creation
rpm utils for rpm package creation
You can also pass some parameters to the generation script using environment variables. For example,
to generate an rpm package you would execute:
$ PACKAGE_TYPE=rpm ./generate.sh
These are the default values of the parameters, but you can change any of them the same way we did for
PACKAGE_TYPE:
VERSION=4.4.0
MAINTAINER=C12G Labs <support@c12g.com>
LICENSE=Apache
PACKAGE_NAME=one-context
VENDOR=C12G Labs
DESCRIPTION="
This package prepares a VM image for OpenNebula:
  * Disables udev net and cd persistent rules
  * Deletes udev net and cd persistent rules
  * Unconfigures the network
  * Adds OpenNebula contextualization scripts to startup

To get support use the OpenNebula mailing list:
  http://opennebula.org/community:mailinglists
"
PACKAGE_TYPE=deb
URL=http://opennebula.org
For more information check the README.md file in that directory.
2.5 Windows Contextualization
This guide describes the standard process of provisioning and contextualizing a Windows guest.
Note: This guide has been tested for Windows 2008 R2, however it should work with Windows systems >= Windows
7.
2.5.1 Provisioning
Installation
Provisioning a Windows VM is performed the standard way in OpenNebula:
1. Register the Installation media (typically a DVD) into a Datastore
2. Create an empty datablock with an appropriate size, at least 10GB. Change its type to OS. If you are using a
qcow2 image, don't forget to add DRIVER=qcow2 and FORMAT=qcow2.
3. Create a template that boots from CDROM, enables VNC, and references the installation media and the Image
created in step 2 (see the sketch after this list).
4. Follow the typical installation procedure over VNC.
5. Perform a deferred disk-snapshot of the OS disk, which will be saved upon shutdown.
6. Shutdown the VM.
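A sketch of the template used in step 3 could look like this (the image names and capacity are illustrative):
NAME = "windows-install"
MEMORY = 2048
CPU = 1
DISK = [ IMAGE = "windows-dvd" ]
DISK = [ IMAGE = "windows-disk" ]
OS = [ BOOT = "cdrom" ]
GRAPHICS = [ TYPE = "vnc", LISTEN = "0.0.0.0" ]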
The resulting image will boot in any OpenNebula cloud that uses KVM or VMware, with any storage subsystem.
However, it hasn't been contextualized, so it will only obtain its IP via DHCP. To apply contextualization please
follow the Contextualization section.
Sysprep
If you are adapting a pre-existing Windows VM to run in an OpenNebula environment, and you want to remove all
the pre-existing sensitive data in order to be able to clone and deliver it to third-party users, it's highly recommended
to run Sysprep on the image. To do so simply run c:\Windows\System32\sysprep\sysprep.exe, then select
OOBE and Generalize.
2.5.2 Contextualization
Enabling Contextualization
The official addon-opennebula-context provides all the necessary files to run the contextualization in Windows 2008
R2.
The contextualization procedure is as follows:
1. Download startup.vbs to the Windows VM (you can also send it via Context files) and write it to a path
under C:\.
2. Open the Local Group Policy dialog by running gpedit.msc. Under Computer Configuration -> Windows
Settings -> Scripts -> startup (right click), browse to the startup.vbs file and enable it as a startup script.
Save the image by performing a deferred disk-snapshot of the OS disk, which will be saved upon shutdown.
To use the Windows contextualization script you need to use the previously prepared Windows image and include
the context.ps1 script (available here) in the CONTEXT files.
Warning: The context.ps1 name matters. If changed, the script will not run.
Features
The context.ps1 script will:
Add a new user (using USERNAME and PASSWORD).
Rename the server (using SET_HOSTNAME).
Enable Remote Desktop.
Enable Ping.
Configure the Network, using the automatically generated networking variables in the CONTEXT CD-ROM.
Run arbitrary PowerShell scripts available in the CONTEXT CD-ROM and referenced by the INIT_SCRIPTS
variable.
Variables
The contextualization variables supported by the Windows context script are very similar to the ones in Linux, except
for a few Windows-specific exceptions.
This is the list of supported variables:
<DEV>_MAC: MAC address of the interface.
<DEV>_IP: IP assigned to the interface.
<DEV>_NETWORK: Interface network.
<DEV>_MASK: Interface net mask.
<DEV>_GATEWAY: Interface gateway.
<DEV>_DNS: DNS servers for the network.
<DEV>_SEARCH_DOMAIN: DNS domain search path.
DNS: main DNS server for the machine.
SET_HOSTNAME: Set the hostname of the machine.
INIT_SCRIPTS: List of PowerShell scripts to be executed. Must be available in the CONTEXT CD-ROM.
USERNAME: Create a new user.
PASSWORD: Password for the new user.
Customization
The context.ps1 script has been designed to be easily hacked and modied. Perform any changes to that script
and use it locally.
2.6 Cloud-init
Since version 0.7.3 of the cloud-init packages, the OpenNebula context CD is supported. cloud-init is able to get and
configure networking, the hostname, the SSH key for root, and cloud-init user data. Here are the options:
Option: Description
standard network options: OpenNebula network parameters added to the context by NETWORK=yes
HOSTNAME: VM hostname
SSH_PUBLIC_KEY: SSH public key added to root's authorized keys
USER_DATA: Specific user data for cloud-init
DSMODE: Can be set to local, net or disabled to change the cloud-init datasource mode
You have more information on how to use it at the cloud-init documentation page.
There are plenty of examples on what can go in the USER_DATA string at the cloud-init examples page.
Warning: The current version of cloud-init configures the network before running the cloud-init configuration. This
makes the network configuration unreliable. Until a version that fixes this is released, you can add the OpenNebula
context packages or this user data to reboot the machine so the network is properly configured.
CONTEXT=[
USER_DATA="#cloud-config
power_state:
mode: reboot
" ]
2.6.1 Platform Specific Notes
CentOS
Works correctly for cloud-init >= 0.7.4.
Ubuntu/Debian
To make cloud-init configure the network correctly, the network needs to be down so that the network configuration
step can do its work:
CONTEXT=[
NETWORK="YES",
SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]",
USER_DATA="#cloud-config
bootcmd:
- ifdown -a
runcmd:
- curl http://10.0.1.1:8999/I_am_alive
write_files:
- encoding: b64
content: RG9lcyBpdCB3b3JrPwo=
owner: root:root
path: /etc/test_file
permissions: 0644
packages:
- ruby2.0" ]
CHAPTER
THREE
OPENNEBULA MARKETPLACE
3.1 Interacting with the OpenNebula Marketplace
The OpenNebula Marketplace is a catalog of third-party virtual appliances ready to run in OpenNebula environments.
The OpenNebula Marketplace only contains appliance metadata. The images and files required by an appliance are
not stored in the Marketplace, only links to them.
3.1.1 Using Sunstone to Interact with the OpenNebula Marketplace
Since release 3.6, Sunstone includes a tab that allows OpenNebula users to interact with the OpenNebula
Marketplace:
If you want to import a new appliance into your local infrastructure, you just have to select an image and click the
import button. A new dialog box will prompt you to create a new image.
After that you will be able to use that image in a template in order to create a new instance.
3.1.2 Using the CLI to Interact with the OpenNebula Marketplace
You can also use the CLI to interact with the OpenNebula Marketplace:
List appliances:
$ onemarket list --server http://marketplace.c12g.com
ID NAME PUBLISHER
4fc76a938fb81d3517000001 Ubuntu Server 12.04 LTS (Precise Pangolin) OpenNebula.org
4fc76a938fb81d3517000002 CentOS 6.2 OpenNebula.org
4fc76a938fb81d3517000003 ttylinux OpenNebula.org
4fc76a938fb81d3517000004 OpenNebula Sandbox VMware 3.4.1 C12G Labs
4fcf5d0a8fb81d1bb8000001 OpenNebula Sandbox KVM 3.4.1 C12G Labs
Show an appliance:
$ onemarket show 4fc76a938fb81d3517000004 --server http://marketplace.c12g.com
{
"_id": {"$oid": "4fc76a938fb81d3517000004"},
"catalog": "public",
"description": "This image is meant to be run on a ESX hypervisor, and comes with a preconfigured OpenNebula 3.4.1, ready to manage a ESX farm. Several resources are created within OpenNebula (images, virtual networks, VM templates) to build a pilot cloud under 30 minutes.\n\nMore information can be found on the <a href=\"http://opennebula.org/cloud:sandbox:vmware\">OpenNebula Sandbox: VMware-based OpenNebula Cloud guide</a>.\n\nThe login information for this VM is\n\nlogin: root\npassword: opennebula",
"downloads": 90,
"files": [
{
"type": "OS",
"hypervisor": "ESX",
"format": "VMDK",
"size": 693729120,
"compression": "gzip",
"os-id": "CentOS",
"os-release": "6.2",
"os-arch": "x86_64",
"checksum": {
"md5": "2dba351902bffb4716168f3693e932e2"
}
}
],
"logo": "/img/logos/view_dashboard.png",
"name": "OpenNebula Sandbox VMware 3.4.1",
"opennebula_template": "",
"opennebula_version": "",
"publisher": "C12G Labs",
"tags": [
"linux",
"vmware",
"sandbox",
"esx",
"frontend"
],
"links": {
"download": {
"href": "http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000004/download"
}
}
}
Create a new image: you can use the download link as the PATH in a new Image template to create an Image.
$ onemarket show 4fc76a938fb81d3517000004 --server http://marketplace.c12g.com
{
...
"links": {
"download": {
"href": "http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000004/download"
}
}
}
$ cat marketplace_image.one
NAME = "OpenNebula Sandbox VMware 3.4.1"
PATH = http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000004/download
TYPE = OS
$ oneimage create marketplace_image.one
ID: 1231
3.2 Howto Create Apps for the Marketplace
In this section some general guidelines on creating OpenNebula-compatible images for the marketplace are described.
After that you will find a tutorial showing how to create an Ubuntu 12.04 image ready to be distributed in the
marketplace.
3.2.1 Image Creation Guidelines
Images in the marketplace are just direct OS installations, prepared to run with OpenNebula. There are two basic
things you need to do (apart from the standard OS installation):
Add the OpenNebula contextualization script, so the image is able to receive and use context information.
Disable udev network rule writing. Images are usually cloned multiple times, using different MAC addresses
each time; you'll need to disable udev to prevent getting a new interface each time.
Both steps can be automated in some distributions (Debian, Ubuntu, CentOS and RHEL) using preparation
packages. You can find the packages and more information about them in the Contextualization Packages for VM
Images section.
Add OpenNebula Contextualization Script
The contextualization scripts configure the VM on startup. You can find the scripts for different distributions in the
OpenNebula repository. The method of installation differs between distributions, so refer to the distribution
documentation. Make sure that these scripts are executed before the network is initialized.
You can find more information about contextualization in the Contextualizing Virtual Machines guide.
Disable udev Network Rule Writing
Most Linux distributions search for new devices on start and write configuration for them. This fixes the network
device name for each MAC address. This is bad behavior in VM images, as they will be run with many
different MAC addresses. You need to disable this udev configuration saving and also delete any udev network
rules that may already have been saved.
3.2.2 Tutorial: Preparing an Ubuntu 12.04 Xen for the Marketplace
The installation is based on the Ubuntu documentation.
You will need a machine where Xen is correctly configured, a bridge with an internet connection, and a public IP (or a
private IP with access to a router that can connect to the internet).
First we create an empty disk; in this case it will be 8 GB:
$ dd if=/dev/zero of=ubuntu.img bs=1 count=1 seek=8G
Then we download a netboot kernel and initrd compatible with Xen. We are using a mirror near us, but you can select
one from the Ubuntu mirrors list:
$ wget http://ftp.dat.etsit.upm.es/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/xen/vmlinuz
$ wget http://ftp.dat.etsit.upm.es/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/xen/initrd.gz
Now we can create a file describing the VM where Ubuntu will be installed:
name = "ubuntu"
memory = 256
disk = [file:PATH/ubuntu.img,xvda,w]
vif = [bridge=BRIDGE]
kernel = "PATH/vmlinuz"
ramdisk = "PATH/initrd.gz"
Change PATH to the path where the VM files are located and BRIDGE to the name of the network bridge you are
going to use. After this we can start the VM:
$ sudo xm create ubuntu.xen
To connect to the VM console and proceed with the installation you can use the xm console command:
$ sudo xm console ubuntu
Use the menus to configure your VM. Make sure that you configure the network correctly, as this installation will use
it to download packages.
After the installation is done, the VM will reboot into the installer again. You can exit the console by pressing <CTRL>+<]>.
Now you should shutdown the machine:
$ sudo xm shutdown ubuntu
The system is now installed in the disk image, and we must now start it and configure it so it plays nicely with OpenNebula.
The configuration we are going to do is:
Disable the udev network generation rules and clean any that may have been saved
Add the contextualization scripts
To start the VM we will need a new Xen description file:
name = "ubuntu1204"
memory = 512
disk = [file:PATH/ubuntu.img,xvda,w]
vif = [bridge=BRIDGE]
bootloader = "pygrub"
It is pretty similar to the other one, but notice that we no longer specify the kernel or initrd, and we add the bootloader
option instead. This will make our VM use the kernel and initrd that reside inside our VM image.
We can start it using the same command as before:
$ sudo xm create ubuntu-new.xen
And the console also works the same as before:
$ sudo xm console ubuntu1204
We log in and become root. To disable udev network rule generation, edit the file
/lib/udev/rules.d/75-persistent-net-generator.rules and comment out the line that says:
DRIVERS=="?*", IMPORT{program}="write_net_rules"
Now, to make sure that no network rules are saved, we can empty the rules file:
# echo > /etc/udev/rules.d/70-persistent-net.rules
Copy the contextualization script located in the OpenNebula repository to /etc/init.d and give it execution
permissions. This is the script that will contextualize the VM on start.
Now we modify the file /etc/init/networking.conf and change the line:
pre-start exec mkdir -p /run/network
with
pre-start script
mkdir -p /run/network
/etc/init.d/vmcontext
end script
and also in /etc/init/network-interface.conf we add the line:
/etc/init.d/vmcontext
so it looks similar to:
pre-start script
/etc/init.d/vmcontext
if [ "$INTERFACE" = lo ]; then
# bring this up even if /etc/network/interfaces is broken
ifconfig lo 127.0.0.1 up || true
initctl emit -n net-device-up \
IFACE=lo LOGICAL=lo ADDRFAM=inet METHOD=loopback || true
fi
mkdir -p /run/network
exec ifup --allow auto $INTERFACE
end script
CHAPTER
FOUR
REFERENCES
4.1 Virtual Machine Denition File
A template file consists of a set of attributes that define a Virtual Machine. Using the onetemplate create
command, a template can be registered in OpenNebula to be instantiated later. For compatibility with previous versions,
you can also create a new Virtual Machine directly from a template file, using the onevm create command.
Warning: There are some template attributes that can compromise the security of the system or the security of
other VMs, and they can be used only by users in the oneadmin group. These attributes can be configured in oned.conf;
the default ones are labeled with * in the following tables. See the complete list in the Restricted Attributes section.
4.1.1 Syntax
The syntax of the template file is as follows:
Anything behind the pound or hash sign # is a comment.
Strings are delimited with double quotes "; if a double quote is part of the string it needs to be escaped as \".
Single Attributes are in the form:
NAME=VALUE
Vector Attributes that contain several values can be defined as follows:
NAME=[NAME1=VALUE1,NAME2=VALUE2]
Vector Attributes must contain at least one value.
Attribute names are case insensitive, in fact the names are converted to uppercase internally.
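For instance, a minimal fragment combining both kinds of attributes:
# VM name (single attribute)
NAME = "web-server"
# first disk (vector attribute)
DISK = [ IMAGE_ID = 2, TARGET = "sda" ]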
4.1.2 XML Syntax
Since OpenNebula 3.4, template files can be in XML, with the following syntax:
The root element must be TEMPLATE
Single Attributes are in the form:
<NAME>VALUE</NAME>
Vector Attributes that contain several values can be defined as follows:
<NAME>
<NAME1>VALUE1</NAME1>
<NAME2>VALUE2</NAME2>
</NAME>
A simple example:
<TEMPLATE>
<NAME>test_vm</NAME>
<CPU>2</CPU>
<MEMORY>1024</MEMORY>
<DISK>
<IMAGE_ID>2</IMAGE_ID>
</DISK>
<DISK>
<IMAGE>Data</IMAGE>
<IMAGE_UNAME>oneadmin</IMAGE_UNAME>
</DISK>
</TEMPLATE>
4.1.3 Capacity Section
The following attributes can be defined to specify the capacity of a VM.
Attribute: Description (Mandatory?)
NAME: Name that the VM will get for description purposes. If NAME is not supplied, a name generated by one
will be used, in the form one-<VID>. NOTE: when defining a Template this is the name of the VM Template;
the actual name of the VM will be set when the VM Template is instantiated. Mandatory for Templates;
for VMs it will be set to one-<vmid> if omitted.
MEMORY: Amount of RAM required for the VM, in Megabytes. Mandatory.
CPU: Percentage of CPU divided by 100 required for the Virtual Machine; half a processor is written as 0.5.
This value is used by OpenNebula and the scheduler to guide host overcommitment. Mandatory.
VCPU: Number of virtual CPUs. This value is optional; the default hypervisor behavior is used, usually one
virtual CPU. It will be set to 1 if omitted; this default can be changed in the driver configuration.
Example:
NAME = test-vm
MEMORY = 128
CPU = 1
4.1.4 OS and Boot Options Section
The OS system is defined with the OS vector attribute. The following sub-attributes are supported. The hypervisor
columns state whether the attribute is Optional (O), Mandatory (M), or not supported (-) for that hypervisor:
ARCH: CPU architecture to virtualize. XEN: -; KVM: M (default i686); VMWARE: M (default i686)
MACHINE: libvirt machine type. Check libvirt capabilities for the list of available machine types. XEN: -; KVM: O; VMWARE: -
KERNEL: path to the OS kernel to boot the image in the host. XEN: O, see (*); KVM: O; VMWARE: -
KERNEL_DS: image to be used as kernel (see !!). XEN: O, see (*); KVM: O; VMWARE: -
INITRD: path to the initrd image in the host. XEN: O (for kernel); KVM: O (for kernel); VMWARE: -
INITRD_DS: image to be used as ramdisk (see !!). XEN: O (for kernel); KVM: O (for kernel); VMWARE: -
ROOT: device to be mounted as root. XEN: O (for kernel); KVM: O (for kernel); VMWARE: -
KERNEL_CMD: arguments for the booting kernel. XEN: O (for kernel); KVM: O (for kernel); VMWARE: -
BOOTLOADER: path to the bootloader executable. XEN: O, see (*); KVM: O; VMWARE: -
BOOT: comma-separated list of boot device types, by order of preference (the first device in the list is the first
device used for boot). Possible values: hd, fd, cdrom, network. XEN: O (only HVM); KVM: M; VMWARE: -
(*) If no kernel/initrd or bootloader are specified a Xen HVM will be created.
(!!) Use one of KERNEL_DS or KERNEL (and INITRD or INITRD_DS).
KERNEL_DS and INITRD_DS refer to an image registered in a File Datastore and must be of type KERNEL and
RAMDISK, respectively. The image should be referred to using one of the following:
$FILE[IMAGE=<image name>], to select one of your own files
$FILE[IMAGE=<image name>, <IMAGE_UNAME|IMAGE_UID>=<owner name|owner id>], to
select images owned by other users, by user name or uid
$FILE[IMAGE_ID=<image id>], global file selection
Example, a VM booting from sda1 with kernel /vmlinuz:
OS = [ KERNEL = /vmlinuz,
INITRD = /initrd.img,
ROOT = sda1,
KERNEL_CMD = "ro xencons=tty console=tty1"]
OS = [ KERNEL_DS = "$FILE[IMAGE=\"kernel 3.6\"]",
INITRD_DS = "$FILE[IMAGE=\"initrd 3.6\"]",
ROOT = sda1,
KERNEL_CMD = "ro xencons=tty console=tty1"]
4.1.5 Features Section
This section configures the features enabled for the VM. The hypervisor columns state whether the attribute is
Optional (O) or not supported (-) for that hypervisor:
PAE: Physical address extension mode allows 32-bit guests to address more than 4 GB of memory. XEN HVM: O; KVM: O
ACPI: Useful for power management; for example, with KVM guests it is required for graceful shutdown to work. XEN HVM: O; KVM: O
APIC: Enables the advanced programmable IRQ management. Useful for SMP machines. XEN HVM: O; KVM: O
LOCALTIME: The guest clock will be synchronized to the host's configured timezone when booted. Useful for Windows VMs. XEN HVM: -; KVM: O
HYPERV: Add hyperv extensions to the VM. The options can be configured in the driver configuration, HYPERV_OPTIONS. XEN HVM: -; KVM: O
DEVICE_MODE: Used to change the IO emulator in Xen HVM. XEN HVM: O; KVM: -
FEATURE = [
PAE = "yes",
ACPI = "yes",
APIC = "no",
DEVICE_MODE = "qemu-dm"
]
4.1.6 Disks Section
The disks of a VM are defined with the DISK vector attribute. You can define as many DISK attributes as you need.
There are three types of disks:
Persistent disks, which use an Image registered in a Datastore marked as persistent.
Clone disks, which use an Image registered in a Datastore. Changes to the images will be discarded. A clone disk can
be saved as another image.
Volatile disks, created on-the-fly on the target hosts. These disks are disposed of when the VM is shut down and cannot
be saved as an image.
Persistent and Clone Disks
DISK Sub-Attribute: Description (Xen / KVM / VMware)
IMAGE_ID: ID of the Image to use. Mandatory if no IMAGE is given (Xen, KVM, VMware)
IMAGE: Name of the Image to use. Mandatory if no IMAGE_ID is given (Xen, KVM, VMware)
IMAGE_UID: To select the IMAGE of a given user by her ID. Optional
IMAGE_UNAME: To select the IMAGE of a given user by her NAME. Optional
DEV_PREFIX: Prefix for the emulated device this image will be mounted at. For instance, hd, sd, or vd for
KVM virtio. If omitted, the dev_prefix attribute of the Image will be used. Optional
TARGET: Device to map the image disk to. If set, it will overwrite the default device mapping. Optional
DRIVER: Specific image mapping driver. Xen: Optional, e.g. tap:aio:, file:. KVM: Optional, e.g. raw, qcow2. VMware: not supported
CACHE: Selects the cache mechanism for the disk. Values are default, none, writethrough, writeback,
directsync and unsafe. More info in the libvirt documentation. KVM only: Optional
READONLY: Sets how the image is exposed by the hypervisor, e.g. yes, no. Optional; this attribute should only
be used for special storage configurations (Xen, KVM, VMware)
IO: Sets the IO policy. Values are threads, native. KVM only: Optional
Volatile DISKS
DISK Sub-Attribute: Description (Xen / KVM / VMware)
TYPE: Type of the disk: swap, fs. Optional (Xen, KVM, VMware)
SIZE: Size in MB. Optional
FORMAT: Filesystem for fs images: ext2, ext3, etc. raw will not format the image. Mandatory for fs disks
DEV_PREFIX: Prefix for the emulated device this image will be mounted at. For instance, hd, sd. If omitted,
the default dev_prefix set in oned.conf will be used. Optional
TARGET: Device to map the disk to. Optional
DRIVER: Special disk mapping options. KVM: raw, qcow2. Xen: tap:aio:, file:. Optional
CACHE: Selects the cache mechanism for the disk. Values are default, none, writethrough, writeback,
directsync and unsafe. More info in the libvirt documentation. KVM only: Optional
READONLY: Sets how the image is exposed by the hypervisor, e.g. yes, no. Optional; this attribute should only
be used for special storage configurations (Xen, KVM, VMware)
IO: Sets the IO policy. Values are threads, native. KVM only: Optional
Disks Device Mapping
If the TARGET attribute is not set for a disk, OpenNebula will automatically assign it using the following precedence,
starting with dev_prefix + a:
First OS type Image.
Contextualization CDROM.
CDROM type Images.
The rest of DATABLOCK and OS Images, and Volatile disks.
Please visit the guide for managing images and the image template reference to learn more about the different image
types.
You can find a complete description of the contextualization features in the contextualization guide.
The default device prefix sd can be changed to hd or another prefix that suits your virtualization hypervisor requirements.
You can find more information in the daemon configuration guide.
An Example
This a sample section for disks. There are four disks using the image repository, and two volatile ones. Note that fs
and swap are generated on-the-y:
# First OS image, will be mapped to sda. Use image with ID 2
DISK = [ IMAGE_ID = 2 ]
# First DATABLOCK image, mapped to sdb.
# Use the Image named Data, owned by the user named oneadmin.
DISK = [ IMAGE = "Data",
IMAGE_UNAME = "oneadmin" ]
# Second DATABLOCK image, mapped to sdc
# Use the Image named Results owned by user with ID 7.
DISK = [ IMAGE = "Results",
IMAGE_UID = 7 ]
# Third DATABLOCK image, mapped to sdd
# Use the Image named Experiments owned by user instantiating the VM.
DISK = [ IMAGE = "Experiments" ]
# Volatile filesystem disk, sde
DISK = [ TYPE = fs,
SIZE = 4096,
FORMAT = ext3 ]
# swap, sdf
DISK = [ TYPE = swap,
SIZE = 1024 ]
Because this VM did not declare a CONTEXT or any disk using a CDROM Image, the first DATABLOCK found is
placed right after the OS Image, in sdb. For more information on image management and moving please check the
Storage guide.
4.1.7 Network Section
NIC Sub-Attribute: Description (Mandatory?)
NETWORK_ID: ID of the network to attach this device to, as defined by onevnet. Mandatory if no NETWORK
NETWORK: Name of the network to use (of those owned by the user). Mandatory if no NETWORK_ID
NETWORK_UID: To select the NETWORK of a given user by her ID. Optional
NETWORK_UNAME: To select the NETWORK of a given user by her NAME. Optional
IP: Request a specific IP from the NETWORK. Optional
MAC*: Request a specific HW address from the network interface. Optional
BRIDGE: Name of the bridge the network device is going to be attached to. Optional
TARGET: Name for the tun device created for the VM. Optional for KVM and VMware
SCRIPT: Name of a shell script to be executed after creating the tun device for the VM. Optional
MODEL: Hardware that will emulate this network interface. With Xen this is the type attribute of the vif. In
KVM you can choose virtio to select its specific virtualization IO framework. Optional
WHITE_PORTS_TCP: iptables_range: permits access to the VM only through the specified ports in the TCP
protocol. Supersedes BLACK_PORTS_TCP if defined. Optional
BLACK_PORTS_TCP: iptables_range: doesn't permit access to the VM through the specified ports in the TCP
protocol. Superseded by WHITE_PORTS_TCP if defined. Optional
WHITE_PORTS_UDP: iptables_range: permits access to the VM only through the specified ports in the UDP
protocol. Supersedes BLACK_PORTS_UDP if defined. Optional
BLACK_PORTS_UDP: iptables_range: doesn't permit access to the VM through the specified ports in the UDP
protocol. Superseded by WHITE_PORTS_UDP if defined. Optional
ICMP: drop: blocks ICMP connections to the VM. By default it is set to accept. Optional
Warning: The PORTS and ICMP attributes require the firewalling functionality to be configured. Please read the
firewall configuration guide.
Example, a VM with two NICs attached to two different networks:
NIC = [ NETWORK_ID = 1 ]
NIC = [ NETWORK = "Blue",
NETWORK_UID = 0 ]
For more information on setting up virtual networks please check the Managing Virtual Networks guide.
4.1.8 I/O Devices Section
The following I/O interfaces can be defined for a VM. The hypervisor columns state whether the attribute is
Optional (O), Mandatory (M), or not supported (-) for that hypervisor:
INPUT: Defines input devices. Available sub-attributes:
TYPE: values are mouse or tablet
BUS: values are usb, ps2 or xen
XEN: O (only usb tablet is supported); KVM: O; VMWARE: -
GRAPHICS: Whether the VM should export its graphical display and how. Available sub-attributes:
TYPE: values are vnc, sdl, spice
LISTEN: IP to listen on
PORT: port for the VNC server
PASSWD: password for the VNC server
KEYMAP: keyboard configuration locale to use in the VNC display
XEN: O; KVM: O; VMWARE: -
Example:
GRAPHICS = [
TYPE = "vnc",
LISTEN = "0.0.0.0",
PORT = "5"]
Warning: For the KVM hypervisor the port number is a real port, not a VNC display number. So for VNC display 0 you should
specify 5900, for display 1 specify 5901, and so on.
Warning: If the user does not specify the port variable, OpenNebula will automatically assign
$VNC_BASE_PORT + $VMID, generating different ports for VMs so they do not collide. The
VNC_BASE_PORT is specified inside the oned.conf file.
4.1.9 Context Section
Context information is passed to the Virtual Machine via an ISO mounted as a partition. This information can be
defined in the VM template in the optional section called Context, with the following attributes:
Attribute: Description (Mandatory?)
VARIABLE: Variables that store values related to this virtual machine or others. The name of the variable is
arbitrary (in the example, we use hostname). Optional
FILES*: Space-separated list of paths to include in the context device. Optional
FILES_DS: Space-separated list of File images to include in the context device. Optional
TARGET: Device to attach the context ISO to. Optional
TOKEN: YES to create a token.txt file for OneGate monitoring. Optional
NETWORK: YES to automatically fill the networking parameters for each NIC; used by the Contextualization
packages. Optional
* only for users in the oneadmin group
The values referred to by VARIABLE can be defined:
Hardcoded values:
HOSTNAME = "MAINHOST"
Using template variables
$<template_variable>: any single value variable of the VM template, like for example:
IP_GEN = "10.0.0.$VMID"
$<template_variable>[<attribute>]: Any single value contained in a multiple value variable in the VM
template, like for example:
IP_PRIVATE = $NIC[IP]
$<template_variable>[<attribute>, <attribute2>=<value2>]: Any single value contained in
the variable of the VM template, setting one attribute to discern between multiple variables called the same way, like
for example:
IP_PUBLIC = "$NIC[IP, NETWORK=\"Public\"]"
Using Virtual Network template variables
$NETWORK[<vnet_attribute>, <NETWORK_ID|NETWORK>=<vnet_id|vnet_name>]: Any single
value variable in the Virtual Network template, like for example:
dns = "$NETWORK[DNS, NETWORK_ID=3]"
Note: The network MUST be in use by one of the NICs defined in the template. The vnet_attribute can be
TEMPLATE to include the whole vnet template in XML (base64 encoded).
Using Image template variables
$IMAGE[<image_attribute>, <IMAGE_ID|IMAGE>=<img_id|img_name>]: Any single value vari-
able in the Image template, like for example:
root = "$IMAGE[ROOT_PASS, IMAGE_ID=0]"
Note: The image MUST be in use by one of the DISKs defined in the template. The image_attribute can be
TEMPLATE to include the whole image template in XML (base64 encoded).
Using User template variables
$USER[<user_attribute>]: Any single value variable in the user (owner of the VM) template, like for example:
ssh_key = "$USER[SSH_KEY]"
Note: The user_attribute can be TEMPLATE to include the whole user template in XML (base64 encoded).
Pre-defined variables: apart from those defined in the template, you can use:
$UID, the uid of the VM owner
$UNAME, the name of the VM owner
$GID, the id of the VM owner's group
$GNAME, the name of the VM owner's group
$TEMPLATE, the whole template in XML format and encoded in base64
FILES_DS: each file must be registered in a FILE_DS datastore and has to be of type CONTEXT. Use the following
to select files from Files Datastores:
$FILE[IMAGE=<image name>], to select the user's own files
$FILE[IMAGE=<image name>, <IMAGE_UNAME|IMAGE_UID>=<owner name|owner id>], to
select images owned by other users, by user name or uid.
$FILE[IMAGE_ID=<image id>], global file selection
Example:
CONTEXT = [
HOSTNAME = "MAINHOST",
IP_PRIVATE = "$NIC[IP]",
DNS = "$NETWORK[DNS, NAME=\"Public\"]",
IP_GEN = "10.0.0.$VMID",
FILES = "/service/init.sh /service/certificates /service/service.conf",
FILES_DS = "$FILE[IMAGE_ID=34] $FILE[IMAGE=\"kernel\"]",
TARGET = "sdc"
]
4.1.10 Placement Section
The following attributes set placement constraints and preferences for the VM:

SCHED_REQUIREMENTS: Boolean expression that rules out provisioning hosts from the list of machines suitable to run this VM.
SCHED_RANK: This field sets which attribute will be used to sort the suitable hosts for this VM. Basically, it defines which hosts are more suitable than others.
SCHED_DS_REQUIREMENTS: Boolean expression that rules out entries from the pool of datastores suitable to run this VM.
SCHED_DS_RANK: States which attribute will be used to sort the suitable datastores for this VM. Basically, it defines which datastores are more suitable than others.
Example:
SCHED_REQUIREMENTS = "CPUSPEED > 1000"
SCHED_RANK = "FREE_CPU"
SCHED_DS_REQUIREMENTS = "NAME=GoldenCephDS"
SCHED_DS_RANK = FREE_MB
Requirement Expression Syntax
The syntax of the requirement expressions is defined as:
stmt::= expr;
expr::= VARIABLE = NUMBER
| VARIABLE != NUMBER
| VARIABLE > NUMBER
| VARIABLE < NUMBER
| VARIABLE = STRING
| VARIABLE != STRING
| expr & expr
| expr | expr
| ! expr
| ( expr )
Each expression is evaluated to 1 (TRUE) or 0 (FALSE). Only those hosts for which the requirement expression
evaluates to TRUE will be considered to run the VM.
Logical operators work as expected (less <, greater >, & AND, | OR, ! NOT); = means equals with numbers
(floats and integers). When you use the = operator with strings, it performs shell wildcard pattern matching.
Any variable included in the Host template or its Cluster template can be used in the requirements. You may also use
an XPath expression to refer to the attribute.
There is a special variable, CURRENT_VMS, that can be used to deploy VMs in a Host where other VMs are (not)
running. It can be used only with the operators = and !=.
Warning: Check the Monitoring Subsystem guide to find out how to extend the information model and add any
information probe to the Hosts.
Warning: There are some predefined variables that can be used: NAME, MAX_CPU, MAX_MEM, FREE_MEM,
FREE_CPU, USED_MEM, USED_CPU, HYPERVISOR.
Examples:
# Only aquila hosts (aquila0, aquila1...), note the quotes
SCHED_REQUIREMENTS = "NAME = \"aquila
*
\""
# Only those resources with more than 60% of free CPU
SCHED_REQUIREMENTS = "FREE_CPU > 60"
# Deploy only in the Host where VM 5 is running
SCHED_REQUIREMENTS = "CURRENT_VMS = 5"
# Deploy in any Host, except the ones where VM 5 or VM 7 are running
SCHED_REQUIREMENTS = "(CURRENT_VMS != 5) & (CURRENT_VMS != 7)"
Warning: If using OpenNebula's default match-making scheduler in a hypervisor-heterogeneous environment,
it is a good idea to add an extra line like the following to the VM template to ensure its placement on a VMware
hypervisor enabled machine.
SCHED_REQUIREMENTS = "HYPERVISOR=\"vmware\""
Warning: Template variables can be used in the SCHED_REQUIREMENTS section.
$<template_variable>: any single value variable of the VM template.
$<template_variable>[<attribute>]: Any single value contained in a multiple value variable in
the VM template.
$<template_variable>[<attribute>, <attribute2>=<value2>]: Any single value contained in a multiple value variable
in the VM template, setting one attribute to discern between multiple variables called the same way.
For example, if you have a custom probe that generates a MACS attribute for the hosts, you can do a sort of MAC
pinning, so that only VMs with a given MAC run on a given host.
SCHED_REQUIREMENTS = "MAC=\"$NIC[MAC]\""
Rank Expression Syntax
The syntax of the rank expressions is defined as:
stmt::= expr;
expr::= VARIABLE
| NUMBER
| expr + expr
| expr - expr
| expr * expr
| expr / expr
| - expr
| ( expr )
Rank expressions are evaluated using each host's information. +, -, * and / are the arithmetic operators (- also works
as unary minus). The rank expression is calculated using floating point arithmetic, and then rounded to an integer value.
Warning: The rank expression is evaluated for each host; those hosts with a higher rank are used first to start
the VM. The rank policy must be implemented by the scheduler. Check the configuration guide to configure the
scheduler.
Warning: Similar to the requirements attribute, any numeric (integer or float) attribute defined for the host can be
used in the rank attribute.
Examples:
# First those resources with a higher Free CPU
SCHED_RANK = "FREE_CPU"
# Consider also the CPU temperature
SCHED_RANK = "FREE_CPU
*
100 - TEMPERATURE"
4.1.11 RAW Section
This optional section of the VM template is used whenever special attributes need to be passed to the underlying
hypervisor. Anything placed in the DATA attribute gets passed straight to the hypervisor, unmodified.
TYPE: Possible values are kvm, xen, vmware. (Xen, KVM, VMware)
DATA: Raw data to be passed directly to the hypervisor. (Xen, KVM, VMware)
DATA_VMX: Raw data to be added directly to the .vmx file. (VMware only)
Example:
Add a custom builder and bootloader to a Xen VM:
RAW = [
TYPE = "xen",
DATA = "builder=\"linux\"
bootloader=\"/usr/lib/xen/boot/domUloader.py\"
bootargs=\"--entry=xvda2:/boot/vmlinuz-xenpae,/boot/vmlinuz-xenpae\"" ]
Add a guest type and a specific SCSI controller to a VMware VM:
RAW = [
TYPE = "vmware",
DATA = "<devices><controller type='scsi' index='0' model='lsilogic'/></devices>",
DATA_VMX = "pciBridge0.present = \"TRUE\"\nguestOS=\"windows7srv-64\""
]
4.1.12 Restricted Attributes
The attributes that are restricted by default (usable only by users in the oneadmin group) are summarized in the following list:
CONTEXT/FILES
DISK/SOURCE
NIC/MAC
NIC/VLAN_ID
SCHED_RANK
These attributes can be configured in oned.conf.
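As an orientation, a sketch of how these entries typically look in oned.conf (attribute paths taken from the list above; check your installed file for the exact stock contents):
VM_RESTRICTED_ATTR = "CONTEXT/FILES"
VM_RESTRICTED_ATTR = "DISK/SOURCE"
VM_RESTRICTED_ATTR = "NIC/MAC"
VM_RESTRICTED_ATTR = "NIC/VLAN_ID"
VM_RESTRICTED_ATTR = "SCHED_RANK"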
4.2 Image Denition Template
This page describes how to define a new image template. An image template follows the same syntax as the VM
template.
If you want to learn more about the image repository, you can do so here.
Warning: There are some template attributes that can compromise the security of the system or the security of
other VMs, and can be used only by users in the oneadmin group. These attributes can be configured in oned.conf;
the default ones are labeled with * in the following tables. See the complete list in the Restricted Attributes section.
4.2.1 Template Attributes
The following attributes can be defined in the template.
NAME (Mandatory; any string): Name that the Image will get. Every image must have a unique name.
DESCRIPTION (Optional; any string): Human readable description of the image for other users.
TYPE (Optional; OS, CDROM, DATABLOCK, KERNEL, RAMDISK, CONTEXT): Type of the image, explained in detail in the following section. If omitted, the default value is the one defined in oned.conf (install default is OS).
PERSISTENT (Optional; YES, NO): Persistence of the image. If omitted, the default value is NO.
PERSISTENT_TYPE (Optional; IMMUTABLE): A special persistent image that will not be modified. This attribute should only be used for special storage configurations.
DEV_PREFIX (Optional; any string): Prefix for the emulated device this image will be mounted at. For instance, hd, sd, or vd for KVM virtio. If omitted, the default value is the one defined in oned.conf (installation default is hd).
TARGET (Optional; any string): Target for the emulated device this image will be mounted at. For instance, hdb, sdc. If omitted, it will be assigned automatically.
DRIVER (Optional; KVM: raw, qcow2; Xen: tap:aio:, file:): Specific image mapping driver. VMware is unsupported.
PATH (Mandatory if no SOURCE; any string): Path to the original file that will be copied to the image repository. If not specified for a DATABLOCK type image, an empty image will be created. Note that gzipped files are supported and OpenNebula will automatically decompress them. Bzip2 compressed files are also supported, but their use is strongly discouraged since OpenNebula will not calculate their size properly.
SOURCE * (Mandatory if no PATH; any string): Source to be used in the DISK attribute. Useful for non-file-based images.
DISK_TYPE (Optional; BLOCK, CDROM or FILE (default)): The type of the supporting media for the image: a block device (BLOCK), an ISO-9660 file or read-only block device (CDROM), or a plain file (FILE).
READONLY (Optional; YES, NO): This attribute should only be used for special storage configurations. It sets how the image is going to be exposed to the hypervisor. Images of type CDROM and those with PERSISTENT_TYPE set to IMMUTABLE will have READONLY set to YES. Otherwise, by default it is set to NO.
CLONE_FSTYPE (Optional; thin, zeroedthick, eagerzeroedthick, thick): Only for VMware images in VMFS datastores. Sets the format of the target image when cloning within the datastore.
MD5 (Optional; an md5 hash): MD5 hash to check for image integrity.
SHA1 (Optional; an sha1 hash): SHA1 hash to check for image integrity.
Warning: Be careful when PATH points to a compressed bz2 image: although it will work, OpenNebula will not
calculate its size correctly.
Mandatory attributes for DATABLOCK images with no PATH set:

SIZE (an integer): Size in MB.
FSTYPE (string): Type of file system to be built.
Plain: when the disk image is used directly by the hypervisor we can format the image, so it is ready to be used by the guest OS. Values: ext2, ext3, ext4, ntfs, reiserfs, jfs, swap. Any other fs supported by mkfs will work if no special option is needed.
Formatted: the disk image is stored in a hypervisor-specific format, such as VMDK or qcow2. Then we cannot really make a filesystem on the image, just create the device and let the guest OS format the disk. Use raw to not format the new image. Values: raw, qcow2, vmdk_*.
4.2.2 Template Examples
Example of an OS image:
NAME = "Ubuntu Web Development"
PATH = /home/one_user/images/ubuntu_desktop.img
DESCRIPTION = "Ubuntu 10.04 desktop for Web Development students.
Contains the pdf lessons and exercises as well as all the necessary
programming tools and testing frameworks."
Example of a CDROM image:
NAME = "MATLAB install CD"
TYPE = CDROM
PATH = /home/one_user/images/matlab.iso
DESCRIPTION = "Contains the MATLAB installation files. Mount it to install MATLAB on new OS images."
Example of a DATABLOCK image:
NAME = "Experiment results"
TYPE = DATABLOCK
# No PATH set, this image will start as a new empty disk
SIZE = 3.08
FSTYPE = ext3
DESCRIPTION = "Storage for my Thesis experiments."
4.2.3 Restricted Attributes
The attributes that are restricted by default (usable only by users in the oneadmin group) are summarized in the following list:
SOURCE
4.3 Virtual Network Denition File
This page describes how to define a new Virtual Network template. A Virtual Network template follows the same
syntax as the VM template.
If you want to learn more about Virtual Network management, you can do so here.
4.3.1 Common Attributes
There are two types of Virtual Networks: ranged and fixed. Their only difference is how the leases are defined in the
template.
These are the common attributes for both types of VNets:
NAME (String; Mandatory): Name of the Virtual Network.
BRIDGE (String; Mandatory if PHYDEV is not set): Name of the physical bridge in the physical host where the VM should connect its network interface.
TYPE (RANGED/FIXED; Mandatory): Type of this VNet.
VLAN (YES/NO; Optional): Whether or not to isolate this virtual network using the Virtual Network Manager drivers. If omitted, the default value is NO.
VLAN_ID (Integer; Optional): Optional VLAN id for the 802.1Q and Open vSwitch networking drivers.
PHYDEV (String; Mandatory for the 802.1Q driver): Name of the physical network device that will be attached to the bridge.
SITE_PREFIX (String; Optional): IPv6 unicast local addresses (ULAs). Must be a valid IPv6 prefix.
GLOBAL_PREFIX (String; Optional): IPv6 global unicast addresses. Must be a valid IPv6 prefix.
Please note that any arbitrary value can be set in the Virtual Network template, and then used in the contextualization
section of the VM. For instance, NETWORK_GATEWAY="x.x.x.x" might be used to define the Virtual Network,
and then used in the context section of the VM to configure its network to connect through the GATEWAY.
If you need OpenNebula to generate IPv6 addresses, which can later be used in context or for Virtual Router appliances,
you can use the GLOBAL_PREFIX and SITE_PREFIX attributes.
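For instance, a minimal sketch (the values below are the IPv6 documentation prefix and a generic ULA prefix, shown for illustration only):
GLOBAL_PREFIX = "2001:db8::"
SITE_PREFIX = "fd00::"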
Attributes Used for Contextualization
NETWORK_ADDRESS: Base network address.
NETWORK_MASK: Network mask.
GATEWAY: Router for this network; do not set when the network is not routable.
DNS: Specific DNS for this network.
GATEWAY6: IPv6 router for this network.
CONTEXT_FORCE_IPV4: When a vnet is IPv6, the IPv4 is not configured unless this attribute is set.
4.3.2 Leases
A lease is a definition of an IP-MAC pair. From an IP address, OpenNebula generates an associated MAC using the
following rule: MAC = MAC_PREFIX:IP. All Virtual Networks share a default value for the MAC_PREFIX, set
in the oned.conf file.
So, for example, from IP 10.0.0.1 and MAC_PREFIX 02:00, we get 02:00:0a:00:00:01.
The available leases for new VNets are defined differently for each type.
Fixed Virtual Networks
Fixed VNets need a series of LEASES vector attributes, defined with the following sub-attributes:
IP (IP address; Mandatory): IP for this lease.
MAC (MAC address; Optional): MAC associated to this IP.
Warning: The optional MAC attribute will overwrite the default MAC_PREFIX:IP rule. Be aware that this will
break the default contextualization mechanism.
Ranged Virtual Networks
Instead of a list of LEASES, ranged Virtual Networks contain a range of IPs that can be defined in a flexible way using
these attributes:
NETWORK_ADDRESS (IP address, optionally in CIDR notation): Base network address to generate IP addresses.
NETWORK_SIZE (A, B, C, or a number): Number of VMs that can be connected using this network. It can be defined either using a number or a network class (A, B or C). The default value for the network size can be found in oned.conf.
NETWORK_MASK (Mask in dot-decimal notation): Network mask for this network.
IP_START (IP address): First IP of the range.
IP_END (IP address): Last IP of the range.
MAC_START (MAC address): First MAC of the range.
The following examples define the same network range, from 10.10.10.1 to 10.10.10.254:
NETWORK_ADDRESS = 10.10.10.0
NETWORK_SIZE = C
NETWORK_ADDRESS = 10.10.10.0
NETWORK_SIZE = 254
NETWORK_ADDRESS = 10.10.10.0/24
NETWORK_ADDRESS = 10.10.10.0
NETWORK_MASK = 255.255.255.0
You can change the first and/or last IP of the range:
NETWORK_ADDRESS = 10.10.10.0/24
IP_START = 10.10.10.17
Or define the range manually:
IP_START = 10.10.10.17
IP_END = 10.10.10.41
Finally, you can define the network by just specifying the MAC address set (especially useful for IPv6). The following is
equivalent to the previous examples, but with MACs:
MAC_START = 02:00:0A:0A:0A:11
NETWORK_SIZE = 254
Warning: With any of the above procedures, even if you are defining the set using IPv4 networks, OpenNebula
will generate IPv6 addresses if the GLOBAL_PREFIX and/or SITE_PREFIX is added to the network
template. Note that the link-local IPv6 address will always be generated.
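Putting it together, a minimal sketch of an IPv6-enabled ranged VNet (bridge name and prefix are illustrative):
NAME = "IPv6 LAN"
TYPE = RANGED
BRIDGE = vbr0
MAC_START = 02:00:0A:0A:0A:11
NETWORK_SIZE = 254
GLOBAL_PREFIX = "2001:db8::"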
4.3.3 Examples
Sample fixed VNet:
NAME = "Blue LAN"
TYPE = FIXED
# We have to bind this network to virbr1 for Internet Access
BRIDGE = vbr1
LEASES = [IP=130.10.0.1]
LEASES = [IP=130.10.0.2, MAC=50:20:20:20:20:21]
LEASES = [IP=130.10.0.3]
LEASES = [IP=130.10.0.4]
# Custom Attributes to be used in Context
GATEWAY = 130.10.0.1
DNS = 130.10.0.1
LOAD_BALANCER = 130.10.0.4
Sample ranged VNet:
NAME = "Red LAN"
TYPE = RANGED
# Now we'll use the host private network (physical)
BRIDGE = vbr0
NETWORK_ADDRESS = 192.168.0.0/24
IP_START = 192.168.0.3
# Custom Attributes to be used in Context
GATEWAY = 192.168.0.1
DNS = 192.168.0.1
LOAD_BALANCER = 192.168.0.2
4.4 Command Line Interface
OpenNebula provides a set of commands to interact with the system:
4.4.1 CLI
oneacct: gets accounting data from OpenNebula
oneacl: manages OpenNebula ACLs
onecluster: manages OpenNebula clusters
onedatastore: manages OpenNebula datastores
onedb: OpenNebula database migration tool
onegroup: manages OpenNebula groups
onehost: manages OpenNebula hosts
oneimage: manages OpenNebula images
onetemplate: manages OpenNebula templates
oneuser: manages OpenNebula users
onevdc: manages OpenNebula Virtual DataCenters
onevm: manages OpenNebula virtual machines
onevnet: manages OpenNebula networks
onezone: manages OpenNebula zones
The output of these commands can be customized by modifying the configuration files that can be found in
/etc/one/cli/. They can also be customized on a per-user basis; in this case the configuration files should
be placed in $HOME/.one/cli.
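For example, to customize the columns shown by onevm list for a single user, you could copy the system file and edit the per-user copy (the file name is assumed to match the command; check /etc/one/cli/ for the exact names):
$ mkdir -p $HOME/.one/cli
$ cp /etc/one/cli/onevm.yaml $HOME/.one/cli/
$ vi $HOME/.one/cli/onevm.yaml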
4.4.2 OCCI Commands
occi-compute: manages compute objects
occi-network: manages network objects
occi-storage: manages storage objects
occi-instance-type: retrieves instance types
4.4.3 ECONE Commands
econe-upload: Uploads an image to OpenNebula
econe-describe-images: Lists all registered images belonging to one particular user.
econe-run-instances: Runs an instance of a particular image (that needs to be referenced).
econe-describe-instances: Outputs a list of launched images belonging to one particular user.
econe-terminate-instances: Shuts down a set of virtual machines (or cancels them, depending on their state).
econe-reboot-instances: Reboots a set of virtual machines.
econe-start-instances: Starts a set of virtual machines.
econe-stop-instances: Stops a set of virtual machines.
econe-create-volume: Creates a new DATABLOCK in OpenNebula
econe-delete-volume: Deletes an existing DATABLOCK.
econe-describe-volumes: Describes all available DATABLOCKs for this user
econe-attach-volume: Attaches a DATABLOCK to an instance
econe-detach-volume: Detaches a DATABLOCK from an instance
econe-allocate-address: Allocates a new elastic IP address for the user
econe-release-address: Releases a public IP of the user
econe-describe-addresses: Lists elastic IP addresses
econe-associate-address: Associates a public IP of the user with a given instance
econe-disassociate-address: Disassociates a public IP of the user currently associated with an instance
econe-create-keypair: Creates the named keypair
econe-delete-keypair: Deletes the named keypair, removes the associated keys
econe-describe-keypairs: Lists and describes the key pairs available to the user
econe-register: Registers an image
4.4.4 oneFlow Commands
oneflow: oneFlow Service management
oneflow-template: oneFlow Service Template management
OpenNebula 4.6 Advanced Administration Guide
Release 4.6
OpenNebula Project
April 28, 2014
CONTENTS
1 Application Flow and Auto-scaling
  1.1 OneFlow
  1.2 OneFlow Server Configuration
  1.3 Managing Multi-tier Applications
  1.4 Application Auto-scaling
2 Data Center Federation
  2.1 Data Center Federation
  2.2 OpenNebula Federation Configuration
  2.3 OpenNebula Federation Management
3 Scalability
  3.1 Configuring Sunstone for Large Deployments
  3.2 Configuring OpenNebula for Large Deployments
4 High Availability
  4.1 Virtual Machines High Availability
  4.2 OpenNebula High Availability
5 Cloud Bursting
  5.1 Cloud Bursting
  5.2 Amazon EC2 Driver
6 Application Insight
  6.1 OneGate
  6.2 OneGate Server Configuration
  6.3 Application Monitoring
7 Public Cloud
  7.1 Building a Public Cloud
  7.2 EC2 Server Configuration
  7.3 OCCI Server Configuration
  7.4 OpenNebula OCCI User Guide
  7.5 OpenNebula EC2 User Guide
  7.6 EC2 Ecosystem
CHAPTER
ONE
APPLICATION FLOW AND AUTO-SCALING
1.1 OneFlow
OneFlow allows users and administrators to define, execute and manage multi-tiered applications, or services composed
of interconnected Virtual Machines with deployment dependencies between them. Each group of Virtual Machines
is deployed and managed as a single entity, and is completely integrated with the advanced OpenNebula user
and group management.
1.1.1 Benefits
Define multi-tiered applications (services) as collections of applications
Manage multi-tiered applications as a single entity
Automatic execution of services with dependencies
Provide configurable services from a catalog and self-service portal
Enable tight, efficient administrative control
Fine-grained access control for the secure sharing of services with other users
Auto-scaling policies based on performance metrics and schedule
1.1.2 Next Steps
OneFlow Server Configuration
Multi-tier Applications
Application Auto-scaling
1.2 OneFlow Server Configuration
The OneFlow commands do not interact directly with the OpenNebula daemon; a separate server takes the requests
and manages the service (multi-tiered application) life-cycle. This guide shows how to start OneFlow, and the different
options that can be configured.
1.2.1 Installation
Starting with OpenNebula 4.2, OneFlow is included in the default installation. Check the Installation guide for details
of which package you have to install depending on your distribution.
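For instance, on a CentOS front-end the server is typically shipped in the opennebula-flow package (package name assumed; confirm it in the Installation guide for your distribution):
# yum install opennebula-flow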
1.2.2 Configuration
The OneFlow configuration file can be found at /etc/one/oneflow-server.conf. It uses YAML syntax to
define the following options:

Server Configuration:
:one_xmlrpc: OpenNebula daemon host and port
:lcm_interval: Time in seconds between Life Cycle Manager steps
:host: Host where OneFlow will listen
:port: Port where OneFlow will listen

Defaults:
:default_cooldown: Default cooldown period after a scale operation, in seconds
:shutdown_action: Default shutdown action. Values: shutdown, shutdown-hard
:action_number / :action_period: Default number of virtual machines (action_number) that will receive the given call in each interval defined by action_period, when an action is performed on a role
:vm_name_template: Default name for the Virtual Machines created by oneflow. You can use any of the following placeholders: $SERVICE_ID, $SERVICE_NAME, $ROLE_NAME, $VM_NUMBER

Auth:
:core_auth: Authentication driver to communicate with the OpenNebula core. cipher: for symmetric cipher encryption of tokens; x509: for x509 certificate encryption of tokens. For more information, visit the OpenNebula Cloud Auth documentation

Log:
:debug_level: Log debug level. 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
This is the default file:
################################################################################
# Server Configuration
################################################################################
# OpenNebula daemon contact information
#
:one_xmlrpc: http://localhost:2633/RPC2
# Time in seconds between Life Cycle Manager steps
#
:lcm_interval: 30
# Host and port where OneFlow server will run
:host: 127.0.0.1
:port: 2474
################################################################################
# Defaults
################################################################################
# Default cooldown period after a scale operation, in seconds
:default_cooldown: 300
# Default shutdown action. Values: shutdown, shutdown-hard
:shutdown_action: shutdown
# Default oneflow action options when only one is supplied
:action_number: 1
:action_period: 60
# Default name for the Virtual Machines created by oneflow. You can use any
# of the following placeholders:
# $SERVICE_ID
# $SERVICE_NAME
# $ROLE_NAME
# $VM_NUMBER
:vm_name_template: $ROLE_NAME_$VM_NUMBER_(service_$SERVICE_ID)
#############################################################
# Auth
#############################################################
# Authentication driver to communicate with OpenNebula core
# - cipher, for symmetric cipher encryption of tokens
# - x509, for x509 certificate encryption of tokens
:core_auth: cipher
################################################################################
# Log
################################################################################
# Log debug level
# 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
#
:debug_level: 2
1.2.3 Start OneFlow
To start and stop the server, use the oneflow-server start/stop command:
$ oneflow-server start
oneflow-server started
Warning: By default, the server will only listen to requests coming from localhost. Change the :host attribute
in /etc/one/oneflow-server.conf to your server's public IP, or to 0.0.0.0 so oneflow will listen on
any interface.
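A minimal sketch of the relevant line, assuming you want OneFlow reachable from other hosts:
# /etc/one/oneflow-server.conf
:host: 0.0.0.0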
Inside /var/log/one/ you will find new log files for the server, and individual ones for each service in
/var/log/one/oneflow/<id>.log
/var/log/one/oneflow.error
/var/log/one/oneflow.log
1.2.4 Enable the Sunstone Tabs
The OneFlow tabs are hidden by default. To enable them, edit /etc/one/sunstone-views/admin.yaml and
/etc/one/sunstone-views/user.yaml and set the oneflow tabs inside enabled_tabs to true:
enabled_tabs:
dashboard-tab: true
...
oneflow-dashboard: true
oneflow-services: true
oneflow-templates: true
Be sure to restart Sunstone for the changes to take effect.
For more information on how to customize the views based on the user/group interacting with Sunstone, check the
Sunstone views guide.
1.2.5 Advanced Setup
ACL Rule
By default this rule is defined in OpenNebula to enable the creation of new services by any user. If you want to limit
this, you will have to delete this rule and generate new ones.

* DOCUMENT/* CREATE

If you only want a specific group to be able to use OneFlow, execute:

$ oneacl create "@1 DOCUMENT/* CREATE"
Read more about the ACL Rules system here.
1.3 Managing Multi-tier Applications
OneFlow allows users and administrators to define, execute and manage multi-tiered applications, or services composed
of interconnected Virtual Machines with deployment dependencies between them. Each group of Virtual Machines
is deployed and managed as a single entity, and is completely integrated with the advanced OpenNebula user
and group management.
1.3.1 What Is a Service
The following diagram represents a multi-tier application. Each node represents a Role, and its cardinality (the number
of VMs that will be deployed). The arrows indicate the deployment dependencies: each Role's VMs are deployed only
when all its parents' VMs are running.
This Service can be represented with the following JSON template:
{
"name": "my_service",
"deployment": "straight",
"roles": [
{
"name": "frontend",
"vm_template": 0
},
{
"name": "db_master",
"parents": [
"frontend"
],
"vm_template": 1
},
{
"name": "db_slave",
"parents": [
"frontend"
],
"cardinality": 3,
"vm_template": 2
},
{
"name": "worker",
"parents": [
"db_master",
"db_slave"
],
"cardinality": 10,
"vm_template": 3
}
]
}
1.3.2 Managing Service Templates
OneFlow allows OpenNebula administrators and users to register Service Templates in OpenNebula, to be instantiated
later as Services. These Templates can be instantiated several times, and also shared with other users.
Users can manage the Service Templates using the command oneflow-template, or the graphical interface.
For each user, the actual list of Service Templates available is determined by the ownership and permissions of the
Templates.
Create and List Existing Service Templates
The command oneflow-template create registers a JSON template file. For example, if the previous example
template is saved in /tmp/my_service.json, you can execute:
$ oneflow-template create /tmp/my_service.json
ID: 0
You can also create a Service Template from Sunstone:
To list the available Service Templates, use oneflow-template list/show/top:
$ oneflow-template list
ID USER GROUP NAME
0 oneadmin oneadmin my_service
$ oneflow-template show 0
SERVICE TEMPLATE 0 INFORMATION
ID : 0
NAME : my_service
USER : oneadmin
GROUP : oneadmin
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
TEMPLATE CONTENTS
{
"name": "my_service",
"roles": [
{
....
Templates can be deleted with oneflow-template delete.
1.3.3 Managing Services
A Service Template can be instantiated as a Service. Each newly created Service will be deployed by OneFlow
following its deployment strategy.
Each Service Role creates Virtual Machines in OpenNebula from VM Templates, which must be created beforehand.
Create and List Existing Services
New Services are created from Service Templates, using the oneflow-template instantiate command:
$ oneflow-template instantiate 0
ID: 1
To list the available Services, use oneflow list/top:
$ oneflow list
ID USER GROUP NAME STATE
1 oneadmin oneadmin my_service PENDING
The Service will eventually change to DEPLOYING. You can see information for each Role and individual Virtual
Machine using oneflow show:
$ oneflow show 1
SERVICE 1 INFORMATION
ID : 1
NAME : my_service
USER : oneadmin
GROUP : oneadmin
STRATEGY : straight
SERVICE STATE : DEPLOYING
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
ROLE frontend
ROLE STATE : RUNNING
CARNIDALITY : 1
VM TEMPLATE : 0
NODES INFORMATION
VM_ID NAME STAT UCPU UMEM HOST TIME
0 frontend_0_(service_1) runn 67 120.3M localhost 0d 00h01
ROLE db_master
ROLE STATE : DEPLOYING
PARENTS : frontend
CARNIDALITY : 1
VM TEMPLATE : 1
NODES INFORMATION
VM_ID NAME STAT UCPU UMEM HOST TIME
1 init 0K 0d 00h00
ROLE db_slave
ROLE STATE : DEPLOYING
PARENTS : frontend
CARNIDALITY : 3
VM TEMPLATE : 2
NODES INFORMATION
VM_ID NAME STAT UCPU UMEM HOST TIME
2 init 0K 0d 00h00
3 init 0K 0d 00h00
4 init 0K 0d 00h00
ROLE worker
ROLE STATE : PENDING
PARENTS : db_master, db_slave
CARNIDALITY : 10
VM TEMPLATE : 3
NODES INFORMATION
VM_ID NAME STAT UCPU UMEM HOST TIME
LOG MESSAGES
09/19/12 14:44 [I] New state: DEPLOYING
Life-cycle
The deployment attribute defines the deployment strategy that the Life Cycle Manager (part of the oneflow-server)
will use. These two values can be used:
none: All roles are deployed at the same time.
straight: Each Role is deployed when all its parent Roles are RUNNING.
Regardless of the strategy used, the Service will be RUNNING when all of the Roles are also RUNNING. Likewise, a
Role will enter this state only when all the VMs are running.
This table describes the Service states:
Service State Meaning
PENDING The Service starts in this state, and will stay in it until the LCM decides to deploy it
DEPLOYING Some Roles are being deployed
RUNNING All Roles are deployed successfully
WARNING A VM was found in a failure state
SCALING A Role is scaling up or down
COOLDOWN A Role is in the cooldown period after a scaling operation
UNDEPLOYING Some Roles are being undeployed
DONE The Service will stay in this state after a successful undeployment. It can be deleted
FAILED_DEPLOYING An error occurred while deploying the Service
FAILED_UNDEPLOYING An error occurred while undeploying the Service
FAILED_SCALING An error occurred while scaling the Service
Each Role has an individual state, described in the following table:
Role State Meaning
PENDING The Role is waiting to be deployed
DEPLOYING The VMs are being created, and will be monitored until all of them are running
RUNNING All the VMs are running
WARNING A VM was found in a failure state
SCALING The Role is waiting for VMs to be deployed or to be shutdown
COOLDOWN The Role is in the cooldown period after a scaling operation
UNDEPLOYING The VMs are being shutdown. The role will stay in this state until all VMs are done
DONE All the VMs are done
FAILED_DEPLOYING An error occurred while deploying the VMs
FAILED_UNDEPLOYING An error occurred while undeploying the VMs
FAILED_SCALING An error occurred while scaling the Role
Life-Cycle Operations
Services are deployed automatically by the Life Cycle Manager. To undeploy a running Service, users have the
commands oneflow shutdown and oneflow delete.
The command oneflow shutdown will perform a graceful shutdown of all the running VMs, and will delete any
VM in a failed state (see onevm shutdown and delete). If the straight deployment strategy is used, the Roles will
be shut down in the reverse order of the deployment.
After a successful shutdown, the Service will remain in the DONE state. If any of the VM shutdown operations cannot
be performed, the Service state will show FAILED, to indicate that manual intervention is required to complete the
cleanup. In any case, the Service can be completely removed using the command oneflow delete.
If a Service and its VMs must be immediately undeployed, the command oneflow delete can be used from any
Service state. This will execute a delete operation for each VM and delete the Service. Please be aware that this is not
recommended, because VMs using persistent Images can leave them in an inconsistent state.
When a Service fails during a deployment, undeployment or scaling operation, the command oneflow recover
can be used to retry the previous action once the problem has been solved.
Elasticity
A role's cardinality can be adjusted manually, based on metrics, or based on a schedule. To start the scaling
immediately, use the command oneflow scale:
$ oneflow scale <serviceid> <role_name> <cardinality>
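For example, to set the worker Role of the sample Service 1 to 15 VMs (IDs taken from the listings above, shown for illustration):
$ oneflow scale 1 worker 15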
To define automatic elasticity policies, proceed to the elasticity documentation guide.
1.3.4 Managing Permissions
Both Services and Template resources are completely integrated with the OpenNebula user and group management.
This means that each resource has an owner and group, and permissions. The VMs created by a Service are owned by
the Service owner, so the owner can list and manage them.
For example, to change the owner and group of the Service 1, we can use oneflow chown/chgrp:
$ oneflow list
ID USER GROUP NAME STATE
1 oneadmin oneadmin my_service RUNNING
$ onevm list
ID USER GROUP NAME STAT UCPU UMEM HOST TIME
0 oneadmin oneadmin frontend_0_(ser runn 17 43.5M localhost 0d 01h06
1 oneadmin oneadmin db_master_0_(se runn 59 106.2M localhost 0d 01h06
...
$ oneflow chown my_service johndoe apptools
$ oneflow list
ID USER GROUP NAME STATE
1 johndoe apptools my_service RUNNING
$ onevm list
ID USER GROUP NAME STAT UCPU UMEM HOST TIME
0 johndoe apptools frontend_0_(ser runn 62 83.2M localhost 0d 01h16
1 johndoe apptools db_master_0_(se runn 74 115.2M localhost 0d 01h16
...
Note that the Service's VM ownership is also changed.
All Services and Templates have associated permissions for the owner, the users in its group, and others. For each
one of these groups, there are three rights that can be set: USE, MANAGE and ADMIN. These permissions are very
similar to those of the UNIX file system, and can be modified with the command chmod.
For example, to allow all users in the apptools group to USE (list, show) and MANAGE (shutdown, delete) the
Service 1:
$ oneflow show 1
SERVICE 1 INFORMATION
..
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
...
$ oneflow chmod my_service 660
$ oneflow show 1
SERVICE 1 INFORMATION
..
PERMISSIONS
OWNER : um-
GROUP : um-
OTHER : ---
...
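As a rule of thumb (octal notation, analogous to Unix chmod): each digit covers owner, group and others, adding USE=4, MANAGE=2 and ADMIN=1. So 660 grants USE and MANAGE to the owner and group, while 604 (used below) grants USE and MANAGE to the owner and USE to others.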
Another common scenario is having Service Templates created by oneadmin that can be instantiated by any user. To
implement this scenario, execute:
$ oneflow-template show 0
SERVICE TEMPLATE 0 INFORMATION
ID : 0
NAME : my_service
USER : oneadmin
GROUP : oneadmin
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
...
$ oneflow-template chmod 0 604
$ oneflow-template show 0
SERVICE TEMPLATE 0 INFORMATION
ID : 0
NAME : my_service
USER : oneadmin
GROUP : oneadmin
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : u--
...
Please refer to the OpenNebula documentation for more information about users & groups, and resource permissions.
1.3.5 Scheduling Actions on the Virtual Machines of a Role
You can use the action command to perform a VM action on all the Virtual Machines belonging to a role. For
example, if you want to suspend the Virtual Machines of the worker Role:
$ oneflow action <service_id> <role_name> <vm_action>
These are the commands that can be performed:
shutdown
shutdown-hard
undeploy
undeploy-hard
hold
release
stop
suspend
resume
boot
delete
delete-recreate
reboot
reboot-hard
poweroff
poweroff-hard
snapshot-create
Instead of performing the action immediately on all the VMs, you can perform it on small groups of VMs with these
options:
-p, --period x: Seconds between each group of actions
-n, --number x: Number of VMs to apply the action to in each period
Let's say you need to reboot all the VMs of a Role, but you also need to avoid downtime. This command will reboot
2 VMs every 5 minutes:
$ oneflow action my-service my-role reboot --period 300 --number 2
The oneflow-server.conf file contains default values for period and number that are used if you omit one
of them.
1.3.6 Recovering from Failures
Some common failures can be resolved without manual intervention by calling the oneflow recover command.
This command has different effects depending on the Service state:
State -> New State: Recover action
FAILED_DEPLOYING -> DEPLOYING: VMs in DONE or FAILED are deleted; VMs in UNKNOWN are booted.
FAILED_UNDEPLOYING -> UNDEPLOYING: The undeployment is resumed.
FAILED_SCALING -> SCALING: VMs in DONE or FAILED are deleted; VMs in UNKNOWN are booted. For a scale-down, the shutdown actions are retried.
COOLDOWN -> RUNNING: The Service is simply set to running before the cooldown period is over.
WARNING -> WARNING: VMs in DONE or FAILED are deleted; VMs in UNKNOWN are booted. New VMs are instantiated to maintain the current cardinality.
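For example, if the sample Service 1 from the earlier listings failed while deploying, once the underlying problem is solved you would run:
$ oneflow recover 1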
1.3.7 Service Template Reference
For more information on the resource representation, please check the API guide.
Read the elasticity policies documentation for more information.
1.4 Application Auto-scaling
A role's cardinality can be adjusted manually, based on metrics, or based on a schedule.
1.4.1 Overview
When a scaling action starts, the Role and Service enter the SCALING state. In this state, the Role will instantiate
or shut down a number of VMs to reach its new cardinality.
A role with elasticity policies must define a minimum and maximum number of VMs:
"roles": [
{
"name": "frontend",
"cardinality": 1,
"vm_template": 0,
"min_vms" : 1,
"max_vms" : 5,
...
After the scaling, the Role and Service are in the COOLDOWN state for the configured duration. During a scale operation
and the cooldown period, other scaling actions for the same or for other Roles are delayed until the Service is RUNNING
again.
1.4.2 Set the Cardinality of a Role Manually
The command oneflow scale starts the scaling immediately.
$ oneflow scale <serviceid> <role_name> <cardinality>
You can force a cardinality outside the defined range with the --force option.
1.4.3 Maintain the Cardinality of a Role
The min_vms attribute is a hard limit, enforced by the elasticity module. If the cardinality drops below this minimum,
a scale-up operation will be triggered.
1.4.4 Set the Cardinality of a Role Automatically
Auto-scaling Types
Both elasticity_policies and scheduled_policies elements define an automatic adjustment of the Role cardinality. Three
different adjustment types are supported:
CHANGE: Add/subtract the given number of VMs
CARDINALITY: Set the cardinality to the given number
PERCENTAGE_CHANGE: Add/subtract the given percentage to the current cardinality
type (string; mandatory): Type of adjustment. Values: CHANGE, CARDINALITY, PERCENTAGE_CHANGE
adjust (integer; mandatory): Positive or negative adjustment. Its meaning depends on type
min_adjust_step (integer; optional): Optional parameter for the PERCENTAGE_CHANGE adjustment type. If present, the policy will change the cardinality by at least the number of VMs set in this attribute
Auto-scaling Based on Metrics
Each role can have an array of elasticity_policies. These policies define an expression that will trigger a
cardinality adjustment.
These expressions can use performance data from:
The VM guest. Using the OneGate server, applications can send custom monitoring metrics to OpenNebula.
The VM, at hypervisor level. The Virtualization Drivers return information about the VM, such as CPU, MEMORY,
NET_TX and NET_RX.
"elasticity_policies" : [
{
"expression" : "ATT > 50",
"type" : "CHANGE",
"adjust" : 2,
"period_number" : 3,
"period" : 10
},
...
]
The expression can use VM attribute names, float numbers, and logical operators (!, &, |). When an attribute is found,
it will take the average value for all the running VMs that contain that attribute in the Role. If none of the VMs
contain the attribute, the expression will evaluate to false.
The attribute will be looked for in /VM/USER_TEMPLATE, /VM, and /VM/TEMPLATE, in that order. Logical
operators have the usual precedence.
expression (string; mandatory): Expression to trigger the elasticity
period_number (integer; optional): Number of periods that the expression must be true before the elasticity is triggered
period (integer; optional): Duration, in seconds, of each period in period_number
Auto-scaling Based on a Schedule
Combined with the elasticity policies, each role can have an array of scheduled_policies. These policies define
a time, or a time recurrence, and a cardinality adjustment.
"scheduled_policies" : [
{
// Set cardinality to 2 each 10 minutes
"recurrence" : "
*
/10
* * * *
",
"type" : "CARDINALITY",
"adjust" : 2
},
{
// +10 percent at the given date and time
"start_time" : "2nd oct 2013 15:45",
"type" : "PERCENTAGE_CHANGE",
"adjust" : 10
}
]
recurrence (string; optional): Time for recurring adjustments. Time is specified with the Unix cron syntax
start_time (string; optional): Exact time for the adjustment
1.4.5 Visualize in the CLI
The oneflow show / top commands show the defined policies. When a service is scaling, the VMs being
created or shut down can be identified by an arrow next to their ID:
SERVICE 7 INFORMATION
...
ROLE frontend
ROLE STATE : SCALING
CARNIDALITY : 4
VM TEMPLATE : 0
NODES INFORMATION
VM_ID NAME STAT UCPU UMEM HOST TIME
4 frontend_0_(service_7) runn 0 74.2M host03 0d 00h04
5 frontend_1_(service_7) runn 0 112.6M host02 0d 00h04
| 6 init 0K 0d 00h00
| 7 init 0K 0d 00h00
ELASTICITY RULES
MIN VMS : 1
MAX VMS : 5
ADJUST EXPRESSION EVALUATION PERIOD
+ 2 (ATT > 50) && !(OTHER_ATT = 5.5 || ABC <= 30) 0 / 3 10s
- 10 % (2) ATT < 20 0 / 1 0s
ADJUST TIME
= 6 0 9 * * mon,tue,wed,thu,fri
= 10 0 13 * * mon,tue,wed,thu,fri
= 2 30 22 * * mon,tue,wed,thu,fri
LOG MESSAGES
06/10/13 18:22 [I] New state: DEPLOYING
06/10/13 18:22 [I] New state: RUNNING
06/10/13 18:26 [I] Role frontend scaling up from 2 to 4 nodes
06/10/13 18:26 [I] New state: SCALING
1.4.6 Interaction with Individual VM Management
All the VMs created by a Service can be managed as regular VMs. When VMs are monitored in an unexpected state,
this is what OneFlow interprets:
VMs in a recoverable state (suspend, poweroff, etc.) are considered healthy machines. The user will
eventually decide to resume these VMs, so OneFlow will keep monitoring them. For the elasticity module,
these VMs are just like running VMs.
VMs in the final done state are cleaned from the Role. They do not appear in the nodes information table, and
the cardinality is updated to reflect the new number of VMs. This can be seen as a manual scale-down action.
VMs in unknown or failed are in an anomalous state, and the user must be notified. The Role and Service
are set to the WARNING state.
1.4.7 Examples
/*
Testing:
1) Update one VM template to contain ATT = 40 and the other VM with ATT = 60.
Average will be 50, true evaluation periods will not increase in CLI output.
2) Increase first VM ATT value to 45. True evaluations will increase each
10 seconds, the third time a new VM will be deployed.
3) True evaluations are reset. Since the new VM does not have ATT in its
template, the average will be still bigger than 50, and new VMs will be
deployed each 30s until the max of 5 is reached.
4) Update VM templates to trigger the scale down expression. The number of
VMs is adjusted -10 percent. Because 5 * 0.10 < 1, the adjustment is
rounded to 1; but the min_adjust_step is set to 2, so the final
adjustment is -2 VMs.
*/
{
"name": "Scalability1",
"deployment": "none",
"roles": [
{
"name": "frontend",
"cardinality": 2,
"vm_template": 0,
"min_vms" : 1,
"max_vms" : 5,
"elasticity_policies" : [
{
// +2 VMs when the exp. is true for 3 times in a row,
// separated by 10 seconds
"expression" : "ATT > 50",
"type" : "CHANGE",
"adjust" : 2,
"period_number" : 3,
"period" : 10
},
{
// -10 percent VMs when the exp. is true.
// If 10 percent is less than 2, -2 VMs.
"expression" : "ATT < 20",
"type" : "PERCENTAGE_CHANGE",
"adjust" : -10,
"min_adjust_step" : 2
}
]
}
]
}
{
"name": "Time_windows",
"deployment": "none",
"roles": [
{
"name": "frontend",
"cardinality": 1,
"vm_template": 0,
"min_vms" : 1,
"max_vms" : 15,
// These policies set the cardinality to:
// 6 from 9:00 to 13:00
// 10 from 13:00 to 22:30
// 2 from 22:30 to 09:00, and the weekend
"scheduled_policies" : [
{
"type" : "CARDINALITY",
"recurrence" : "0 9
* *
mon,tue,wed,thu,fri",
"adjust" : 6
},
{
"type" : "CARDINALITY",
"recurrence" : "0 13
* *
mon,tue,wed,thu,fri",
"adjust" : 10
},
{
"type" : "CARDINALITY",
"recurrence" : "30 22
* *
mon,tue,wed,thu,fri",
"adjust" : 2
}
]
}
]
}
CHAPTER
TWO
DATA CENTER FEDERATION
2.1 Data Center Federation
Several OpenNebula instances can be configured as a Federation. Each instance of the Federation is called a Zone,
and they are configured as one master and several slaves.
An OpenNebula Federation is a tightly coupled integration. All the instances will share the same user accounts, groups,
and permissions configuration. Of course, access can be restricted to certain Zones, and also to specific Clusters inside
that Zone.
The typical scenario for an OpenNebula Federation is a company with several Data Centers, distributed in different
geographic locations. This low-level integration does not rely on APIs; administrative employees of all Data Centers
will collaborate on the maintenance of the infrastructure. If your use case requires a synergy with an external cloud
infrastructure, that would fall into the cloud bursting scenario.
For the end users, a Federation allows them to use the resources allocated by the Federation Administrators no matter
where they are. The integration is seamless, meaning that a user logged into the Sunstone web interface of a Zone will
not have to log out and enter the address of the other Zone. Sunstone allows users to change the active Zone at any time,
and it will automatically redirect the requests to the right OpenNebula at the target Zone.
2.1.1 Architecture
In a Federation, there is a master OpenNebula zone and several slaves sharing the database tables for users, groups,
ACL rules, and zones. The master OpenNebula is the only one that writes in the shared tables, while the slaves keep
a read-only local copy, and proxy any writing actions to the master. This allows us to guarantee data consistency,
without any impact on the speed of read-only actions.
The synchronization is achieved by configuring MySQL to replicate certain tables only. MySQL's replication is able to
perform over long-distance or unstable connections. Even if the master zone crashes and takes a long time to reboot,
the slaves will be able to continue working normally, except for a few actions such as new user creation or password
updates.
New slaves can be added to an existing Federation at any moment. Moreover, the administrator can add a clean new
OpenNebula, or import an existing deployment into the Federation keeping the current users, groups, conguration,
and virtual resources.
Regarding the OpenNebula updates, we have designed the database in such a way that different OpenNebula versions
will be able to be part of the same Federation. While an upgrade of the local tables (VM, Image, VNet objects)
will be needed, new versions will keep compatibility with the shared tables. In practice, this means that when a new
OpenNebula version comes out each zone can be updated at a different pace, and the Federation will not be affected.
To enable users to change zones, the Sunstone server is connected to all the oned daemons in the Federation. You can
have one Sunstone for the whole Federation, or run one Sunstone for each Zone.
Regarding the administrator users, a Federation will have a unique oneadmin account. That is the Federation Administrator
account. In a trusted environment, each Zone Administrator will log in with an account in the oneadmin group.
In other scenarios, the Federation Administrator can create a special administrative group with total permissions for
one zone only.
The administrators can share appliances across Zones by deploying a private OpenNebula Marketplace.
2.1.2 Next Steps
Continue to the following guides to learn how to configure and manage a Federation:
Federation Configuration
Federation Management
2.2 OpenNebula Federation Conguration
This section will explain how to configure two (or more) OpenNebula zones to work as federation master and slave.
The process described here can be applied to new installations or to existing OpenNebula instances.
MySQL needs to be configured to enable master-slave replication. Please read the MySQL documentation for
your version for complete instructions. The required steps are summarized here, but it may happen that your MySQL
version needs a different configuration.
2.2.1 1. Configure the OpenNebula Federation Master
Start with an existing OpenNebula, or install OpenNebula as usual following the installation guide. For new
installations, you may need to create a MySQL user for OpenNebula; read more in the MySQL configuration
guide.
# mysql -u root -p
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'oneadmin';
Configure OpenNebula to use the master MySQL, and to act as a federation master.
# vi /etc/one/oned.conf
#DB = [ backend = "sqlite" ]
# Sample configuration for MySQL
DB = [ backend = "mysql",
server = "<ip>",
port = 0,
user = "oneadmin",
passwd = "oneadmin",
db_name = "opennebula" ]
FEDERATION = [
MODE = "MASTER",
ZONE_ID = 0,
MASTER_ONED = ""
]
Restart OpenNebula
Edit the local (master) Zone Endpoint. This can be done via Sunstone, or with the onezone command.
$ onezone update 0
ENDPOINT = http://<master-ip>:2633/RPC2
Create a Zone for each one of the slaves, and write down the new Zone ID. This can be done via Sunstone, or
with the onezone command.
$ vim /tmp/zone.tmpl
NAME = slave-name
ENDPOINT = http://<slave-ip>:2633/RPC2
$ onezone create /tmp/zone.tmpl
ID: 100
$ onezone list
ID NAME
0 OpenNebula
100 slave-name
Stop OpenNebula.
2.2.2 2. Import the Existing Slave OpenNebula
Note: If your slave OpenNebula is going to be installed from scratch, you can skip this step.
If the OpenNebula to be added as a Slave is an existing installation, and you need to preserve its database (users,
groups, VMs, hosts...), you need to import the contents with the onedb command.
Stop the slave OpenNebula. Make sure the master OpenNebula is also stopped.
Run the onedb import-slave command. Use -h to get an explanation of each option.
$ onedb import-slave -h
## USAGE
import-slave
Imports an existing federation slave into the federation master database
## OPTIONS
...
$ onedb import-slave -v \
--username oneadmin --password oneadmin \
--server 192.168.122.3 --dbname opennebula \
--slave-username oneadmin --slave-password oneadmin \
--slave-server 192.168.122.4 --slave-dbname opennebula
The tool will ask for the Zone ID you created in step 1.
Please enter the Zone ID that you created to represent the new Slave OpenNebula:
Zone ID:
You will also need to decide if the users and groups will be merged.
If you had different people using the master and slave OpenNebula instances, then choose not to merge users. In case
of a name collision, the slave account will be renamed to username-1.
You will want to merge if your users were accessing both the master and slave OpenNebula instances before the
federation. To put it more clearly, the same person had previous access to the alice user in the master and the alice user
in the slave. This will be the case if, for example, you had more than one OpenNebula instance pointing to the same
LDAP server for authentication.
When a user is merged, its user template is also copied, using the master contents in case of conflict. This means that
if alice had a different password or SSH_KEY in her master and slave OpenNebula users, only the one in master will
be preserved.
In any case, the ownership of existing resources and group membership is preserved.
The import process will move the users from the slave OpenNebula to the master
OpenNebula. In case of conflict, it can merge users with the same name.
For example:
+----------+-------------++------------+---------------+
| Master | Slave || With merge | Without merge |
+----------+-------------++------------+---------------+
| 5, alice | 2, alice || 5, alice | 5, alice |
| 6, bob | 5, bob || 6, bob | 6, bob |
| | || | 7, alice-1 |
| | || | 8, bob-1 |
+----------+-------------++------------+---------------+
In any case, the ownership of existing resources and group membership
is preserved.
Do you want to merge USERS (Y/N): y
Do you want to merge GROUPS (Y/N): y
When the import process finishes, onedb will write in /var/log/one/onedb-import.log the new user IDs
and names if they were renamed.
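For instance, you can review the result afterwards (assuming the default log location mentioned above):
$ cat /var/log/one/onedb-import.log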
2.2.3 3. Configure the MySQL Replication Master
In your master MySQL: enable the binary log for the opennebula database and set a server ID. Change the
opennebula database name to the one set in oned.conf.
# vi /etc/my.cnf
[mysqld]
log-bin = mysql-bin
server-id = 1
binlog-do-db = opennebula
# service mysqld restart
Master MySQL: You also need to create a special user that will be used by the MySQL replication slaves.
# mysql -u root -p
mysql> CREATE USER 'one-slave'@'%' IDENTIFIED BY 'one-slave-pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'one-slave'@'%';
Warning: In the previous example we are granting access to the user one-slave from any host. You may want
to restrict the hosts with the hostnames of the MySQL slaves.
Master MySQL: Lock the tables and perform a dump.
First you need to lock the tables before dumping the federated tables.
mysql> FLUSH TABLES WITH READ LOCK;
Then you can safely execute the mysqldump command in another terminal. Please note the --master-data
option, it must be present to allow the slaves to know the current position of the binary log.
mysqldump -u root -p --master-data opennebula user_pool group_pool zone_pool db_versioning acl > dump.sql
Once you get the dump you can unlock the DB tables again.
mysql> UNLOCK TABLES;
MySQL replication cannot use Unix socket files. You must be able to connect from the slaves to the master
MySQL server using TCP/IP and port 3306 (the default MySQL port). Please update your firewall accordingly.
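As an illustration, with iptables a rule similar to the following on the master would let a slave reach the MySQL port (a sketch; adapt the source address to your actual slave front-ends):
# iptables -A INPUT -p tcp -s <slave-ip> --dport 3306 -j ACCEPT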
You can start the master OpenNebula at this point.
2.2.4 4. Configure the MySQL Replication Slave
For each one of the slaves, configure the MySQL server as a replication slave. Pay attention to the server-id set in
my.cnf; it must be unique for each one.
Set a server ID for the slave MySQL, and configure these tables to be replicated. You may need to change
opennebula to the database name used in oned.conf. The database name must be the same for the master and
slave OpenNebulas.
# vi /etc/my.cnf
[mysqld]
server-id = 100
replicate-do-table = opennebula.user_pool
replicate-do-table = opennebula.group_pool
replicate-do-table = opennebula.zone_pool
replicate-do-table = opennebula.db_versioning
replicate-do-table = opennebula.acl
# service mysqld restart
Set the master conguration on the slave MySQL.
# mysql -u root -p
mysql> CHANGE MASTER TO
    -> MASTER_HOST='master_host_name',
    -> MASTER_USER='one-slave',
    -> MASTER_PASSWORD='one-slave-pass';
Copy the mysql dump file from the master, and import its contents to the slave.
mysql> CREATE DATABASE opennebula;
mysql> USE opennebula;
mysql> SOURCE /path/to/dump.sql;
Start the slave MySQL process and check its status.
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
The SHOW SLAVE STATUS output will provide detailed information, but to confirm that the slave is connected to
the master MySQL, take a look at these columns:
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
2.2.5 5. Configure the OpenNebula Federation Slave
For each slave, follow these steps.
If it is a new installation, install OpenNebula as usual following the installation guide.
Configure OpenNebula to use MySQL; first you'll need to create a database user for OpenNebula and grant
access to the OpenNebula database:
# mysql -u root -p
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'oneadmin';
and update oned.conf to use these values:
# vi /etc/one/oned.conf
#DB = [ backend = "sqlite" ]
# Sample configuration for MySQL
DB = [ backend = "mysql",
server = "<ip>",
port = 0,
user = "oneadmin",
passwd = "oneadmin",
db_name = "opennebula" ]
Configure OpenNebula to act as a federation slave. Remember to use the ID obtained when the zone was
created.
FEDERATION = [
MODE = "SLAVE",
ZONE_ID = 100,
MASTER_ONED = "http://<oned-master-ip>:2633/RPC2"
]
Copy the directory /var/lib/one/.one from the master front-end to the slave. This directory should
contain these files:
$ ls -1 /var/lib/one/.one
ec2_auth
occi_auth
one_auth
oneflow_auth
onegate_auth
sunstone_auth
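A minimal way to perform this copy, assuming passwordless SSH access from the master to the slave front-end:
$ scp -rp /var/lib/one/.one <slave-ip>:/var/lib/one/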
Make sure one_auth (the oneadmin credentials) is present. If it's not, copy it from the master oneadmin's
$HOME/.one to the slave oneadmin's $HOME/.one. For most configurations, oneadmin's home is
/var/lib/one and this won't be necessary.
Start the slave OpenNebula.
2.3 OpenNebula Federation Management
The administrator of a federation has the ability to add or remove Zones from the federation. See this guide for details
on how to configure the federation in both the master and the slave of the OpenNebula federation.
A user will have access to all the Zones where at least one of her groups has Resource Providers. This access can
be done through Sunstone or through the CLI.
2.3.1 Adding a Zone
Adding a Zone through the CLI entails the creation of a Zone template.
Parameter  Description
Name       Name of the new Zone
Endpoint   XMLRPC endpoint of the OpenNebula front-end
# vi zone.tmpl
NAME = ZoneB
ENDPOINT = http://zoneb.opennebula.front-end.server:2633/RPC2
This same operation can be performed through Sunstone (Zone tab -> Create).
Warning: The ENDPOINT has to be reachable from the Sunstone server machine, or the computer running the
CLI in order for the user to access the Zone.
2.3.2 Using a Zone
Through Sunstone
In the upper right corner of the Sunstone page, users will see a house icon next to the name of the Zone they are currently
using. If the user clicks on that, she will get a dropdown with all the Zones she has access to. Clicking on any of the
Zones in the dropdown will take the user to that Zone.
What's happening behind the scenes is that the Sunstone server you are connecting to redirects its requests to the
OpenNebula oned process present in the other Zone. In the example above, if the user clicks on ZoneB, Sunstone will
contact the OpenNebula listening at http://zoneb.opennebula.front-end.server:2633/RPC2.
Through CLI
Users can switch Zones through the command line using the onezone command. The following session can be examined
to understand Zone management through the CLI.
$ onezone list
C   ID NAME       ENDPOINT
*    0 OpenNebula http://localhost:2633/RPC2
   104 ZoneB      http://ultron.c12g.com:2634/RPC2
We can see in the above command output that the user has access to both OpenNebula and ZoneB, and is
currently in the OpenNebula Zone. The active Zone can be changed using the set command of onezone:
$ onezone set 104
Endpoint changed to "http://ultron.c12g.com:2634/RPC2" in /home/<username>/.one/one_endpoint
$ onezone list
C   ID NAME       ENDPOINT
     0 OpenNebula http://localhost:2633/RPC2
*  104 ZoneB      http://ultron.c12g.com:2634/RPC2
All subsequent CLI commands executed will connect to the OpenNebula listening at
http://ultron.c12g.com:2634/RPC2.
CHAPTER THREE
SCALABILITY
3.1 Configuring Sunstone for Large Deployments
Low to medium enterprise clouds will typically deploy Sunstone in a single machine along with the OpenNebula
daemons. However, this simple deployment can be improved by:
Isolating the access from Web clients to the Sunstone server. This can be achieved by deploying the Sunstone
server in a separate machine.
Improving the scalability of the server for large user pools, usually by deploying Sunstone in a separate application
container in one or more hosts.
Check also the API scalability guide, as some of those tips also have an impact on Sunstone performance.
3.1.1 Deploying Sunstone in a Different Machine
By default the Sunstone server is configured to run in the frontend, but you are able to install the Sunstone server in a
machine different from the frontend.
You will need to install only the sunstone server packages in the machine that will be running the server. If you
are installing from source use the -s option for the install.sh script.
Make sure the :one_xmlrpc: variable in sunstone-server.conf points to the place where the OpenNebula
frontend is running. You can also leave it undefined and export the ONE_XMLRPC environment variable.
Provide the serveradmin credentials in the file /var/lib/one/.one/sunstone_auth. If you
changed the serveradmin password please check the Cloud Servers Authentication guide.
$ cat /var/lib/one/.one/sunstone_auth
serveradmin:1612b78a4843647a4b541346f678f9e1b43bbcf9
Using this setup the Virtual Machine logs will not be available. If you need to retrieve this information you must deploy
the server in the frontend.
3.1.2 Running Sunstone Inside Another Webserver
Self-contained deployment of Sunstone (using the sunstone-server script) is fine for small to medium installations.
This is no longer true when the service has lots of concurrent users and the number of objects in the system is high
(for example, more than 2000 simultaneous virtual machines).
The Sunstone server was modified to be able to run as a rack server. This makes it suitable to run in any web server
that supports this protocol. In the Ruby world this is the standard supported by most web servers. We can now select
web servers that support spawning multiple processes, like unicorn, or embed the service inside the apache or nginx
web servers using the Passenger module. Another benefit is the ability to run Sunstone in several servers and
balance the load between them.
Configuring memcached
When using one of these web servers the use of a memcached server is necessary. Sunstone needs to store user
sessions so it does not ask for user/password for every action. By default Sunstone is configured to use memory
sessions, that is, the sessions are stored in the process memory. Thin and webrick web servers do not spawn new
processes but new threads, and all of them have access to that session pool. When using more than one process to serve
Sunstone there must be a service that stores this information and can be accessed by all the processes. In this case we
will need to install memcached. It comes with most distributions and its default configuration should be ok. We will
also need to install the ruby libraries to be able to access it. The rubygem library needed is memcache-client. If there
is no package for your distribution with this ruby library you can install it using rubygems:
$ sudo gem install memcache-client
Then you will have to change in the Sunstone configuration (/etc/one/sunstone-server.conf) the value of
:sessions to memcache.
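For example, the session-related options in sunstone-server.conf would look similar to this (a sketch; the :memcache_host and :memcache_port values shown are the usual memcached defaults):
:sessions: 'memcache'
:memcache_host: 'localhost'
:memcache_port: 11211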
If you want to use noVNC you need to have it running. You can start this service with the command:
$ novnc-server start
Another thing you have to take into account is the user the server will run as. The installation sets the permissions
for the oneadmin user and group, and files like the Sunstone configuration and credentials cannot be read by other
users. Apache usually runs as the www-data user and group, so to let the server run as this user the group of these files
must be changed, for example:
$ chgrp www-data /etc/one/sunstone-server.conf
$ chgrp www-data /etc/one/sunstone-plugins.yaml
$ chgrp www-data /var/lib/one/.one/sunstone_auth
$ chmod a+x /var/lib/one
$ chmod a+x /var/lib/one/.one
$ chgrp www-data /var/log/one/sunstone*
$ chmod g+w /var/log/one/sunstone*
We advise using Passenger in your installation, but we will show how to run Sunstone inside the unicorn web server
as an example.
For more information on web servers that support rack and more information about it you can check the rack docu-
mentation page. You can alternatively check a list of ruby web servers.
Running Sunstone with Unicorn
To get more information about this web server you can go to its web page. It is a multi process web server that spawns
new processes to deal with requests.
The installation is done using rubygems (or with your package manager if it is available):
$ sudo gem install unicorn
In the directory where the Sunstone files reside (/usr/lib/one/sunstone or
/usr/share/opennebula/sunstone) there is a file called config.ru. This file is specific to rack
applications and tells how to run the application. To start a new server using unicorn you can run this command
from that directory:
$ unicorn -p 9869
The default unicorn configuration should be ok for most installations, but a configuration file can be created to tune it. For
example, to tell unicorn to spawn 4 processes and write stderr to /tmp/unicorn.log we can create a file called
unicorn.conf that contains:
worker_processes 4
logger debug
stderr_path /tmp/unicorn.log
and start the server and daemonize it using:
$ unicorn -d -p 9869 -c unicorn.conf
You can find more information about the configuration options in the unicorn documentation.
Running Sunstone with Passenger in Apache
Phusion Passenger is a module for the Apache and Nginx web servers that runs ruby rack applications. It can be used
to run the Sunstone server and will manage its whole life cycle. If you are already using one of these servers, or just feel
comfortable with one of them, we encourage you to use this method. This kind of deployment adds better concurrency
and lets us add an https endpoint.
We will provide the instructions for Apache web server but the steps will be similar for nginx following Passenger
documentation.
The first thing you have to do is install Phusion Passenger. For this you can use pre-made packages for your distribution
or follow the installation instructions from their web page. The installation is self-explanatory and will guide you through
the whole process; follow it and you will be ready to run Sunstone.
The next thing we have to do is configure the virtual host that will run our Sunstone server. We have to point to the
public directory of the Sunstone installation; here is an example:
<VirtualHost *:80>
ServerName sunstone-server
PassengerUser oneadmin
# !!! Be sure to point DocumentRoot to public!
DocumentRoot /usr/lib/one/sunstone/public
<Directory /usr/lib/one/sunstone/public>
# This relaxes Apache security settings.
AllowOverride all
# MultiViews must be turned off.
Options -MultiViews
</Directory>
</VirtualHost>
Now the configuration should be ready; restart (or reload the apache configuration) to start the application, and point to the
virtual host to check if everything is running.
Running Sunstone in Multiple Servers
You can run Sunstone in several servers and use a load balancer that connects to them. Make sure you are using
memcache for sessions and both Sunstone servers connect to the same memcached server. To do this change the
parameter :memcache_host in the configuration file. Also make sure that both Sunstone instances connect to the
same OpenNebula server.
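As an illustration, a very simple load balancer in front of two Sunstone instances could be sketched with an nginx configuration like the following (hypothetical host names; any HTTP load balancer can play this role, and the blocks go in the http context of nginx.conf):
upstream sunstone {
    server sunstone1:9869;
    server sunstone2:9869;
}
server {
    listen 80;
    location / {
        proxy_pass http://sunstone;
    }
}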
3.2 Configuring OpenNebula for Large Deployments
3.2.1 Monitoring
OpenNebula supports two native monitoring systems: ssh-pull and udp-push. The former, ssh-pull, is
the default monitoring system for OpenNebula <= 4.2; from OpenNebula 4.4 onwards, the default monitoring
system is udp-push. This model is highly scalable and its limit (in terms of number of VMs monitored
per second) is bounded by the performance of the server running oned and the database server. Our scalability testing
achieves the monitoring of tens of thousands of VMs in a few minutes.
Read more in the Monitoring guide.
3.2.2 Core Tuning
OpenNebula keeps the monitoring history for a defined time in a database table. These values are then used to
draw the plots in Sunstone.
These monitoring entries can take quite a bit of storage in your database. The amount of storage used will depend
on the size of your cloud, and the following configuration attributes in oned.conf:
MONITORING_INTERVAL (VMware only): Time in seconds between each monitoring cycle. Default: 60.
collectd IM_MAD -i argument (KVM & Xen only): Time in seconds of the monitoring push cycle. Default:
20.
HOST_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire host monitoring information. Default:
12h.
VM_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire VM monitoring information. Default: 4h.
If you don't use Sunstone, you may want to disable the monitoring history by setting both expiration times to 0.
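For example, in oned.conf:
HOST_MONITORING_EXPIRATION_TIME = 0
VM_MONITORING_EXPIRATION_TIME = 0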
Each monitoring entry will be around 2 KB for each Host, and 4 KB for each VM. To give you an idea of how much
database storage you will need to prepare, here are some examples:
# Hosts  Storage
20s                  12h              200      850 MB
20s                  24h              1000     8.2 GB

Monitoring interval  VM expiration  # VMs  Storage
20s                  4h             2000   1.8 GB
20s                  24h            10000  7 GB
3.2.3 API Tuning
For large deployments with lots of XML-RPC calls the default values for the XML-RPC server are too conservative. The
values you can modify and their meaning are explained in the oned.conf guide and the xmlrpc-c library documentation.
From our experience these values improve the server behaviour with a high amount of client calls:
MAX_CONN = 240
MAX_CONN_BACKLOG = 480
The OpenNebula Cloud API (OCA) is able to use the Ox library for XML parsing. This library makes the parsing of
pools much faster. It is used by both the CLI and Sunstone, so both will benefit from it.
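If the library is not packaged for your distribution, it can be installed with rubygems (assuming a standard Ruby setup):
$ sudo gem install ox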
The core is able to paginate some pool answers. This decreases memory consumption and in some cases makes
parsing faster. By default the pagination value is 2000 objects but it can be changed using the environment variable
ONE_POOL_PAGE_SIZE. It should be bigger than 2. For example, to list VMs with a page size of 5000 we can use:
$ ONE_POOL_PAGE_SIZE=5000 onevm list
To disable pagination we can use a non numeric value:
$ ONE_POOL_PAGE_SIZE=disabled onevm list
This environment variable can also be used for Sunstone.
3.2.4 Driver Tuning
OpenNebula drivers have 15 threads by default. This is the maximum number of actions a driver can perform at the
same time; subsequent actions will be queued. You can change this value in oned.conf; the driver parameter is -t.
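For example, to let the KVM virtualization driver run 30 simultaneous actions you could raise its -t flag in oned.conf (a sketch based on the stock KVM driver entry; adapt it to the drivers you actually use):
VM_MAD = [
    name       = "kvm",
    executable = "one_vmm_exec",
    arguments  = "-t 30 -r 0 kvm",
    default    = "vmm_exec/vmm_exec_kvm.conf",
    type       = "kvm" ]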
3.2.5 Database Tuning
For non-test installations use the MySQL database; sqlite is too slow for more than a couple of hosts and a few VMs.
3.2.6 Sunstone Tuning
Please refer to the guide about Configuring Sunstone for Large Deployments.
CHAPTER FOUR
HIGH AVAILABILITY
4.1 Virtual Machines High Availability
OpenNebula delivers the availability required by most applications running in virtual machines. This guide's objective
is to provide information in order to prepare for failures in the virtual machines or physical nodes, and recover from
them. These failures are categorized depending on whether they come from the physical infrastructure (Host failures)
or from the virtualized infrastructure (VM crashes). In both scenarios, OpenNebula provides a cost-effective failover
solution to minimize downtime from server and OS failures.
If you are interested in setting up a highly available cluster for OpenNebula, check the OpenNebula High Availability
Guide.
4.1.1 Host Failures
When OpenNebula detects that a host is down, a hook can be triggered to deal with the situation. OpenNebula comes
with a script out-of-the-box that can act as a hook to be triggered when a host enters the ERROR state. This can be very
useful to limit the downtime of a service due to a hardware failure, since it can redeploy the VMs on another host.
Let's see how to configure /etc/one/oned.conf to set up this Host hook, to be triggered in the ERROR state.
The following should be uncommented in the mentioned configuration file:
#-------------------------------------------------------------------------------
HOST_HOOK = [
name = "error",
on = "ERROR",
command = "host_error.rb",
arguments = "$HID -r",
remote = no ]
#-------------------------------------------------------------------------------
We are defining a host hook, named error, that will execute the script host_error.rb locally with the following
arguments:
Argument        Description
Host ID         ID of the host containing the VMs to treat. It is compulsory and better left to $HID, which
                will be automatically filled by OpenNebula with the Host ID of the host that went down.
Action          This defines the action to be performed on the VMs that were running in the host that went
                down. This can be -r (recreate) or -d (delete).
ForceSuspended  [-f] force resubmission of suspended VMs.
AvoidTransient  [-p <n>] avoid resubmission if the host comes back after <n> monitoring cycles.
More information on hooks here.
Additionally, there is a corner case that should be taken into account in critical production environments. OpenNebula
has also become tolerant to network errors (up to a limit). This means that a spurious network error won't trigger the
hook. But if the network error stretches in time, the hook may be triggered and the VMs deleted and recreated. When
(and if) the network comes back, there will be a potential clash between the old and the reincarnated VMs. In order to
prevent this, a script can be placed in the cron of every host that will detect the network error and shut down the host
completely (or delete the VMs).
4.1.2 Virtual Machine Failures
Virtual Machine lifecycle management can fail at several points. The following two cases should cover them:
VM fails: This may be due to a network error that prevents the image from being staged into the node, a hypervisor-
related issue, a migration problem, etc. The common symptom is that the VM enters the FAILED state. In
order to deal with these errors, a Virtual Machine hook can be set to recreate the failed VM (or, depending on
the production scenario, delete it). This can be achieved by uncommenting the following (for recreating; the
deletion hook is also present in the same file) in /etc/one/oned.conf (and restarting oned):
#-------------------------------------------------------------------------------
VM_HOOK = [
name = "on_failure_recreate",
on = "FAILURE",
command = "onevm delete --recreate",
arguments = "$VMID" ]
#-------------------------------------------------------------------------------
VM crash: This point is concerned with crashes that can happen to a VM after it has been successfully booted
(note that here boot doesn't refer to the actual VM boot process, but to the OpenNebula boot process, which
comprises staging and hypervisor deployment). OpenNebula is able to detect such crashes, and reports them as the
VM being in an UNKNOWN state. This failure can be recovered from using the onevm boot functionality.
4.2 OpenNebula High Availability
This guide walks you through the process of setting up a highly available cluster for OpenNebula. The ultimate goal is to
reduce downtime of the core OpenNebula services: core (oned), scheduler (mm_sched) and the Sunstone interface
(sunstone-server).
We will be using the classical active-passive cluster architecture which is the recommended solution for OpenNebula.
In this solution two (or more) nodes will be part of a cluster where the OpenNebula daemon, scheduler and Sunstone
(web UI) are cluster resources. When the active node fails, the passive one takes control.
If you are interested in failover protection against hardware and operating system outages within your virtualized IT
environment, check the Virtual Machines High Availability Guide.
This guide is structured as a how-to using the Red Hat HA Cluster suite tested on a CentOS installation, but generic
considerations and requirements for this setup are discussed so the solution can easily be implemented with other systems.
4.2.1 Overview
In terms of high availability, OpenNebula consists of three different basic services, namely:
OpenNebula Core: It is the main orchestration component. It supervises the life-cycle of each resource (e.g.
hosts, VMs or networks) and operates on the physical infrastructure to deploy and manage virtualized resources.
Scheduler: The scheduler performs a matching between the virtual requests and the available resources using
different scheduling policies. It basically assigns a physical host and a storage area to each VM.
Sunstone: The GUI for advanced and cloud users as well as system administrators. The GUI is accessed through
a well-known end-point (IP/URL). Sunstone has been architected as a scalable web application supporting
multiple application servers or processes.
The state of the system is stored in a database for persistency and managed by the OpenNebula core. In order to improve
the response time of the core daemon, it caches the most recently used data to reduce the number of queries to the
DB. Note that this prevents an active-active HA configuration for OpenNebula. However, given the lightweight and
negligible start times of the core services, such a configuration would not provide any real advantage.
In this guide we assume that the DB backing the OpenNebula core state is also configured in HA mode. The procedure
for MySQL is well documented elsewhere. Although SQLite could also be used, it is not recommended for an HA
deployment.
4.2.2 HA Cluster Components & Services
As shown in the previous figure, we will use just one fail-over domain (blue) with two hosts. All OpenNebula services
will be collocated and run on the same server in this case. You can however easily modify this configuration to split
them, allocate each service to a different host and define different fail-over domains for each one (e.g. blue for
oned and scheduler, red for sunstone).
The following components will be installed and configured based on the Red Hat Cluster suite:
* Cluster management, CMAN (cluster manager) and corosync. These components manage cluster membership and
quorum. They prevent service corruption in a distributed setting caused by a split-brain condition (e.g. two opennebulas
updating the DB).
* Cluster configuration system, CCS. It keeps and synchronizes the cluster configuration information. There are
other windows-based configuration systems.
* Service management, rgmanager. This module checks service status and starts/stops services as needed in case of
failure.
* Fencing: in order to prevent OpenNebula DB corruption it is important to configure a suitable fencing mechanism.
4.2.3 Installation and Configuration
In the following, we assume that the cluster consists of two servers:
one-server1
one-server2
Warning: While setting up and testing the installation it is recommended to disable any firewall. Also watch out for
SELinux.
Step 1: OpenNebula
You should have two servers (they may be VMs, as discussed below) ready to install OpenNebula. These servers will
have the same requirements as a regular OpenNebula front-end (e.g. network connection to hosts, passwordless ssh
access, shared filesystems if required...). Remember to use an HA MySQL backend.
It is important to use a twin installation (i.e. same configuration files), so it is probably better to start and configure one
server, and once it is tested rsync the configuration to the other one.
Step 2: Install Cluster Software
In all the cluster servers install the cluster components:
# yum install ricci
# passwd ricci
Warning: Set the same password for user ricci in all the servers
# yum install cman rgmanager
# yum install ccs
Finally enable the daemons and start ricci.
# chkconfig ricci on
# chkconfig cman on
# chkconfig rgmanager on
# service ricci start
Step 3: Create the Cluster and Failover Domain
Cluster configuration is stored in the /etc/cluster/cluster.conf file. You can either edit this file directly or use
the ccs tool. It is important, however, to synchronize and activate the configuration on all nodes after a change.
To define the cluster using ccs:
# ccs -h one-server1 --createcluster opennebula
# ccs -h one-server1 --setcman two_node=1 expected_votes=1
# ccs -h one-server1 --addnode one-server1
# ccs -h one-server1 --addnode one-server2
# ccs -h one-server1 --startall
Warning: You can use the -p option in the previous commands with the password set for user ricci.
Now you should have a cluster with two nodes (note the specific quorum options for cman) up and running. Let's create one
failover domain for the OpenNebula services consisting of both servers:
# ccs -h one-server1 --addfailoverdomain opennebula ordered
# ccs -h one-server1 --addfailoverdomainnode opennebula one-server1 1
# ccs -h one-server1 --addfailoverdomainnode opennebula one-server2 2
# ccs -h one-server1 --sync --activate
Step 4: Define the OpenNebula Service
As pointed out previously, we'll use just one fail-over domain with all the OpenNebula services co-allocated in the
same server. You can easily split the services across different servers and failover domains if needed (e.g. for security
reasons you may want Sunstone in another server).
First create the resources of the service: an IP address to reach Sunstone, the one init.d script (it starts oned and the
scheduler) and the sunstone init.d script:
# ccs --addresource ip address=10.10.11.12 sleeptime=10 monitor_link=1
# ccs --addresource script name=opennebula file=/etc/init.d/opennebula
# ccs --addresource script name=sunstone file=/etc/init.d/opennebula-sunstone
Finally compose the service with these resources and start it:
# ccs --addservice opennebula domain=opennebula recovery=restart autostart=1
# ccs --addsubservice opennebula ip ref=10.10.11.12
# ccs --addsubservice opennebula script ref=opennebula
# ccs --addsubservice opennebula script ref=sunstone
# ccs -h one-server1 --sync --activate
As a reference, the /etc/cluster/cluster.conf file should look like:
<?xml version="1.0"?>
<cluster config_version="17" name="opennebula">
<fence_daemon/>
<clusternodes>
<clusternode name="one-server1" nodeid="1"/>
<clusternode name="one-server2" nodeid="2"/>
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<fencedevices/>
<rm>
<failoverdomains>
<failoverdomain name="opennebula" nofailback="0" ordered="1" restricted="0">
<failoverdomainnode name="one-server1" priority="1"/>
<failoverdomainnode name="one-server2" priority="2"/>
</failoverdomain>
</failoverdomains>
<resources>
<ip address="10.10.11.12" sleeptime="10"/>
<script file="/etc/init.d/opennebula" name="opennebula"/>
<script file="/etc/init.d/opennebula-sunstone" name="sunstone"/>
</resources>
<service domain="opennebula" name="opennebula" recovery="restart">
<ip ref="10.10.11.12"/>
<script ref="opennebula"/>
<script ref="sunstone"/>
</service>
</rm>
</cluster>
4.2.4 Fencing and Virtual Clusters
Fencing is an essential component when setting up an HA cluster. You should install and test a proper fencing device
before moving to production. In this section we show how to set up a special fencing device for virtual machines.
OpenNebula can be (and usually is) installed in a virtual machine. Therefore the previous one-server1 and one-
server2 can in fact be virtual machines running in the same physical host (you can run them in different hosts, requiring
a different fencing plugin).
In this case, with a virtual HA cluster running in the same host, you can control misbehaving VMs and restart OpenNebula
in another virtual server. However, if you also need to handle host failures, you need a fencing mechanism for the
physical host (typically based on power).
Let's assume then that one-server1 and one-server2 are VMs using KVM and libvirt, running on a physical server.
Step 1: Configuration of the Physical Server
Install the fence agents:
yum install fence-virt fence-virtd fence-virtd-multicast fence-virtd-libvirt
Now we need to generate a random key for the virtual servers to communicate with the fencing agent in the physical
server. You can use any convenient method, for example:
# mkdir /etc/cluster
# date +%s | sha256sum | base64 | head -c 32 > /etc/cluster/fence_xvm.key
# chmod 400 /etc/cluster/fence_xvm.key
Finally configure the fence-virtd agent:
# fence-virtd -c
The configuration file should be similar to:
=== Begin Configuration ===
backends {
libvirt {
uri = "qemu:///system";
}
}
listeners {
multicast {
interface = "eth0";
port = "1229";
family = "ipv4";
address = "225.0.0.12";
key_file = "/etc/cluster/fence_xvm.key";
}
}
fence_virtd {
module_path = "/usr/lib64/fence-virt";
backend = "libvirt";
listener = "multicast";
}
=== End Configuration ===
Warning: The interface (eth0 in the example) is the one used for communication between the virtual and physical servers.
Now you can start and test the fencing agent:
# chkconfig fence_virtd on
# service fence_virtd start
# fence_xvm -o list
Step 2: Configuration of the Virtual Servers
You need to copy the key to each virtual server:
scp /etc/cluster/fence_xvm.key one-server1:/etc/cluster/
scp /etc/cluster/fence_xvm.key one-server2:/etc/cluster/
Now you should be able to test the fencing agent in the virtual nodes:
# fence_xvm -o list
Step 3: Configure the Cluster to Use Fencing
Finally we need to add the fencing device to the cluster:
ccs --addfencedev libvirt-kvm agent=fence_xvm key_file="/etc/cluster/fence_xvm.key" multicast_address="225.0.0.12" ipport="1229"
And let the servers use it:
# ccs --addmethod libvirt-kvm one-server1
# ccs --addmethod libvirt-kvm one-server2
# ccs --addfenceinst libvirt-kvm one-server1 libvirt-kvm port=one1
# ccs --addfenceinst libvirt-kvm one-server2 libvirt-kvm port=one2
Finally synchronize and activate the configuration:
# ccs -h one-server1 --sync --activate
4.2.5 What to Do After a Fail-over Event
When the active node fails and the passive one takes control, it will start OpenNebula again. This OpenNebula will
see the resources in the exact same way as the one in the server that crashed. However, there will be a set of Virtual
Machines stuck in transient states. For example, when a Virtual Machine is deployed and it starts copying
the disks to the target hosts it enters one of these transient states (in this case PROLOG). OpenNebula will wait for
the storage driver to return the PROLOG exit status. This will never happen, since the driver failed during the crash,
so the Virtual Machine will get stuck in that state.
In these cases it's important to review the states of all the Virtual Machines and let OpenNebula know whether the driver
exited successfully or not. There is a specific command for this: onevm recover. You can read more about this
command in the Managing Virtual Machines guide.
In our example we would need to manually check if the disk files have been properly deployed to our host and execute:
$ onevm recover <id> --success # or --failure
The transient states to watch out for are:
BOOT
CLEAN
EPILOG
FAIL
HOTPLUG
MIGRATE
PROLOG
SAVE
SHUTDOWN
SNAPSHOT
UNKNOWN
CHAPTER FIVE
CLOUD BURSTING
5.1 Cloud Bursting
Cloud bursting is a model in which the local resources of a Private Cloud are combined with resources from remote
Cloud providers. The remote provider could be a commercial Cloud service, such as Amazon EC2, or a partner
infrastructure running a different OpenNebula instance. Such support for cloud bursting enables highly scalable hosting
environments.
As you may know, OpenNebula's approach to cloud bursting is quite unique. The reason behind this uniqueness is
its transparency, to both end users and cloud administrators, when using and maintaining the cloud bursting functionality. The
transparency to cloud administrators comes from the fact that an AWS EC2 region is modelled as any other host
(albeit of potentially much bigger capacity), so the scheduler can place VMs in EC2 as it would in any other local
host.
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
2 kvm- - 0 0 / 800 (0%) 0K / 16G (0%) on
3 kvm-1 - 0 0 / 100 (0%) 0K / 1.8G (0%) on
4 us-east-1 ec2 0 0 / 500 (0%) 0K / 8.5G (0%) on
On the other hand, the transparency to end users is offered through the hybrid template functionality: the same VM
template in OpenNebula can describe the VM if it is deployed locally and also if it gets deployed in Amazon EC2.
So users just have to instantiate the template, and OpenNebula will transparently choose if that is executed locally or
remotely. A simple template like the following is enough to launch Virtual Machines in Amazon EC2:
NAME=ec2template
CPU=1
MEMORY=1700
EC2=[
AMI="ami-6f5f1206",
BLOCKDEVICEMAPPING="/dev/sdh=:20",
INSTANCETYPE="m1.small",
KEYPAIR="gsg-keypair" ]
SCHED_REQUIREMENTS="PUBLIC_CLOUD=YES"
$ onetemplate create ec2template.one
ID: 112
$ onetemplate instantiate 112
VM ID: 234
For more information on how to configure an Amazon EC2 host see the following guide:
Amazon EC2 driver
5.2 Amazon EC2 Driver
5.2.1 Considerations & Limitations
You should take into account the following technical considerations when using the EC2 cloud with OpenNebula:
There is no direct access to the dom0, so it cannot be monitored (we don't know where the VM is running on
the EC2 cloud).
The usual OpenNebula functionality for snapshotting, hot-plugging, or migration is not available with EC2.
By default OpenNebula will always launch m1.small instances, unless otherwise specified.
Please refer to the EC2 documentation to obtain more information about Amazon instance types and image
management:
General information of instances
5.2.2 Prerequisites
You must have a working account for AWS and sign up for the EC2 and S3 services.
5.2.3 OpenNebula Configuration
Uncomment the EC2 IM and VMM drivers in the /etc/one/oned.conf file in order to use the driver.
IM_MAD = [
name = "ec2",
executable = "one_im_sh",
arguments = "-c -t 1 -r 0 ec2" ]
VM_MAD = [
name = "ec2",
executable = "one_vmm_sh",
arguments = "-t 15 -r 0 ec2",
type = "xml" ]
Driver flags are the same as for other drivers:
FLAG  SETS
-t    Number of threads
-r    Number of retries
Additionally you must define the AWS credentials and AWS region to be used, and the maximum capacity that you
want OpenNebula to deploy on EC2. For this, edit the file /etc/one/ec2_driver.conf:
regions:
default:
region_name: us-east-1
access_key_id: YOUR_ACCESS_KEY
secret_access_key: YOUR_SECRET_ACCESS_KEY
capacity:
m1.small: 5
m1.large: 0
m1.xlarge: 0
After OpenNebula is restarted, create a new Host that uses the ec2 drivers:
$ onehost create ec2 --im ec2 --vm ec2 --net dummy
5.2.4 EC2 Specific Template Attributes
In order to deploy an instance in EC2 through OpenNebula you must include an EC2 section in the virtual machine
template. This is an example of a virtual machine template that can be deployed in our local resources or in EC2.
CPU = 0.5
MEMORY = 128
# Xen or KVM template machine, this will be used when submitting this VM to local resources
DISK = [ IMAGE_ID = 3 ]
NIC = [ NETWORK_ID = 7 ]
# EC2 template machine, this will be used when submitting this VM to EC2
EC2 = [ AMI="ami-00bafcb5",
KEYPAIR="gsg-keypair",
INSTANCETYPE=m1.small]
#Add this if you want to use only EC2 cloud
#SCHED_REQUIREMENTS = HOSTNAME = "ec2"
These are the attributes that can be used in the EC2 section of the template:
ATTRIBUTE           DESCRIPTION
AMI                 Unique ID of a machine image, returned by a call to ec2-describe-images.
AKI                 The ID of the kernel with which to launch the instance.
CLIENTTOKEN         Unique, case-sensitive identifier you provide to ensure idempotency of the request.
INSTANCETYPE        Specifies the instance type.
KEYPAIR             The name of the key pair, later used to execute commands like ssh -i id_keypair or
                    scp -i id_keypair.
LICENSEPOOL         license-pool
BLOCKDEVICEMAPPING  The block device mapping for the instance. More than one can be specified in a
                    space-separated list. Check the block-device-mapping option of the EC2 CLI
                    Reference for the syntax.
PLACEMENTGROUP      Name of the placement group.
PRIVATEIP           If you're using Amazon Virtual Private Cloud, you can optionally use this parameter
                    to assign the instance a specific available IP address from the subnet.
RAMDISK             The ID of the RAM disk to select.
SUBNETID            If you're using Amazon Virtual Private Cloud, this specifies the ID of the subnet you
                    want to launch the instance into. This parameter is also passed to the command
                    ec2-associate-address -i i-0041230 -a elasticip.
TENANCY             The tenancy of the instance you want to launch.
USERDATA            Specifies Base64-encoded MIME user data to be made available to the instance(s) in
                    this reservation.
SECURITYGROUPS      Name of the security group. You can specify more than one security group (comma
                    separated).
ELASTICIP           EC2 Elastic IP address to assign to the instance. This parameter is passed to the
                    command ec2-associate-address -i i-0041230 elasticip.
TAGS                Key and optional value of the tag, separated by an equals sign (=). You can specify
                    more than one tag (comma separated).
AVAILABILITYZONE    The Availability Zone in which to run the instance.
HOST                Defines which OpenNebula host will use this template.
EBS_OPTIMIZED       Obtain a better I/O throughput for VMs with EBS provisioned volumes.
Default values for all these attributes can be defined in the /etc/one/ec2_driver.default file.
<!--
Default configuration attributes for the EC2 driver
(all domains will use these values as defaults)
Valid attributes are: AKI AMI CLIENTTOKEN INSTANCETYPE KEYPAIR LICENSEPOOL
PLACEMENTGROUP PRIVATEIP RAMDISK SUBNETID TENANCY USERDATA SECURITYGROUPS
AVAILABILITYZONE EBS_OPTIMIZED ELASTICIP TAGS
Use XML syntax to specify defaults, note elements are UPCASE
Example:
<TEMPLATE>
<EC2>
<KEYPAIR>gsg-keypair</KEYPAIR>
<INSTANCETYPE>m1.small</INSTANCETYPE>
</EC2>
</TEMPLATE>
-->
<TEMPLATE>
<EC2>
<INSTANCETYPE>m1.small</INSTANCETYPE>
</EC2>
</TEMPLATE>
5.2.5 Multi EC2 Site/Region/Account Support
It is possible to define various EC2 hosts to allow OpenNebula to manage different EC2 regions or different EC2
accounts.
When you create a new host, the credentials and endpoint for that host are retrieved from the
/etc/one/ec2_driver.conf file using the host name. Therefore, if you want to add a new host to manage
a different region, i.e. eu-west-1, just add your credentials and the capacity limits to the eu-west-1 section
in the conf file, and specify that name (eu-west-1) when creating the new host.
regions:
...
eu-west-1:
region_name: eu-west-1
access_key_id: YOUR_ACCESS_KEY
secret_access_key: YOUR_SECRET_ACCESS_KEY
capacity:
m1.small: 5
m1.large: 0
m1.xlarge: 0
After that, create a new Host with the eu-west-1 name:
$ onehost create eu-west-1 --im ec2 --vm ec2 --net dummy
If the Host name does not match any regions key, the default will be used.
You can define a different EC2 section in your template for each EC2 host, so with one template you can define
different AMIs depending on which host it is scheduled to; just include a HOST attribute in each EC2 section:
EC2 = [ HOST="ec2",
AMI="ami-0022c769" ]
EC2 = [ HOST="eu-west-1",
AMI="ami-03324cc9" ]
You will have ami-0022c769 launched when this VM template is sent to host ec2 and ami-03324cc9 whenever the VM
template is sent to host eu-west-1.
Warning: If only one EC2 site is defined, the EC2 driver will deploy all EC2 templates onto it, not paying
attention to the HOST attribute.
The availability zone inside a region can be specified using the AVAILABILITYZONE attribute in the EC2 section
of the template.
5.2.6 Hybrid VM Templates
A powerful use of cloud bursting in OpenNebula is the ability to use hybrid templates, defining a VM if OpenNebula
decides to launch it locally, and also defining it if it is going to be outsourced to Amazon EC2. The idea behind this is
to reference the same kind of VM even if it is incarnated by different images (the local image and the remote AMI).
An example of a hybrid template:
## Local Template section
NAME=MyWebServer
CPU=1
MEMORY=256
DISK=[IMAGE="nginx-golden"]
NIC=[NETWORK="public"]
EC2=[
AMI="ami-xxxxx" ]
OpenNebula will use the first portion (from NAME to NIC) of the above template when the VM is scheduled to a local
virtualization node, and the EC2 section when the VM is scheduled to an EC2 node (i.e. when the VM is going to be
launched in Amazon EC2).
5.2.7 Testing
You must create a template file containing the information of the AMIs you want to launch. Additionally, if you have
an elastic IP address you want to use with your EC2 instances, you can specify it as an optional parameter.
CPU = 1
MEMORY = 1700
#Xen or KVM template machine, this will be used when submitting this VM to local resources
DISK = [ IMAGE_ID = 3 ]
NIC = [ NETWORK_ID = 7 ]
#EC2 template machine, this will be used when submitting this VM to EC2
EC2 = [ AMI="ami-00bafcb5",
KEYPAIR="gsg-keypair",
INSTANCETYPE=m1.small]
#Add this if you want to use only EC2 cloud
#SCHED_REQUIREMENTS = HOSTNAME = "ec2"
You can only submit and control the template using the OpenNebula interface:
$ onetemplate create ec2template
$ onetemplate instantiate ec2template
Now you can monitor the state of the VM with:
$ onevm list
ID USER GROUP NAME STAT CPU MEM HOSTNAME TIME
0 oneadmin oneadmin one-0 runn 0 0K ec2 0d 07:03
You can also see information (like the IP address) related to the Amazon instance launched, via the onevm show
command. The available attributes are:
AWS_DNS_NAME
AWS_PRIVATE_DNS_NAME
AWS_KEY_NAME
AWS_AVAILABILITY_ZONE
AWS_PLATFORM
AWS_VPC_ID
AWS_PRIVATE_IP_ADDRESS
AWS_IP_ADDRESS
AWS_SUBNET_ID
AWS_SECURITY_GROUPS
AWS_INSTANCE_TYPE
$ onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID : 0
NAME : pepe
USER : oneadmin
GROUP : oneadmin
STATE : ACTIVE
LCM_STATE : RUNNING
RESCHED : No
HOST : ec2
CLUSTER ID : -1
START TIME : 11/15 14:15:16
END TIME : -
DEPLOY ID : i-a0c5a2dd
VIRTUAL MACHINE MONITORING
USED MEMORY : 0K
NET_RX : 0K
NET_TX : 0K
USED CPU : 0
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
VIRTUAL MACHINE HISTORY
SEQ HOST ACTION DS START TIME PROLOG
0 ec2 none 0 11/15 14:15:37 2d 21h48m 0h00m00s
USER TEMPLATE
EC2=[
AMI="ami-6f5f1206",
INSTANCETYPE="m1.small",
KEYPAIR="gsg-keypair" ]
SCHED_REQUIREMENTS="ID=4"
VIRTUAL MACHINE TEMPLATE
AWS_AVAILABILITY_ZONE="us-east-1d"
AWS_DNS_NAME="ec2-54-205-155-229.compute-1.amazonaws.com"
AWS_INSTANCE_TYPE="m1.small"
AWS_IP_ADDRESS="54.205.155.229"
AWS_KEY_NAME="gsg-keypair"
AWS_PRIVATE_DNS_NAME="ip-10-12-101-169.ec2.internal"
AWS_PRIVATE_IP_ADDRESS="10.12.101.169"
AWS_SECURITY_GROUPS="sg-8e45a3e7"
5.2.8 Scheduler Configuration
Since ec2 Hosts are treated by the scheduler like any other host, VMs will be automatically deployed in them. But you
probably want to lower their priority and start using them only when the local infrastructure is full.
Configure the Priority
The ec2 drivers return a probe with the value PRIORITY = -1. This can be used by the scheduler by configuring the
fixed policy in sched.conf:
DEFAULT_SCHED = [
policy = 4
]
The local hosts will have a priority of 0 by default, but you could set any value manually with the onehost/onecluster
update command.
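For example, to give a local host a higher priority than the EC2 host (a sketch; the PRIORITY attribute is simply added to the host template opened by the command):
$ onehost update 2
PRIORITY = 10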
There are two other parameters that you may want to adjust in sched.conf:
- MAX_DISPATCH: Maximum number of Virtual Machines actually dispatched to a host in each scheduling action
- MAX_HOST: Maximum number of Virtual Machines dispatched to a given host in each scheduling action
In a scheduling cycle, when MAX_HOST number of VMs have been deployed to a host, it is discarded for the next
pending VMs.
For example, having this conguration:
MAX_HOST = 1
MAX_DISPATCH = 30
2 Hosts: 1 in the local infrastructure, and 1 using the ec2 drivers
2 pending VMs
The first VM will be deployed in the local host. The second VM would also sort the local host with a higher priority,
but because 1 VM was already deployed there, the second VM will be launched in ec2.
A quick way to ensure that your local infrastructure will always be used before the ec2 hosts is to set MAX_DISPATCH
to the number of local hosts.
Force a Local or Remote Deployment
The ec2 drivers report the host attribute PUBLIC_CLOUD = YES. Knowing this, you can use that attribute in your
VM requirements.
To force a VM deployment in a local host, use:
SCHED_REQUIREMENTS = "!(PUBLIC_CLOUD = YES)"
To force a VM deployment in an ec2 host, use:
SCHED_REQUIREMENTS = "PUBLIC_CLOUD = YES"
CHAPTER SIX
APPLICATION INSIGHT
6.1 OneGate
OneGate allows Virtual Machine guests to push monitoring information to OpenNebula. Users and administrators can
use it to gather metrics, detect problems in their applications, and trigger OneFlow auto-scaling rules.
6.1.1 Next Steps
OneGate Server Configuration
Application Monitoring
6.2 OneGate Server Configuration
The OneGate service allows Virtual Machine guests to push monitoring information to OpenNebula. Although it is
installed by default, its use is completely optional.
6.2.1 Requirements
Check the Installation guide for details of which packages you have to install depending on your distribution.
6.2.2 Configuration
The OneGate configuration file can be found at /etc/one/onegate-server.conf. It uses YAML syntax to
define the following options:
Server Configuration
one_xmlrpc: OpenNebula daemon host and port
host: Host where OneGate will listen
port: Port where OneGate will listen
Log
debug_level: Log debug level. 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
Auth
auth: Authentication driver for incoming requests. onegate: based on the token provided in the context
core_auth: Authentication driver to communicate with the OpenNebula core: cipher for symmetric cipher
encryption of tokens, x509 for x509 certificate encryption of tokens. For more information, visit the OpenNebula
Cloud Auth documentation.
This is the default file:
################################################################################
# Server Configuration
################################################################################
# OpenNebula server contact information
#
:one_xmlrpc: http://localhost:2633/RPC2
# Server Configuration
#
:host: 127.0.0.1
:port: 5030
################################################################################
# Log
################################################################################
# Log debug level
# 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
#
:debug_level: 3
################################################################################
# Auth
################################################################################
# Authentication driver for incoming requests
# onegate, based on token provided in the context
#
:auth: onegate
# Authentication driver to communicate with OpenNebula core
# cipher, for symmetric cipher encryption of tokens
# x509, for x509 certificate encryption of tokens
#
:core_auth: cipher
6.2.3 Start OneGate
To start and stop the server, use the onegate-server start/stop command:
$ onegate-server start
onegate-server started
Warning: By default, the server will only listen to requests coming from localhost. Change the :host
attribute in /etc/one/onegate-server.conf to your server's public IP, or 0.0.0.0 so OneGate will listen on
any interface.
Inside /var/log/one/ you will find new log files for the server:
/var/log/one/onegate.error
/var/log/one/onegate.log
6.2.4 Use OneGate
Before your VMs can communicate with OneGate, you need to edit /etc/one/oned.conf and set the OneGate
endpoint. This IP must be reachable from your VMs.
ONEGATE_ENDPOINT = "http://192.168.0.5:5030"
Continue to the OneGate usage guide.
6.3 Application Monitoring
OneGate allows Virtual Machine guests to push monitoring information to OpenNebula. Users and administrators can
use it to gather metrics, detect problems in their applications, and trigger OneFlow elasticity rules.
6.3.1 OneGate Workflow Explained
OneGate is a server that listens to HTTP connections from the Virtual Machines. OpenNebula assigns an individual
token to each VM instance, and applications running inside the VM use this token to send monitoring metrics to
OneGate.
When OneGate checks the VM ID and the token sent, the new information is placed inside the VM's user template
section. This means that the application metrics are visible from the command line, Sunstone, or the APIs.
6.3.2 OneGate Usage
First, the cloud administrator must configure and start the OneGate server.
Setup the VM Template
Your VM Template must set the CONTEXT/TOKEN attribute to yes.
CPU = "0.5"
MEMORY = "128"
DISK = [
IMAGE_ID = "0" ]
NIC = [
NETWORK_ID = "0" ]
CONTEXT = [
TOKEN = "YES" ]
When this Template is instantiated, OpenNebula will automatically add the ONEGATE_URL context variable, and a
token.txt file will be placed in the context cdrom. This token.txt file is only accessible from inside the VM.
...
CONTEXT=[
DISK_ID="1",
ONEGATE_URL="http://192.168.0.1:5030/vm/0",
TARGET="hdb",
TOKEN="YES" ]
Push Metrics from the VM Guest
The contextualization cdrom should contain the context.sh and token.txt files.
# mkdir /mnt/context
# mount /dev/hdb /mnt/context
# cd /mnt/context
# ls
context.sh token.txt
# cat context.sh
# Context variables generated by OpenNebula
DISK_ID=1
ONEGATE_URL=http://192.168.0.1:5030/vm/0
TARGET=hdb
TOKEN=yes
# cat token.txt
yCxieDUS7kra7Vn9ILA0+g==
With that data, you can perform this HTTP request:
Request: PUT ONEGATE_URL.
Headers: X-ONEGATE-TOKEN: token.txt contents.
Body: Monitoring values, in the usual ATTRIBUTE = VALUE OpenNebula syntax.
For example, using the curl command:
curl -X "PUT" http://192.168.0.1:5030/vm/0 --header "X-ONEGATE-TOKEN: yCxieDUS7kra7Vn9ILA0+g==" -d "APP_LOAD = 9.7"
The new metric is stored in the user template section of the VM:
$ onevm show 0
...
USER TEMPLATE
APP_LOAD="9.7"
6.3.3 Sample Script
#!/bin/bash
# -------------------------------------------------------------------------- #
# Copyright 2002-2013, OpenNebula Project (OpenNebula.org), C12G Labs #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
################################################################################
# Initialization
################################################################################
ERROR=0
if [ -z $ONEGATE_TOKEN ]; then
echo "ONEGATE_TOKEN env variable must point to the token.txt file"
ERROR=1
fi
if [ -z $ONEGATE_URL ]; then
echo "ONEGATE_URL env variable must be set"
ERROR=1
fi
if [ $ERROR = 1 ]; then
exit -1
fi
TMP_DIR=`mktemp -d`
echo "" > $TMP_DIR/metrics
################################################################################
# Memory metrics
################################################################################
MEM_TOTAL=`grep MemTotal: /proc/meminfo | awk '{print $2}'`
MEM_FREE=`grep MemFree: /proc/meminfo | awk '{print $2}'`
MEM_USED=$(($MEM_TOTAL-$MEM_FREE))
MEM_USED_PERC="0"
if ! [ -z $MEM_TOTAL ] && [ $MEM_TOTAL -gt 0 ]; then
    MEM_USED_PERC=`echo "$MEM_USED $MEM_TOTAL" | \
        awk '{ printf "%.2f", 100 * $1 / $2 }'`
fi
SWAP_TOTAL=`grep SwapTotal: /proc/meminfo | awk '{print $2}'`
SWAP_FREE=`grep SwapFree: /proc/meminfo | awk '{print $2}'`
SWAP_USED=$(($SWAP_TOTAL - $SWAP_FREE))
SWAP_USED_PERC="0"
if ! [ -z $SWAP_TOTAL ] && [ $SWAP_TOTAL -gt 0 ]; then
    SWAP_USED_PERC=`echo "$SWAP_USED $SWAP_TOTAL" | \
        awk '{ printf "%.2f", 100 * $1 / $2 }'`
fi
#echo "MEM_TOTAL = $MEM_TOTAL" >> $TMP_DIR/metrics
#echo "MEM_FREE = $MEM_FREE" >> $TMP_DIR/metrics
#echo "MEM_USED = $MEM_USED" >> $TMP_DIR/metrics
echo "MEM_USED_PERC = $MEM_USED_PERC" >> $TMP_DIR/metrics
#echo "SWAP_TOTAL = $SWAP_TOTAL" >> $TMP_DIR/metrics
#echo "SWAP_FREE = $SWAP_FREE" >> $TMP_DIR/metrics
#echo "SWAP_USED = $SWAP_USED" >> $TMP_DIR/metrics
echo "SWAP_USED_PERC = $SWAP_USED_PERC" >> $TMP_DIR/metrics
################################################################################
# Disk metrics
################################################################################
/bin/df -k -P | grep '^/dev' > $TMP_DIR/df
cat $TMP_DIR/df | while read line; do
    NAME=`echo $line | awk '{print $1}' | awk -F '/' '{print $NF}'`
    DISK_TOTAL=`echo $line | awk '{print $2}'`
    DISK_USED=`echo $line | awk '{print $3}'`
    DISK_FREE=`echo $line | awk '{print $4}'`
    DISK_USED_PERC="0"
    if ! [ -z $DISK_TOTAL ] && [ $DISK_TOTAL -gt 0 ]; then
        DISK_USED_PERC=`echo "$DISK_USED $DISK_TOTAL" | \
            awk '{ printf "%.2f", 100 * $1 / $2 }'`
    fi
    #echo "DISK_TOTAL_$NAME = $DISK_TOTAL" >> $TMP_DIR/metrics
    #echo "DISK_FREE_$NAME = $DISK_FREE" >> $TMP_DIR/metrics
    #echo "DISK_USED_$NAME = $DISK_USED" >> $TMP_DIR/metrics
    echo "DISK_USED_PERC_$NAME = $DISK_USED_PERC" >> $TMP_DIR/metrics
done
################################################################################
# PUT command
################################################################################
curl -X "PUT" --header "X-ONEGATE-TOKEN: cat $ONEGATE_TOKEN" $ONEGATE_URL \
--data-binary @$TMP_DIR/metrics
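To push metrics periodically, the script can be scheduled from inside the VM. The following crontab entry is a minimal sketch; it assumes the script was saved as /usr/local/bin/onegate-push.sh (a hypothetical path) and that the context CDROM is mounted at /mnt/context:
# Run every minute; ONEGATE_URL is the value read from context.sh and
# ONEGATE_TOKEN points to the token.txt file on the context CDROM
* * * * * ONEGATE_TOKEN=/mnt/context/token.txt ONEGATE_URL=http://192.168.0.1:5030/vm/0 /usr/local/bin/onegate-push.sh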
CHAPTER
SEVEN
PUBLIC CLOUD
7.1 Building a Public Cloud
7.1.1 What is a Public Cloud?
A Public Cloud is an extension of a Private Cloud that exposes RESTful Cloud interfaces. Cloud interfaces can be added to your Private or Hybrid Cloud if you want to provide partners or external users with access to your infrastructure, or to sell your overcapacity. A local cloud solution is the natural back-end for any public cloud.
7.1.2 The User View
The following interfaces provide simple remote management of cloud (virtual) resources at a high abstraction level:
EC2 Query subset
OGF OCCI
Users will be able to use commands that clone the functionality of the EC2 Cloud service. Starting with a working installation of an OS residing on an .img file, a user can launch it in the cloud in three simple steps.
First, they will be able to upload it to the cloud using:
$ ./econe-upload /images/gentoo.img
Success: ImageId ami-00000001
After the image is uploaded to the OpenNebula repository, it needs to be registered to be used in the cloud:
$ ./econe-register ami-00000001
Success: ImageId ami-00000001
Now the user can launch the registered image to be run in the cloud:
$ ./econe-run-instances -H ami-00000001
Owner ImageId InstanceId InstanceType
------------------------------------------------------------------------------
helen ami-00000001 i-15 m1.small
Additionally, the instance can be monitored with:
$ ./econe-describe-instances -H
Owner Id ImageId State IP Type
------------------------------------------------------------------------------------------------------------
helen i-15 ami-00000001 pending 147.96.80.33 m1.small
7.1.3 How the System Operates
There is no modification in the operation of OpenNebula to expose Cloud interfaces. Users can interface the
infrastructure using any Private or Public Cloud interface.
7.2 EC2 Server Configuration
7.2.1 Overview
The OpenNebula EC2 Query is a web service that enables you to launch and manage virtual machines in your OpenNebula installation through the Amazon EC2 Query Interface. In this way, you can use any EC2 Query tool or utility
to access your Private Cloud. The EC2 Query web service is implemented upon the OpenNebula Cloud API (OCA)
layer that exposes the full capabilities of an OpenNebula private cloud; and Sinatra, a widely used light web framework.
The current implementation includes the basic routines to use a Cloud, namely: image upload and registration, and the
VM run, describe and terminate operations. The following sections explain how to install and configure the EC2 Query web service on top of a running OpenNebula cloud.
Warning: The OpenNebula EC2 Query service provides an Amazon EC2 Query API compatible interface to your
cloud, that can be used alongside the native OpenNebula CLI or OpenNebula Sunstone.
Warning: The OpenNebula distribution includes the tools needed to use the EC2 Query service.
7.2.2 Requirements & Installation
You must have an OpenNebula site properly configured and running. Be sure to check the OpenNebula Installation and Configuration Guides to set up your private cloud first. This guide also assumes that you are familiar with the configuration and use of OpenNebula.
The OpenNebula EC2 Query service was installed during the OpenNebula installation, and the dependencies of this service are installed when using the install_gems tool as explained in the installation guide.
If you installed OpenNebula from source you can install the EC2 Query dependencies as explained at the end of the Building from Source Code guide.
7.2.3 Configuration
The service is configured through the /etc/one/econe.conf file, where you can set up the basic operational parameters for the EC2 Query web service. The following table summarizes the available options:
Server conguration
tmpdir: Directory to store temp files when uploading images
one_xmlrpc: oned xmlrpc service, http://localhost:2633/RPC2
host: Host where econe server will run
port: Port where econe server will run
ssl_server: URL for the EC2 service endpoint, when configured through a proxy
Log
debug_level: Log debug level, 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG.
Auth
auth: Authentication driver for incoming requests
core_auth: Authentication driver to communicate with OpenNebula core. Check this guide for more information about the core_auth system
File based templates
use_file_templates: Use former file-based templates for instance types instead of OpenNebula templates
instance_types: DEPRECATED The VM types for your cloud
Resources
describe_with_terminated_instances: Include terminated instances in the describe_instances xml. When this parameter is enabled all the VMs in DONE state will be retrieved in each describe_instances action and then filtered. This can cause performance issues when the pool of VMs in DONE state is huge
terminated_instances_expiration_time: Terminated VMs will be included in the list until the termination date + terminated_instances_expiration_time is reached
datastore_id: Datastore in which the Images uploaded through EC2 will be allocated, by default 1
cluster_id: Cluster associated with the EC2 resources, by default no Cluster is defined
Elastic IP
elasticips_vnet_id: VirtualNetwork containing the elastic IPs to be used with EC2. If not defined, the Elastic IP functionality is disabled
associate_script: Script to associate a public IP with a private IP. Arguments: elastic_ip private_ip vnet_template(base64_encoded)
disassociate_script: Script to disassociate a public IP. Arguments: elastic_ip
EBS
ebs_fstype: FSTYPE that will be used when creating new volumes (DATABLOCKs)
Warning: The :host must be an FQDN, do not use IPs here.
Warning: Preserve YAML syntax in the econe.conf file.
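As an illustration, a minimal econe.conf could look like the following sketch (all values are placeholders to adapt to your deployment; check the file shipped with your installation for the full set of options):
# Directory to store temp files when uploading images
:tmpdir: /var/tmp/one
# oned xmlrpc service
:one_xmlrpc: http://localhost:2633/RPC2
# Host (an FQDN) and port where the econe server will run
:host: cloudserver.org
:port: 4567
# Authentication drivers
:auth: ec2
:core_auth: cipher
# Log debug level
:debug_level: 3
# Datastore for Images uploaded through EC2
:datastore_id: 1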
Cloud Users
The cloud users have to be created in the OpenNebula system by oneadmin using the oneuser utility. Once a user is registered in the system, using the same procedure as to create private cloud users, they can start using the system.
The users will authenticate using the Amazon EC2 procedure, with AWSAccessKeyId being their OpenNebula username and AWSSecretAccessKey their OpenNebula hashed password.
The cloud administrator can limit the interfaces that these users can use to interact with OpenNebula by setting the
driver public for them. Using that driver, cloud users will not be able to interact with OpenNebula through Sunstone, the CLI or XML-RPC.
$ oneuser chauth cloud_user public
Defining VM Types
You can define as many Virtual Machine types as you want, just:
Create a new OpenNebula template for the new type and make it available for the users group. You can use restricted attributes and set permissions like any other OpenNebula resource. You must include the EC2_INSTANCE_TYPE parameter inside the template definition, otherwise the template will not be available to be used as an instance type in EC2.
# This is the content of the /tmp/m1.small file
NAME = "m1.small"
EC2_INSTANCE_TYPE = "m1.small"
CPU = 1
MEMORY = 1700
...
$ onetemplate create /tmp/m1.small
$ onetemplate chgrp m1.small users
$ onetemplate chmod m1.small 640
The template must include all the required information to instantiate a new virtual machine, such as network configuration, capacity, placement requirements, etc. This information will be used as a base template and will be merged with the information provided by the user.
The user will select an instance type along with the ami id, keypair and user data when creating a new instance. Therefore, the template should not include the OS, since it will be specified by the user with the selected AMI.
Warning: The templates are processed by the EC2 server to include specific data for the instance.
7.2.4 Starting the Cloud Service
To start the EC2 Query service, just issue the following command:
$ econe-server start
You can find the econe server log file in /var/log/one/econe-server.log.
To stop the EC2 Query service:
$ econe-server stop
7.2.5 Advanced Conguration
Enabling Keypair
In order to benefit from the Keypair functionality, the images that will be used by the econe users must be prepared to read the EC2_PUBLIC_KEY and EC2_USER_DATA from the CONTEXT disk. This can be easily achieved with the new contextualization packages, generating a new custom contextualization package like this one:
#!/bin/bash
echo "$EC2_PUBLIC_KEY" > /root/.ssh/authorized_keys
Enabling Elastic IP Functionality
An Elastic IP address is associated with the user, not a particular instance; the user controls that address until they choose to release it. This way the user can programmatically remap their public IP addresses to any of their instances.
In order to enable this functionality you have to follow these steps:
1. Create a VNET Containing the Elastic IPs
As oneadmin, create a new FIXED VirtualNetwork containing the public IPs that will be controlled by the EC2 users:
NAME = "ElasticIPs"
TYPE = FIXED
PHYDEV = "eth0"
VLAN = "YES"
VLAN_ID = 50
BRIDGE = "brhm"
LEASES = [IP=10.0.0.1]
LEASES = [IP=10.0.0.2]
LEASES = [IP=10.0.0.3]
LEASES = [IP=10.0.0.4]
# Custom Attributes to be used in Context
GATEWAY = 130.10.0.1
$ onevnet create /tmp/fixed.vnet
ID: 8
This VNET will be managed by the oneadmin user, therefore USE permission for the ec2 users is not required.
Update the econe.conf file with the VNET ID:
:elasticips_vnet_id: 8
Provide associate and disassociate scripts
The interaction with the infrastructure has been abstracted; two scripts have to be provided by the cloud administrator in order to interact with each specific network configuration. These two scripts make it possible to adapt this feature to different configurations and data centers.
These scripts are language agnostic and their path has to be specified in the econe configuration file:
:associate_script: /usr/bin/associate_ip.sh
:disassociate_script: /usr/bin/disassociate_ip.sh
The associate script will receive three arguments: the elastic_ip to be associated; the private_ip of the instance; and the Virtual Network template, base64 encoded.
The disassociate script will receive one argument: the elastic_ip to be disassociated.
Scripts to interact with OpenFlow can be found in the following ecosystem project. A minimal associate script is sketched below.
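The sketch below is an illustration only: it maps the elastic IP to the instance with NAT rules, and assumes it runs on a Linux gateway with permission to manage iptables, which will not match every data center.
#!/bin/bash
# Arguments as documented above
ELASTIC_IP=$1     # public IP to associate
PRIVATE_IP=$2     # private IP of the instance
VNET_TEMPLATE=$3  # base64-encoded Virtual Network template (unused here)
# DNAT incoming traffic for the elastic IP to the instance, and SNAT
# traffic leaving the instance so it appears to come from the elastic IP
iptables -t nat -A PREROUTING -d "$ELASTIC_IP" -j DNAT --to-destination "$PRIVATE_IP"
iptables -t nat -A POSTROUTING -s "$PRIVATE_IP" -j SNAT --to-source "$ELASTIC_IP"
The matching disassociate script would delete the equivalent rules with iptables -t nat -D.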
Using a Specific Group for EC2
It is recommended to create a new group to handle the ec2 cloud users:
$ onegroup create ec2
ID: 100
Create and add the users to the ec2 group (ID:100):
$ oneuser create clouduser my_password
ID: 12
$ oneuser chgrp 12 100
Also, you will have to create ACL rules so that the cloud users are able to deploy their VMs in the allowed hosts.
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
 1 kvm1 - 2 110 / 200 (55%) 640M / 3.6G (17%) on
 2 kvm2 - 2 110 / 200 (55%) 640M / 3.6G (17%) on
 3 kvm3 - 2 110 / 200 (55%) 640M / 3.6G (17%) on
These rules will allow users inside the ec2 group (ID:100) to deploy VMs in the hosts kvm1 (ID:1) and kvm3 (ID:3):
$ oneacl create "@100 HOST/#1 MANAGE"
$ oneacl create "@100 HOST/#3 MANAGE"
You have to create a VNet network using the onevnet utility with the IPs you want to lease to the VMs
created with the EC2 Query service.
$ onevnet create /tmp/templates/vnet
ID: 12
Remember that you will have to add this VNet (ID:12) to the users group (ID:100) and give USE (640) permissions to
the group in order to get leases from it.
$ onevnet chgrp 12 100
$ onevnet chmod 12 640
Warning: You will have to update the NIC template, inside the /etc/one/ec2query_templates directory, in order to use this VNet ID.
Configuring an SSL Proxy
OpenNebula EC2 Query Service runs natively just on plain HTTP connections. If the extra security provided by SSL is needed, a proxy can be set up to handle the SSL connection, forwarding requests to the EC2 Query Service and relaying the answer back to the client.
This setup needs:
A server certificate for the SSL connections
An HTTP proxy that understands SSL
EC2 Query Service configuration to accept requests from the proxy
If you want to try out the SSL setup easily, the following lines show an example that sets a self-signed certificate to be used by a lighttpd instance configured to act as an HTTP proxy to a correctly configured EC2 Query Service.
Let's assume the server where the lighttpd proxy is going to be started is called cloudserver.org. The steps are:
1. Snakeoil Server Certificate
We are going to generate a snakeoil certificate. If using an Ubuntu system, follow these steps (otherwise your mileage may vary, but not a lot):
Install the ssl-cert package
$ sudo apt-get install ssl-cert
Generate the certificate
$ sudo /usr/sbin/make-ssl-cert generate-default-snakeoil
As we are using lighttpd, we need to append the private key to the certificate to obtain a server certificate valid for lighttpd:
$ sudo cat /etc/ssl/private/ssl-cert-snakeoil.key /etc/ssl/certs/ssl-cert-snakeoil.pem > /etc/lighttpd/server.pem
2. lighttpd as a SSL HTTP Proxy
You will need to edit the /etc/lighttpd/lighttpd.conf configuration file and:
Add the following modules (if not already present)
mod_access
mod_alias
mod_proxy
mod_accesslog
mod_compress
Change the server port to 443 if you are going to run lighttpd as root, or any number above 1024 otherwise:
server.port = 8443
Add the proxy module section:
#### proxy module
## read proxy.txt for more info
proxy.server = ( "" =>
("" =>
(
"host" => "127.0.0.1",
"port" => 4567
)
)
)
#### SSL engine
ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/server.pem"
The host must be the server hostname of the computer running the EC2 Query Service, and the port the one that the EC2 Query Service is running on.
3. EC2 Query Service Configuration
The econe.conf needs to define the following:
# Host and port where econe server will run
:host: localhost
:port: 4567
#SSL proxy URL that serves the API (set if is being used)
:ssl_server: https://cloudserver.org:8443/
Once the lighttpd server is started, EC2 Query requests using HTTPS URIs can be directed to https://cloudserver.org:8443, where they will be decrypted, passed to localhost, port 4567, satisfied (hopefully), encrypted again and then passed back to the client.
Warning: Note that :ssl_server must be a URL that may contain a custom path.
7.3 OCCI Server Configuration
The OpenNebula OCCI (Open Cloud Computing Interface) server is a web service that enables you to launch and manage virtual machines in your OpenNebula installation using an implementation of the OGF OCCI API specification based on the draft 0.8. This implementation also includes some extensions, requested by the community, to support OpenNebula specific functionality. The OpenNebula OCCI service is implemented upon the OpenNebula Cloud API (OCA) layer that exposes the full capabilities of an OpenNebula private cloud; and Sinatra, a widely used light web framework.
The following sections explain how to install and configure the OCCI service on top of a running OpenNebula cloud.
Warning: The OpenNebula OCCI service provides an OCCI interface to your cloud instance, that can be used
alongside the native OpenNebula CLI, Sunstone or even the EC2 Query API
Warning: The OpenNebula distribution includes the tools needed to use the OpenNebula OCCI service
7.3.1 Requirements
You must have an OpenNebula site properly configured and running to install the OpenNebula OCCI service. Be sure to check the OpenNebula Installation and Configuration Guides to set up your private cloud first. This guide also assumes that you are familiar with the configuration and use of OpenNebula.
The OpenNebula OCCI service was installed during the OpenNebula installation, and the dependencies of this service are installed when using the install_gems tool as explained in the installation guide.
If you installed OpenNebula from source you can install the OCCI dependencies as explained at the end of the Building from Source Code guide.
7.3.2 Considerations & Limitations
The OCCI Server included in the OpenNebula distribution does not implement the latest OCCI specification; it is based on the draft 0.8 of the OGF OCCI specification. The implementation of the latest specification is being developed by TU-Dortmund in an ecosystem project. You can check the documentation of this project in the following link.
7.3.3 Configuration
occi-server.conf
The service is configured through the /etc/one/occi-server.conf file, where you can set up the basic operational parameters for the OCCI service. The following table summarizes the available options:
Server conguration
tmpdir: Directory to store temp files when uploading images
one_xmlrpc: oned xmlrpc service, http://localhost:2633/RPC2
host: Host where OCCI server will run
port: Port where OCCI server will run
ssl_server: SSL proxy that serves the API (set if it is being used)
Log
debug_level: Log debug level, 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
Auth
auth: Authentication driver for incoming requests
core_auth: Authentication driver to communicate with OpenNebula core
Resources
instance_types: The Compute types for your cloud
datastore_id: Datastore in which the Images uploaded through OCCI will be allocated, by default 1
cluster_id: Cluster associated with the OCCI resources, by default no Cluster is defined
Warning: The SERVER must be an FQDN, do not use IPs here.
Warning: Preserve YAML syntax in the occi-server.conf file.
Example:
#############################################################
# Server configuration
#############################################################
# Directory to store temp files when uploading images
:tmpdir: /var/tmp/one
# OpenNebula server contact information
:one_xmlrpc: http://localhost:2633/RPC2
# Host and port where OCCI server will run
:host: 127.0.0.1
:port: 4567
# SSL proxy that serves the API (set if is being used)
#:ssl_server: fqdn.of.the.server
#############################################################
# Auth
#############################################################
# Authentication driver for incoming requests
# occi, for OpenNebula's user-password scheme
# x509, for x509 certificates based authentication
# opennebula, use the driver defined for the user in OpenNebula
:auth: occi
# Authentication driver to communicate with OpenNebula core
# cipher, for symmetric cipher encryption of tokens
# x509, for x509 certificate encryption of tokens
:core_auth: cipher
#############################################################
# Log
#############################################################
# Log debug level
# 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
:debug_level: 3
#############################################################
# Resources
#############################################################
# Cluster associated with the OCCI resources, by default no Cluster is defined
#:cluster_id:
# Datastore in which the Images uploaded through OCCI will be allocated, by default 1
#:datastore_id:
# VM types allowed and its template file (inside templates directory)
:instance_types:
:small:
:template: small.erb
:cpu: 1
:memory: 1024
:medium:
:template: medium.erb
:cpu: 4
:memory: 4096
:large:
:template: large.erb
:cpu: 8
:memory: 8192
Configuring OCCI Virtual Networks
You have to adapt the /etc/one/occi_templates/network.erb file to the configuration that the Virtual Networks created through the OCCI interface will use. For more information about the Virtual Network configuration check the following guide.
NAME = "<%= @vnet_info[NAME] %>"
TYPE = RANGED
NETWORK_ADDRESS = <%= @vnet_info[ADDRESS] %>
<% if @vnet_info[SIZE] != nil %>
NETWORK_SIZE = <%= @vnet_info[SIZE]%>
<% end %>
<% if @vnet_info[DESCRIPTION] != nil %>
DESCRIPTION = "<%= @vnet_info[DESCRIPTION] %>"
<% end %>
<% if @vnet_info[PUBLIC] != nil %>
PUBLIC = "<%= @vnet_info[PUBLIC] %>"
<% end %>
#BRIDGE = NAME_OF_DEFAULT_BRIDGE
#PHYDEV = NAME_OF_PHYSICAL_DEVICE
#VLAN = YES|NO
Defining Compute Types
You can define as many Compute types as you want, just:
Create a template (new_type.erb) for the new type and place it in /etc/one/occi_templates. This template will be completed with the data for each occi-compute create request and the content of the /etc/one/occi_templates/common.erb file, and then submitted to OpenNebula.
# This is the content of the new /etc/one/occi_templates/new_type.erb file
CPU = 1
MEMORY = 512
OS = [ kernel = "/vmlinuz",
       initrd = "/initrd.img",
       root = "sda1",
       kernel_cmd = "ro xencons=tty console=tty1" ]
Add a new type in the instance_types section of the occi-server.conf
:new_type:
:template: new_type.erb
:cpu: 1
:memory: 512
You can add common attributes for your cloud templates by modifying the /etc/one/occi_templates/common.erb file.
Warning: The templates are processed by the OCCI service to include specific data for the instance; you should not need to modify the <%= ... %> compounds inside the common.erb file.
7.3.4 Usage
Starting the Cloud Service
To start the OCCI service, just issue the following command:
occi-server start
You can find the OCCI server log file in /var/log/one/occi-server.log.
To stop the OCCI service:
occi-server stop
Warning: In order to start the OCCI server, the /var/lib/one/.one/occi_auth file should be readable by the user that is starting the server, and the serveradmin user must exist in OpenNebula.
Cloud Users
The cloud users have to be created in the OpenNebula system by oneadmin using the oneuser utility. Once a user is registered in the system, using the same procedure as to create private cloud users, they can start using the system.
The users will authenticate using HTTP basic authentication, with their OpenNebula username as the user ID and their OpenNebula password as the password.
The cloud administrator can limit the interfaces that these users can use to interact with OpenNebula by setting the
driver public for them. Using that driver, cloud users will not be able to interact with OpenNebula through Sunstone, the CLI or XML-RPC.
$ oneuser chauth cloud_user public
7.3.5 Tuning & Extending
Authorization Methods
OpenNebula OCCI Server supports two authorization methods in order to log in. The method can be set in the occi-server.conf, as explained above. These two methods are:
Basic Auth
In the basic mode, username and password (SHA1) are matched against those in OpenNebula's database in order to authenticate the user in each request.
x509 Auth
This method performs the login to OpenNebula based on an x509 certificate DN (Distinguished Name). The DN is extracted from the certificate and matched to the password value in the user database (remember, spaces are removed from DNs).
The user password has to be changed by running one of the following commands:
oneuser chauth new_user x509 "/C=ES/O=ONE/OU=DEV/CN=clouduser"
oneuser chauth new_user --x509 --cert /tmp/my_cert.pem
or create a new user:
oneuser create new_user "/C=ES/O=ONE/OU=DEV/CN=clouduser" --driver x509
oneuser create new_user --x509 --cert /tmp/my_cert.pem
To enable this login method, set the :auth: option of /etc/one/occi-server.conf to x509:
:auth: x509
Note that OpenNebula will not verify that the user is holding a valid certificate at the time of login: this is expected to be done by the external container of the OCCI server (normally Apache), whose job is to tell the user's client that the site requires a user certificate and to check that the certificate is consistently signed by the chosen Certificate Authority (CA). A hedged sketch of such enforcement with lighttpd follows.
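If lighttpd (as configured below) is used as the external container, client-certificate verification could be enforced with directives along these lines; this is a sketch, the directive names belong to lighttpd 1.4's SSL module and the CA path is a placeholder to verify against your version:
# Require and verify client certificates against the chosen CA
ssl.ca-file               = "/etc/lighttpd/ca.pem"
ssl.verifyclient.activate = "enable"
ssl.verifyclient.enforce  = "enable"
ssl.verifyclient.depth    = 2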
Configuring an SSL Proxy
OpenNebula OCCI runs natively just on plain HTTP connections. If the extra security provided by SSL is needed, a proxy can be set up to handle the SSL connection, forwarding requests to the OCCI Service and relaying the answer back to the client.
This setup needs:
A server certificate for the SSL connections
An HTTP proxy that understands SSL
OCCI Service configuration to accept requests from the proxy
If you want to try out the SSL setup easily, the following lines show an example that sets a self-signed certificate to be used by a lighttpd instance configured to act as an HTTP proxy to a correctly configured OCCI Service.
Let's assume the server where the lighttpd proxy is going to be started is called cloudserver.org. The steps are:
1. Snakeoil Server Certificate
We are going to generate a snakeoil certificate. If using an Ubuntu system, follow these steps (otherwise your mileage may vary, but not a lot):
Install the ssl-cert package
$ sudo apt-get install ssl-cert
Generate the certificate
$ sudo /usr/sbin/make-ssl-cert generate-default-snakeoil
As we are using lighttpd, we need to append the private key to the certificate to obtain a server certificate valid for lighttpd:
$ sudo cat /etc/ssl/private/ssl-cert-snakeoil.key /etc/ssl/certs/ssl-cert-snakeoil.pem > /etc/lighttpd/server.pem
2. lighttpd as a SSL HTTP Proxy
You will need to edit the /etc/lighttpd/lighttpd.conf configuration file and:
Add the following modules (if not already present)
mod_access
mod_alias
mod_proxy
mod_accesslog
mod_compress
Change the server port to 443 if you are going to run lighttpd as root, or any number above 1024 otherwise:
server.port = 8443
Add the proxy module section:
#### proxy module
## read proxy.txt for more info
proxy.server = ( "" =>
("" =>
(
"host" => "127.0.0.1",
"port" => 4567
)
)
)
#### SSL engine
ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/server.pem"
The host must be the server hostname of the computer running the OCCI Service, and the port the one that the OCCI Service is running on.
3. OCCI Service Configuration
The occi-server.conf needs to define the following:
# Host and port where the occi server will run
:host: <FQDN OF OCCI SERVER>
:port: 4567
# SSL proxy that serves the API (set if is being used)
:ssl_server: https://localhost:443
Once the lighttpd server is started, OCCI requests using HTTPS URIs can be directed to https://cloudserver.org:8443, where they will be decrypted, passed to localhost, port 4567, satisfied (hopefully), encrypted again and then passed back to the client.
7.4 OpenNebula OCCI User Guide
The OpenNebula OCCI API is a RESTful service to create, control and monitor cloud resources using an implementation of the OGF OCCI API specification based on the draft 0.8. This implementation also includes some extensions, requested by the community, to support OpenNebula specific functionality. Interactions with the resources are done through HTTP verbs (GET, POST, PUT and DELETE).
7.4.1 Commands
There are four kinds of resources, listed below with their implemented actions:
Storage:
occi-storage list [--verbose]
occi-storage create xml_template
occi-storage update xml_template
occi-storage show resource_id
occi-storage delete resource_id
Network:
occi-network list [--verbose]
occi-network create xml_template
occi-network update xml_template
occi-network show resource_id
occi-network delete resource_id
Compute:
occi-compute list [--verbose]
occi-compute create xml_template
occi-compute update xml_template
occi-compute show resource_id
occi-compute delete resource_id
occi-compute attachdisk resource_id storage_id
occi-compute detachdisk resource_id storage_id
Instance_type:
occi-instance-type list [--verbose]
occi-instance-type show resource_id
7.4.2 User Account Configuration
An account is needed in order to use the OpenNebula OCCI cloud. The cloud administrator will be responsible for assigning these accounts, which have a one to one correspondence with OpenNebula accounts, so all the cloud administrator has to do is check the managing users guide to set up accounts; the OpenNebula OCCI cloud account will be created automatically.
In order to use such an account, the end user can make use of clients programmed to access the services described in
the previous section. For this, she has to set up her environment, particularly the following aspects:
Authentication: This can be achieved in two different ways, listed here in order of priority (i.e. values specified in the argument line supersede environmental variables)
Using the command's arguments. All the commands accept a username (as the OpenNebula username) and a password (as the OpenNebula password)
If the above is not available, the ONE_AUTH variable will be checked for authentication (with the same format used for the OpenNebula CLI, pointing to a file containing a single line: username:password).
Server location: The commands need to know where the OpenNebula OCCI service is running. You can pass the OCCI service endpoint using the --url flag in the commands. If that is not present, the OCCI_URL environment variable is used (in the form of an http URL, including the port if it is not the standard 80). Again, if the OCCI_URL variable is not present, it will default to http://localhost:4567.
Warning: The OCCI_URL has to use the FQDN of the OCCI Service
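For example, a typical client environment could be set up as follows (illustrative values):
$ export ONE_AUTH=~/.one/one_auth    # file with a single line: username:password
$ export OCCI_URL=http://cloud.server:4567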
7.4.3 Create Resources
Let's take a walk through a typical usage scenario. In this brief scenario it will be shown how to upload an image to the OCCI OpenNebula Storage repository, how to create a Network in the OpenNebula OCCI cloud and how to create a Compute resource using the image and the network previously created.
Storage
Assuming we have a working Ubuntu installation residing in an .img file, we can upload it into the OpenNebula OCCI cloud using the following OCCI representation of the image:
<STORAGE>
<NAME>Ubuntu Desktop</NAME>
<DESCRIPTION>Ubuntu 10.04 desktop for students.</DESCRIPTION>
<TYPE>OS</TYPE>
<URL>file:///images/ubuntu/jaunty.img</URL>
</STORAGE>
Next, using the occi-storage command we will create the Storage resource:
$ occi-storage --url http://cloud.server:4567 --username oneadmin --password opennebula create image.xml
<STORAGE href="http://cloud.server:4567/storage/3">
<ID>3</ID>
<NAME>Ubuntu Desktop</NAME>
<TYPE>OS</TYPE>
<DESCRIPTION>Ubuntu 10.04 desktop for students.</DESCRIPTION>
<PUBLIC>NO</PUBLIC>
<PERSISTENT>NO</PERSISTENT>
<SIZE>41943040</SIZE>
</STORAGE>
The user should take note of this ID, as it will be needed to add it to the Compute resource.
Network
The next step would be to create a Network resource
<NETWORK>
<NAME>MyServiceNetwork</NAME>
<ADDRESS>192.168.1.1</ADDRESS>
<SIZE>200</SIZE>
<PUBLIC>NO</PUBLIC>
</NETWORK>
Next, using the occi-network command we will create the Network resource:
$ occi-network --url http://cloud.server:4567 --username oneadmin --password opennebula create vnet.xml
<NETWORK href="http://cloud.server:4567/network/0">
<ID>0</ID>
<NAME>MyServiceNetwork</NAME>
<ADDRESS>192.168.1.1</ADDRESS>
<SIZE>200</SIZE>
<PUBLIC>NO</PUBLIC>
</NETWORK>
Compute
The last step would be to create a Compute resource referencing the Storage and Networks resource previously created
by means of their ID, using a representation like the following:
<COMPUTE>
<NAME>MyCompute</NAME>
<INSTANCE_TYPE href="http://www.opennebula.org/instance_type/small"/>
<DISK>
<STORAGE href="http://www.opennebula.org/storage/0"/>
</DISK>
<NIC>
<NETWORK href="http://www.opennebula.org/network/0"/>
<IP>192.168.1.12</IP>
</NIC>
<CONTEXT>
<HOSTNAME>MAINHOST</HOSTNAME>
<DATA>DATA1</DATA>
</CONTEXT>
</COMPUTE>
Next, using the occi-compute command we will create the Compute resource:
$ occi-compute --url http://cloud.server:4567 --username oneadmin --password opennebula create vm.xml
<COMPUTE href="http://cloud.server:4567/compute/0">
<ID>0</ID>
<CPU>1</CPU>
<MEMORY>1024</MEMORY>
<NAME>MyCompute</NAME>
<INSTANCE_TYPE href="http://www.opennebula.org/instance_type/small"/>
<STATE>PENDING</STATE>
<DISK id="0">
<STORAGE href="http://cloud.server:4567/storage/3" name="Ubuntu Desktop"/>
<TYPE>DISK</TYPE>
<TARGET>hda</TARGET>
</DISK>
<NIC>
<NETWORK href="http://cloud.server:4567/network/0" name="MyServiceNetwork"/>
<IP>192.168.1.12</IP>
<MAC>02:00:c0:a8:01:0c</MAC>
</NIC>
<CONTEXT>
<DATA>DATA1</DATA>
<HOSTNAME>MAINHOST</HOSTNAME>
<TARGET>hdb</TARGET>
</CONTEXT>
</COMPUTE>
7.4.4 Updating Resources
Storage
Some of the characteristics of a storage entity can be modified using the occi-storage update command:
Warning: Only one characteristic can be updated per request
Storage Persistence
In order to make a storage entity persistent we can update the resource using the following xml:
<STORAGE href="http://cloud.server:4567/storage/3">
<ID>3</ID>
<PERSISTENT>YES</PERSISTENT>
</STORAGE>
Next, using the occi-storage command we will update the Storage resource:
$ occi-storage --url http://cloud.server:4567 --username oneadmin --password opennebula update image.xml
<STORAGE href="http://cloud.server:4567/storage/3">
<ID>3</ID>
<NAME>Ubuntu Desktop</NAME>
<TYPE>OS</TYPE>
<DESCRIPTION>Ubuntu 10.04 desktop for students.</DESCRIPTION>
<PUBLIC>NO</PUBLIC>
<PERSISTENT>YES</PERSISTENT>
<SIZE>41943040</SIZE>
</STORAGE>
Publish a Storage
In order to publish a storage entity so that other users can use it, we can update the resource using the following xml:
<STORAGE href="http://cloud.server:4567/storage/3">
<ID>3</ID>
<PUBLIC>YES</PUBLIC>
</STORAGE>
Next, using the occi-storage command we will update the Storage resource:
$ occi-storage --url http://cloud.server:4567 --username oneadmin --password opennebula update image.xml
<STORAGE href="http://cloud.server:4567/storage/3">
<ID>3</ID>
<NAME>Ubuntu Desktop</NAME>
<TYPE>OS</TYPE>
<DESCRIPTION>Ubuntu 10.04 desktop for students.</DESCRIPTION>
<PUBLIC>YES</PUBLIC>
<PERSISTENT>YES</PERSISTENT>
<SIZE>41943040</SIZE>
</STORAGE>
Network
Some of the characteristics of a network entity can be modified using the occi-network update command:
Warning: Only one characteristic can be updated per request
Publish a Network
In order to publish a network entity so that other users can use it, we can update the resource using the following xml:
<NETWORK href="http://cloud.server:4567/network/0">
<ID>0</ID>
<PUBLIC>YES</PUBLIC>
</NETWORK>
Next, using the occi-network command we will update the Network resource:
$ occi-network --url http://cloud.server:4567 --username oneadmin --password opennebula update vnet.xml
<NETWORK href="http://cloud.server:4567/network/0">
<ID>0</ID>
<NAME>MyServiceNetwork</NAME>
<ADDRESS>192.168.1.1</ADDRESS>
<SIZE>200</SIZE>
<PUBLIC>YES</PUBLIC>
</NETWORK>
Compute
Some of the characteristics of a compute entity can be modified using the occi-compute update command:
Warning: Only one characteristic can be updated per request
Change the Compute State
In order to change the Compute state, we can update the resource using the following xml:
<COMPUTE href="http://cloud.server:4567/compute/0">
<ID>0</ID>
<STATE>STOPPED</STATE>
</COMPUTE>
Next, using the occi-compute command we will update the Compute resource:
$ occi-compute --url http://cloud.server:4567 --username oneadmin --password opennebula update vm.xml
The available states to update a Compute resource are:
STOPPED
SUSPENDED
RESUME
CANCEL
SHUTDOWN
REBOOT
RESET
DONE
Save a Compute Disk in a New Storage
In order to save a Compute disk in a new image, we can update the resource using the following xml. The disk will be
saved after shutting down the Compute.
<COMPUTE href="http://cloud.server:4567/compute/0">
<ID>0</ID>
<DISK id="0">
<STORAGE href="http://cloud.server:4567/storage/0" name="first_image"/>
<SAVE_AS name="save_as1"/>
</DISK>
</COMPUTE>
Next, using the occi-compute command we will update the Compute resource:
$ occi-compute --url http://cloud.server:4567 --username oneadmin --password opennebula update vm.xml
<COMPUTE href="http://cloud.server:4567/compute/0">
<ID>0</ID>
<CPU>1</CPU>
<MEMORY>1024</MEMORY>
<NAME>MyCompute</NAME>
<INSTANCE_TYPE>small</INSTANCE_TYPE>
<STATE>STOPPED</STATE>
<DISK id="0">
<STORAGE href="http://cloud.server:4567/storage/3" name="Ubuntu Desktop"/>
<SAVE_AS href="http://cloud.server:4567/storage/7"/>
<TYPE>DISK</TYPE>
<TARGET>hda</TARGET>
</DISK>
<NIC>
<NETWORK href="http://cloud.server:4567/network/0" name="MyServiceNetwork"/>
<IP>192.168.1.12</IP>
<MAC>02:00:c0:a8:01:0c</MAC>
</NIC>
<CONTEXT>
<DATA>DATA1</DATA>
<HOSTNAME>MAINHOST</HOSTNAME>
<TARGET>hdb</TARGET>
</CONTEXT>
</COMPUTE>
Create a Volume and Attach It to a Running VM
In this example we will show how to create a new volume using the following template and attach it to a running
compute resource.
<STORAGE>
<NAME>Volume1</NAME>
<TYPE>DATABLOCK</TYPE>
<DESCRIPTION>Volume to be hotplugged</DESCRIPTION>
<PUBLIC>NO</PUBLIC>
<PERSISTENT>NO</PERSISTENT>
<FSTYPE>ext3</FSTYPE>
<SIZE>10</SIZE>
</STORAGE>
$ cat /tmp/storage
<STORAGE>
<NAME>Volume1</NAME>
<TYPE>DATABLOCK</TYPE>
<DESCRIPTION>Volume to be hotplugged</DESCRIPTION>
<PUBLIC>NO</PUBLIC>
<PERSISTENT>NO</PERSISTENT>
<FSTYPE>ext3</FSTYPE>
<SIZE>10</SIZE>
</STORAGE>
$ occi-storage create /tmp/storage
<STORAGE href="http://127.0.0.1:4567/storage/5">
<ID>5</ID>
<NAME>Volume1</NAME>
<USER href="http://127.0.0.1:4567/user/0" name="oneadmin"/>
<GROUP>oneadmin</GROUP>
<STATE>READY</STATE>
<TYPE>DATABLOCK</TYPE>
<DESCRIPTION>Volume to be hotplugged</DESCRIPTION>
<SIZE>10</SIZE>
<FSTYPE>ext3</FSTYPE>
<PUBLIC>NO</PUBLIC>
<PERSISTENT>NO</PERSISTENT>
</STORAGE>
$ occi-compute list
<COMPUTE_COLLECTION>
<COMPUTE href="http://127.0.0.1:4567/compute/4" name="one-4"/>
<COMPUTE href="http://127.0.0.1:4567/compute/6" name="one-6"/>
</COMPUTE_COLLECTION>
$ occi-storage list
<STORAGE_COLLECTION>
<STORAGE name="ttylinux - kvm" href="http://127.0.0.1:4567/storage/1"/>
<STORAGE name="Ubuntu Server 12.04 (Precise Pangolin) - kvm" href="http://127.0.0.1:4567/storage/2"/>
<STORAGE name="Volume1" href="http://127.0.0.1:4567/storage/5"/>
</STORAGE_COLLECTION>
$ occi-compute attachdisk 6 5
<COMPUTE href="http://127.0.0.1:4567/compute/6">
<ID>6</ID>
<USER name="oneadmin" href="http://127.0.0.1:4567/user/0"/>
<GROUP>oneadmin</GROUP>
<CPU>1</CPU>
<MEMORY>512</MEMORY>
<NAME>one-6</NAME>
<STATE>ACTIVE</STATE>
<DISK id="0">
<STORAGE name="Ubuntu Server 12.04 (Precise Pangolin) - kvm" href="http://127.0.0.1:4567/storage/2"/>
<TYPE>FILE</TYPE>
<TARGET>hda</TARGET>
</DISK>
<DISK id="1">
<STORAGE name="Volume1" href="http://127.0.0.1:4567/storage/5"/>
<TYPE>FILE</TYPE>
<TARGET>sda</TARGET>
</DISK>
<NIC>
<NETWORK name="local-net" href="http://127.0.0.1:4567/network/0"/>
<IP>192.168.122.6</IP>
<MAC>02:00:c0:a8:7a:06</MAC>
</NIC>
</COMPUTE>
Warning: You can obtain more information on how to use the above commands by accessing their usage help, passing them the -h flag. For instance, a -T option is available to set a connection timeout.
Warning: In platforms where curl is not available or buggy (i.e. CentOS), a -M option is available to perform the upload using the native Ruby Net::HTTP with http multipart.
7.5 OpenNebula EC2 User Guide
The EC2 Query API offers the functionality exposed by Amazon EC2: upload images, register them, run, monitor and
terminate instances, etc. In short, Query requests are HTTP or HTTPS requests that use the HTTP verb GET or POST
and a Query parameter.
OpenNebula implements a subset of the EC2 Query interface, enabling the creation of public clouds managed by
OpenNebula.
7.5.1 AMIs
upload image: Uploads an image to OpenNebula
describe images: Lists all registered images belonging to one particular user.
7.5.2 Instances
run instances: Runs an instance of a particular image (that needs to be referenced).
describe instances: Outputs a list of launched images belonging to one particular user.
terminate instances: Shuts down a set of virtual machines (or cancels them, depending on their state).
reboot instances: Reboots a set of virtual machines.
start instances: Starts a set of virtual machines.
stop instances: Stops a set of virtual machines.
7.5.3 EBS
create volume: Creates a new DATABLOCK in OpenNebula
delete volume: Deletes an existing DATABLOCK.
describe volumes: Describe all available DATABLOCKs for this user
attach volume: Attaches a DATABLOCK to an instance
detach volume: Detaches a DATABLOCK from an instance
create snapshot:
delete snapshot:
describe snapshot:
7.5.4 Elastic IPs
allocate address: Allocates a new elastic IP address for the user
release address: Releases a public IP of the user
describe addresses: Lists elastic IP addresses
associate address: Associates a public IP of the user with a given instance
disassociate address: Disassociates a public IP of the user currently associated with an instance
7.5.5 Keypairs
create keypair: Creates the named keypair
delete keypair: Deletes the named keypair, removes the associated keys
describe keypairs: List and describe the key pairs available to the user
7.5.6 Tags
create-tags
describe-tags
remove-tags
Command descriptions can be accessed from the Command Line Reference.
User Account Configuration
An account is needed in order to use the OpenNebula cloud. The cloud administrator will be responsible for assigning these accounts, which have a one to one correspondence with OpenNebula accounts, so all the cloud administrator has to do is check the configuration guide to set up accounts; the OpenNebula cloud account will be created automatically.
In order to use such an account, the end user can make use of clients programmed to access the services described in the previous section. For this, she has to set up her environment, particularly the following aspects:
Authentication: This can be achieved in three different ways, here listed in order of priority (i.e. values specified in the argument line supersede environmental variables)
Using the commands' arguments. All the commands accept an Access Key (as the OpenNebula username) and a Secret Key (as the OpenNebula hashed password)
Using the EC2_ACCESS_KEY and EC2_SECRET_KEY environment variables the same way as the arguments
If none of the above is available, the ONE_AUTH variable will be checked for authentication (with the same format used for the OpenNebula CLI).
Server location: The commands need to know where the OpenNebula cloud service is running. That information needs to be stored within the EC2_URL environment variable (in the form of an http URL, including the port if it is not the standard 80).
Warning: The EC2_URL has to use the FQDN of the EC2-Query Server
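For example, a client environment for the econe tools could be set up as follows (illustrative values; the Secret Key is the SHA1 hash of the user's password):
$ export EC2_ACCESS_KEY=clouduser
$ export EC2_SECRET_KEY=e17a13.0834936f71bb3242772d25150d40791e72
$ export EC2_URL=http://cloudserver.org:4567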
Hello Cloud!
Let's take a walk through a typical usage scenario. In this brief scenario it will be shown how to upload an image to the OpenNebula image repository, how to register it in the OpenNebula cloud and perform operations upon it.
upload_image
Assuming we have a working Gentoo installation residing in an .img file, we can upload it into the OpenNebula cloud using the econe-upload command:
$ econe-upload /images/gentoo.img
Success: ImageId ami-00000001
describe_images
We will need the ImageId to launch the image, so in case we forgot it we can list registered images using the econe-describe-images command:
$ econe-describe-images -H
Owner ImageId Status Visibility Location
------------------------------------------------------------------------------
helen ami-00000001 available private 19ead5de585f43282acab4060bfb7a07
run_instance
Once we recall the ImageId, we will need to use the econe-run-instances command to launch a Virtual Machine instance of our image:
$ econe-run-instances -H ami-00000001
Owner ImageId InstanceId InstanceType
------------------------------------------------------------------------------
helen ami-00000001 i-15 m1.small
We will need the InstanceId to monitor and shutdown our instance, so we better write down that i-15.
describe_instances
If we have too many instances launched and we don't remember every one of them, we can ask econe-describe-instances to show us which instances we have submitted.
$ econe-describe-instances -H
Owner Id ImageId State IP Type
------------------------------------------------------------------------------------------------------------
helen i-15 ami-00000001 pending 147.96.80.33 m1.small
We can see that the instance with Id i-15 has been launched, but it is still pending, i.e., it still needs to be deployed into a physical host. If we try the same command again after a short while, we should see it running as in the following excerpt:
$ econe-describe-instances -H
Owner Id ImageId State IP Type
------------------------------------------------------------------------------------------------------------
helen i-15 ami-00000001 running 147.96.80.33 m1.small
terminate_instances
After we have put the Virtual Machine to good use, it is time to shut it down to make space for other Virtual Machines (and, presumably, to stop being billed for it). For that we can use econe-terminate-instances, passing it as an argument the InstanceId that identifies our Virtual Machine:
$ econe-terminate-instances i-15
Success: Terminating i-15 in running state
Warning: You can obtain more information on how to use the above commands by accessing their usage help, passing them the -h flag.
7.6 EC2 Ecosystem
In order to interact with the EC2 Service that OpenNebula implements you can use the client included in the OpenNebula distribution, but you can also choose one of the well known tools that interact with cloud servers through the EC2 Query API, like the Firefox extension HybridFox, or the command line tools Euca2ools.
7.6.1 HybridFox
HybridFox is a Mozilla Firefox extension for managing your Amazon EC2 account. Launch new instances, mount
Elastic Block Storage volumes, map Elastic IP addresses, and more.
Configuration
You have to set up the credentials to interact with OpenNebula, by pressing the Credentials button:
1. Account Name, add a name for this account
2. AWS Access Key, add your OpenNebula username
3. AWS Secret Access Key, add your OpenNebula SHA1 hashed password
Also you have to specify, in a new Region, the endpoint where the EC2 Service is running, by pressing the Regions button. Take care to use exactly the same URL and port that is specified in the econe.conf file, otherwise you will get an AuthFailure error.
Warning: If you have problems adding a new region, try to add it manually in the ec2ui.endpoints variable inside the Firefox about:config page.
Typical usage scenarios
List images
Run instances
Control instances
You can also use HybridFox, a similar Mozilla Firefox extension, to interact with cloud services through the EC2 Query API.
7.6.2 Euca2ools
Euca2ools are command-line tools for interacting with Web services that export a REST/Query-based API compatible
with Amazon EC2 and S3 services.
You have to set the following environment variables in order to interact with the OpenNebula EC2 Query Server. The EC2_URL will be the same endpoint as defined in the /etc/one/econe.conf file of OpenNebula. The EC2_ACCESS_KEY will be the OpenNebula username and the EC2_SECRET_KEY the OpenNebula SHA1-hashed user password.
~$ env | grep EC2
EC2_SECRET_KEY=e17a13.0834936f71bb3242772d25150d40791e72
EC2_URL=http://localhost:4567
EC2_ACCESS_KEY=oneadmin
Typical usage scenarios
List images
~$ euca-describe-images
IMAGE ami-00000001 srv/cloud/images/1 daniel available private i386 machine
IMAGE ami-00000002 srv/cloud/images/2 daniel available private i386 machine
IMAGE ami-00000003 srv/cloud/images/3 daniel available private i386 machine
IMAGE ami-00000004 srv/cloud/images/4 daniel available private i386 machine
List instances
~$ euca-describe-instances
RESERVATION default daniel default
INSTANCE i-0 ami-00000002 192.168.0.1 192.168.0.1 running default 0 m1.small 2010-06-21T18:51:13+02:00 default eki-EA801065 eri-1FEE1144
INSTANCE i-3 ami-00000002 192.168.0.4 192.168.0.4 running default 0 m1.small 2010-06-21T18:53:30+02:00 default eki-EA801065 eri-1FEE1144
Run instances
~$ euca-run-instances --instance-type m1.small ami-00000001
RESERVATION r-47a5402e daniel default
INSTANCE i-4 ami-00000001 192.168.0.2 192.168.0.2 pending default 2010-06-22T11:54:07+02:00 None None
OpenNebula 4.6 Integration Guide
Release 4.6
OpenNebula Project
April 28, 2014
CONTENTS
1 Getting Started
1.1 Scalable Architecture and APIs
2 Cloud Interfaces
2.1 OpenNebula OCCI Specification
3 System Interfaces
3.1 XML-RPC API
3.2 Ruby OpenNebula Cloud API
3.3 Java OpenNebula Cloud API
3.4 OneFlow Specification
4 Infrastructure Integration
4.1 Using Hooks
4.2 Virtualization Driver
4.3 Storage Driver
4.4 Monitoring Driver
4.5 Networking Driver
4.6 Authentication Driver
4.7 Cloud Bursting Driver
5 References
5.1 Custom Routes for Sunstone Server
5.2 Building from Source Code
5.3 Build Dependencies
CHAPTER
ONE
GETTING STARTED
1.1 Scalable Architecture and APIs
OpenNebula has been designed to be easily adapted to any infrastructure and easily extended with new components. The result is a modular system that can implement a variety of Cloud architectures and can interface with multiple datacenter services. In this Guide we review the main interfaces of OpenNebula, describe their use, and give pointers to additional documentation for each one.
We have classified the interfaces in two categories: end-user cloud and system interfaces. Cloud interfaces are primarily used to develop tools targeted to the end-user, and they provide a high level abstraction of the functionality provided by the Cloud. On the other hand, the system interfaces expose the full functionality of OpenNebula and are mainly used to adapt and tune the behavior of OpenNebula to the target infrastructure.
1.1.1 1. Cloud Interfaces
Cloud interfaces enable you to manage virtual machines, networks and images through a simple and easy-to-use REST API. The Cloud interfaces hide most of the complexity of a Cloud and are specially suited for end-users. OpenNebula implements two different interfaces, namely:
EC2-Query API. OpenNebula implements the functionality offered by Amazon's EC2 API, mainly the operations related to virtual machine management. In this way, you can use any EC2 Query tool to access your OpenNebula Cloud.
OCCI-OGF. The OpenNebula OCCI API is a RESTful service to create, control and monitor cloud resources using an implementation of the OGF OCCI API specification based on the draft 0.8
Use the cloud interface if... you are developing portals, tools or specialized solutions for end-users.
You can find more information at... the EC2-Query reference and OCCI reference guides.
1.1.2 2. System Interfaces
2.1. The OpenNebula XML-RPC Interface
The XML-RPC interface is the primary interface for OpenNebula, and it exposes all the functionality to interface the OpenNebula daemon. Through the XML-RPC interface you can control and manage any OpenNebula resource, including virtual machines, networks, images, users, hosts and clusters.
Use the XML-RPC interface if... you are developing specialized libraries for Cloud applications or you need a low-level interface with the OpenNebula core.
You can find more information at... the XML-RPC reference guide.
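As a quick illustration of the raw protocol, the following sketch calls one.vmpool.info with curl; the session string and endpoint are placeholders, and the exact parameter list should be checked against the XML-RPC reference guide:
$ curl -s http://localhost:2633/RPC2 -H "Content-Type: text/xml" --data '
<?xml version="1.0"?>
<methodCall>
  <methodName>one.vmpool.info</methodName>
  <params>
    <param><value><string>oneadmin:password</string></value></param>
    <param><value><i4>-2</i4></value></param>  <!-- filter: all resources -->
    <param><value><i4>-1</i4></value></param>  <!-- range start -->
    <param><value><i4>-1</i4></value></param>  <!-- range end -->
    <param><value><i4>-1</i4></value></param>  <!-- any VM state -->
  </params>
</methodCall>'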
2.2. The OpenNebula Cloud API (OCA)
The OpenNebula Cloud API provides a simplified and convenient way to interface the OpenNebula core. The OCA interfaces expose the same functionality as the XML-RPC interface. OpenNebula includes two language bindings for OCA: Ruby and Java.
Use the OCA interface if... you are developing advanced IaaS tools that need full access to the OpenNebula functionality.
You can find more information at... the OCA-Ruby reference guide and the OCA-Java reference guide.
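As a quick illustration, the canonical OCA-Ruby pattern for listing virtual machines looks like the sketch below. It assumes the OpenNebula Ruby bindings are installed; the endpoint and credentials are hypothetical.

require 'opennebula'
include OpenNebula

# Credentials and endpoint are assumptions: adjust to your deployment.
client = Client.new('oneadmin:opennebula', 'http://localhost:2633/RPC2')

# -2 requests the resources of all users (see the pool filter flags later on)
vm_pool = VirtualMachinePool.new(client, -2)
rc = vm_pool.info
raise rc.message if OpenNebula.is_error?(rc)

vm_pool.each do |vm|
  puts "VM #{vm.id}: #{vm.name}"
end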
2.3. The OpenNebula Drivers Interfaces
The interactions between OpenNebula and the Cloud infrastructure are performed by specific drivers, each one addressing a particular area:
Storage. The OpenNebula core issues abstract storage operations (e.g. clone or delete) that are implemented by specific programs that can be replaced or modified to interface special storage backends and file-systems.
Virtualization. The interaction with the hypervisors is also implemented with custom programs to boot, stop or migrate a virtual machine. This allows you to specialize each VM operation to perform custom actions.
Monitoring. Monitoring information is also gathered by external probes. You can add additional probes to include custom monitoring metrics that can later be used to allocate virtual machines or for accounting purposes.
Authorization. OpenNebula can also be configured to use an external program to authorize and authenticate user requests. In this way, you can implement any access policy to Cloud resources.
Networking. The hypervisor is also prepared with the network configuration for each Virtual Machine.
Use the driver interfaces if... you need OpenNebula to interface any specific storage, virtualization, monitoring or authorization system already deployed in your datacenter, or to tune the behavior of the standard OpenNebula drivers.
You can find more information at... the virtualization system, storage system, information system, authentication system and network system guides.
2.4. The OpenNebula Database
OpenNebula saves its state and a wealth of accounting information in a persistent database. OpenNebula can use a MySQL or SQLite database that can be easily interfaced with any DB tool.
Use the OpenNebula DB if... you need to generate custom accounting or billing reports.
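As a minimal sketch, assuming the default SQLite backend at /var/lib/one/one.db and the stock schema in which each VM is stored as an XML document in the body column of the vm_pool table (the path, table and column names are assumptions; verify them against your installation), a report script could start like this:

require 'sqlite3'   # gem install sqlite3

# Path and schema are assumptions: check your deployment before relying on them.
db = SQLite3::Database.new('/var/lib/one/one.db')

# Each row of vm_pool stores the full VM document as XML in the body column
db.execute('SELECT oid, name, body FROM vm_pool') do |oid, name, body|
  puts "VM ##{oid} (#{name}): #{body.length} bytes of XML"
end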
CHAPTER TWO: CLOUD INTERFACES
2.1 OpenNebula OCCI Specification
2.1.1 Overview
The OpenNebula OCCI API is a RESTful service to create, control and monitor cloud resources, using an implementation of the OGF OCCI API specification based on draft 0.8. This implementation also includes some extensions, requested by the community, to support OpenNebula-specific functionality. There are two types of resources that resemble the basic entities managed by the OpenNebula system, namely:
Pool Resources (PR): Represent a collection of elements owned by a given user. In particular, five collections are defined:
<COLLECTIONS>
<COMPUTE_COLLECTION href="http://localhost:4567/compute"/>
<INSTANCE_TYPE_COLLECTION href="http://localhost:4567/instance_type"/>
<NETWORK_COLLECTION href="http://localhost:4567/network"/>
<STORAGE_COLLECTION href="http://localhost:4567/storage"/>
<USER_COLLECTION href="http://localhost:4567/user"/>
</COLLECTIONS>
Entry Resources (ER): Represent a single entry within a given collection: COMPUTE, NETWORK, STORAGE, INSTANCE_TYPE and USER.
Each one of the ERs in the pool is described by an element (e.g. COMPUTE, INSTANCE_TYPE, NETWORK, STORAGE or USER) with one attribute:
href, a URI for the ER
<COMPUTE_COLLECTION>
<COMPUTE href="http://www.opennebula.org/compute/310" name="TestVM"/>
<COMPUTE href="http://www.opennebula.org/compute/432" name="Server1"/>
<COMPUTE href="http://www.opennebula.org/compute/123" name="Server2"/>
</COMPUTE_COLLECTION>
A COMPUTE entry resource can be linked to one or more STORAGE or NETWORK resources and one
INSTANCE_TYPE and USER.
2.1.2 Authentication & Authorization
User authentication is HTTP Basic access authentication, to comply with the REST philosophy. The credentials passed should be the user name and password. If you are not using the occi tools provided by OpenNebula, the password has to be SHA1-hashed, as it is stored hashed in the database, instead of using the plain version.
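A minimal sketch of such an authenticated request in Ruby, assuming an OCCI server on the localhost:4567 endpoint shown in the collection listing above; the user and password are hypothetical:

require 'net/http'
require 'digest/sha1'

uri = URI('http://localhost:4567/compute')

req = Net::HTTP::Get.new(uri)
# The password is sent SHA1-hashed, matching how it is stored in the database
req.basic_auth('cloud_user', Digest::SHA1.hexdigest('my_password'))

res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
puts res.code   # expect 200 OK with the COMPUTE collection in the body
puts res.body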
2.1.3 HTTP Headers
The following headers are compulsory:
Content-Length: The size of the Entity Body in octets
Content-Type: application/xml
Uploading images needs HTTP multipart support, and also the following header:
Content-Type: multipart/form-data
2.1.4 Return Codes
The OpenNebula Cloud API uses the following subset of HTTP Status codes:
200 OK : The request has succeeded.
201 Created : Request was successful and a new resource has been created
202 Accepted : The request has been accepted for processing, but the processing has not been completed
204 No Content : The request has been accepted for processing, but no info in the response
400 Bad Request : Malformed syntax
401 Unauthorized : Bad authentication
403 Forbidden : Bad authorization
404 Not Found : Resource not found
500 Internal Server Error : The server encountered an unexpected condition which prevented it from fulfilling the request.
501 Not Implemented : The functionality requested is not supported
The methods specified below are described without taking into account 4xx errors (which can be inferred from the authorization information in the section above) and 5xx errors (which are method independent). HTTP verbs not defined for a particular entity will return a 501 Not Implemented.
2.1.5 Resource Representation
Network
The NETWORK element defines a virtual network that interconnects those COMPUTES with a network interface card attached to that network. The traffic of each network is isolated from any other network, so it constitutes a broadcast domain.
The following elements define a NETWORK:
ID, the uuid of the NETWORK
NAME describing the NETWORK
USER link to the USER owner of the NETWORK
GROUP of the NETWORK
DESCRIPTION of the NETWORK
ADDRESS, of the NETWORK
SIZE, of the network, defaults to C
The elements in bold can be provided in a POST request in order to create a new NETWORK resource based on those
parameters.
Example:
<NETWORK href="http://www.opennebula.org/network/123">
<ID>123</ID>
<NAME>BlueNetwork</NAME>
<USER href="http://www.opennebula.org/user/33" name="cloud_user"/>
<GROUP>cloud_group</GROUP>
<DESCRIPTION>This NETWORK is blue</DESCRIPTION>
<ADDRESS>192.168.0.1</ADDRESS>
<SIZE>C</SIZE>
</NETWORK>
Storage
The STORAGE is a resource containing an operating system or data, to be used as a virtual machine disk:
ID the uuid of the STORAGE
NAME describing the STORAGE
USER link to the USER owner of the STORAGE
GROUP of the STORAGE
DESCRIPTION of the STORAGE
TYPE, type of the image
OS: contains a working operating system
CDROM: readonly data
DATABLOCK: storage for data, which can be accessed and modied from different Computes
SIZE, of the image in MBs
FSTYPE, in case of DATABLOCK, the type of filesystem desired
The elements in bold can be provided in a POST request in order to create a new STORAGE resource based on those parameters.
Example:
<STORAGE href="http://www.opennebula.org/storage/123">
<ID>123</ID>
<NAME>Ubuntu Desktop</NAME>
<USER href="http://www.opennebula.org/user/33" name="cloud_user"/>
<GROUP>cloud_group</GROUP>
<DESCRIPTION>Ubuntu 10.04 desktop for students.</DESCRIPTION>
<TYPE>OS</TYPE>
<SIZE>2048</SIZE>
</STORAGE>
Compute
The COMPUTE element defines a virtual machine by specifying its basic configuration attributes such as NIC or DISK.
The following elements define a COMPUTE:
ID, the uuid of the COMPUTE.
NAME, describing the COMPUTE.
USER link to the USER owner of the COMPUTE
GROUP of the COMPUTE
CPU number of CPUs of the COMPUTE
MEMORY MBs of MEMORY of the COMPUTE
INSTANCE_TYPE, link to an INSTANCE_TYPE resource
DISK, the block devices attached to the virtual machine.
STORAGE link to a STORAGE resource
TARGET
SAVE_AS link to a STORAGE resource to save the disk image when the COMPUTE is DONE
TYPE
NIC, the network interfaces.
NETWORK link to a NETWORK resource
IP
MAC
CONTEXT, key value pairs to be passed on creation to the COMPUTE.
KEY1 VALUE1
KEY2 VALUE2
STATE, the state of the COMPUTE. This can be one of: INIT, PENDING, HOLD, ACTIVE, STOPPED, SUSPENDED, DONE or FAILED.
Example:
<COMPUTE href="http://www.opennebula.org/compute/32">
<ID>32</ID>
<NAME>Web Server</NAME>
<CPU>1</CPU>
<MEMORY>1024</MEMORY>
<USER href="http://0.0.0.0:4567/user/310" name="cloud_user"/>
<GROUP>cloud_group</GROUP>
<INSTANCE_TYPE href="http://0.0.0.0:4567/instance_type/small">small</INSTANCE_TYPE>
<STATE>ACTIVE</STATE>
<DISK>
<STORAGE href="http://www.opennebula.org/storage/34" name="Ubuntu10.04"/>
<TYPE>OS</TYPE>
<TARGET>hda</TARGET>
</DISK>
<DISK>
<STORAGE href="http://www.opennebula.org/storage/24" name="testingDB"/>
<SAVE_AS href="http://www.opennebula.org/storage/54"/>
<TYPE>CDROM</TYPE>
<TARGET>hdc</TARGET>
</DISK>
<NIC>
<NETWORK href="http://www.opennebula.org/network/12" name="Private_LAN"/>
<MAC>00:ff:72:31:23:17</MAC>
<IP>192.168.0.12</IP>
</NIC>
<NIC>
<NETWORK href="http://www.opennebula.org/network/10" name="Public_IPs"/>
<MAC>00:ff:72:17:20:27</MAC>
<IP>192.168.0.25</IP>
</NIC>
<CONTEXT>
<PUB_KEY>FDASF324DSFA3241DASF</PUB_KEY>
</CONTEXT>
</COMPUTE>
Instance type
An INSTANCE_TYPE specifies the COMPUTE capacity values
ID, the uuid of the INSTANCE_TYPE.
NAME, describing the INSTANCE_TYPE.
CPU number of CPUs of the INSTANCE_TYPE
MEMORY MBs of MEMORY of the INSTANCE_TYPE
Example:
<INSTANCE_TYPE href="http://www.opennebula.org/instance_type/small">
<ID>small</ID>
<NAME>small</NAME>
<CPU>1</CPU>
<MEMORY>1024</MEMORY>
</INSTANCE_TYPE>
User
A USER represents an OpenNebula user account, with its quotas and current resource usage:
ID, the uuid of the USER.
NAME, describing the USER.
GROUP, of the USER
QUOTA, the resource quotas granted to the USER: CPU, MEMORY, NUM_VMS, STORAGE
USAGE, the resources currently consumed by the USER: CPU, MEMORY, NUM_VMS, STORAGE
Example:
<USER href="http://www.opennebula.org/user/42">
<ID>42</ID>
<NAME>cloud_user</NAME>
<GROUP>cloud_group</GROUP>
<QUOTA>
<CPU>8</CPU>
<MEMORY>4096</MEMORY>
<NUM_VMS>10</NUM_VMS>
<STORAGE>0</STORAGE>
</QUOTA>
<USAGE>
<CPU>2</CPU>
<MEMORY>512</MEMORY>
<NUM_VMS>2</NUM_VMS>
<STORAGE>0</STORAGE>
</USAGE>
</USER>
2.1.6 Request Methods
GET /: List the available collections in the cloud. Response: 200 OK, an XML representation of the available collections in the http body.
Network
GET /network: List the contents of the NETWORK collection. Optionally a verbose param (/network?verbose=true) can be provided to retrieve an extended version of the collection. Response: 200 OK, an XML representation of the collection in the http body.
POST /network: Create a new NETWORK. An XML representation of a NETWORK without the ID element should be passed in the http body. Response: 201 Created, an XML representation of the new NETWORK with the ID.
GET /network/<id>: Show the NETWORK resource identified by <id>. Response: 200 OK, an XML representation of the NETWORK in the http body.
PUT /network/<id>: Update the NETWORK resource identified by <id>. Response: 202 Accepted, the update request is being processed; polling is required to confirm the update.
DELETE /network/<id>: Delete the NETWORK resource identified by <id>. Response: 204 No Content.
Storage
GET /storage: List the contents of the STORAGE collection. Optionally a verbose param (/storage?verbose=true) can be provided to retrieve an extended version of the collection. Response: 200 OK, an XML representation of the collection in the http body.
POST /storage: Create a new STORAGE. An XML representation of a STORAGE without the ID element should be passed in the http body. Response: 201 Created, an XML representation of the new STORAGE with the ID.
GET /storage/<id>: Show the STORAGE resource identified by <id>. Response: 200 OK, an XML representation of the STORAGE in the http body.
PUT /storage/<id>: Update the STORAGE resource identified by <id>. Response: 202 Accepted, the update request is being processed; polling is required to confirm the update.
DELETE /storage/<id>: Delete the STORAGE resource identified by <id>. Response: 204 No Content.
Compute
GET /compute: List the contents of the COMPUTE collection. Optionally a verbose param (/compute?verbose=true) can be provided to retrieve an extended version of the collection. Response: 200 OK, an XML representation of the pool in the http body.
POST /compute: Create a new COMPUTE. An XML representation of a COMPUTE without the ID element should be passed in the http body. Response: 201 Created, an XML representation of the new COMPUTE with the ID.
GET /compute/<id>: Show the COMPUTE resource identified by <id>. Response: 200 OK, an XML representation of the COMPUTE in the http body.
PUT /compute/<id>: Update the COMPUTE resource identified by <id>. Response: 202 Accepted, the update request is being processed; polling is required to confirm the update.
DELETE /compute/<id>: Delete the COMPUTE resource identified by <id>. Response: 204 No Content, the COMPUTE has been successfully deleted.
Instance type
GET /instance_type: List the contents of the INSTANCE_TYPE collection. Optionally a verbose param (/instance_type?verbose=true) can be provided to retrieve an extended version of the collection. Response: 200 OK, an XML representation of the collection in the http body.
GET /instance_type/<id>: Show the INSTANCE_TYPE resource identified by <id>. Response: 200 OK, an XML representation of the INSTANCE_TYPE in the http body.
User
GET /user: List the contents of the USER collection. Optionally a verbose param (/user?verbose=true) can be provided to retrieve an extended version of the collection. Response: 200 OK, an XML representation of the collection in the http body.
GET /user/<id>: Show the USER resource identified by <id>. Response: 200 OK, an XML representation of the USER in the http body.
2.1.7 Implementation Notes
Authentication
It is recommended that the server-client communication is performed over HTTPS to avoid sending user authentication
information in plain text.
Notifications
The HTTP protocol does not provide means for notification, so this API relies on asynchronous polling to find out whether a RESOURCE update is successful or not.
2.1.8 Examples
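The following illustrative sketch (not taken from a real deployment) creates a NETWORK through the OCCI interface, assuming the localhost:4567 endpoint used throughout this section; the user, password and network values are hypothetical:

require 'net/http'
require 'digest/sha1'

uri = URI('http://localhost:4567/network')

# XML representation of the new NETWORK, without the ID element
payload = <<-XML
<NETWORK>
  <NAME>BlueNetwork</NAME>
  <DESCRIPTION>This NETWORK is blue</DESCRIPTION>
  <ADDRESS>192.168.0.1</ADDRESS>
  <SIZE>C</SIZE>
</NETWORK>
XML

req = Net::HTTP::Post.new(uri)
req.basic_auth('cloud_user', Digest::SHA1.hexdigest('my_password'))
req['Content-Type'] = 'application/xml'
req.body = payload

res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
puts res.code   # expect 201 Created, with the new NETWORK (including its ID)
puts res.body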
CHAPTER THREE: SYSTEM INTERFACES
3.1 XML-RPC API
This reference documentation describes the xml-rpc methods exposed by OpenNebula. Each description consists of
the method name and the input and output values.
All xml-rpc responses share a common structure.
Type Data Type Description
OUT Boolean True or false, indicating whether the request succeeded.
OUT String If an error occurs this is the error message.
OUT Int Error code.
The output will always consist of three values. The first and third ones are fixed, but the second one will contain the String error message only in case of failure. If the method is successful, the returned value may be of another Data Type.
The Error Code will contain one of the following values:
Value Code Meaning
0x0000 SUCCESS Success response.
0x0100 AUTHENTICATION User could not be authenticated.
0x0200 AUTHORIZATION User is not authorized to perform the requested action.
0x0400 NO_EXISTS The requested resource does not exist.
0x0800 ACTION FIXME
0x1000 XML_RPC_API FIXME
0x2000 INTERNAL FIXME
Warning: All methods expect a session string associated to the connected user as the first parameter. It has to be formed with the contents of the ONE_AUTH file, which will be <username>:<password> with the default core auth driver.
Warning: Each XML-RPC request has to be authenticated and authorized. See the Auth Subsystem documentation for more information.
The information strings returned by the one.*.info methods are XML-formatted. The complete XML Schemas (XSD) reference is included at the end of this page. We encourage you to use the -x option of the command line interface to collect sample outputs from your own infrastructure.
The methods that accept XML templates require the root element to be TEMPLATE. For instance, this template:
NAME = abc
MEMORY = 1024
ATT1 = value1
Can also be given to OpenNebula with the following XML:
<TEMPLATE>
<NAME>abc</NAME>
<MEMORY>1024</MEMORY>
<ATT1>value1</ATT1>
</TEMPLATE>
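Either representation can be passed to the allocation calls. A minimal Ruby sketch, assuming the default endpoint and hypothetical credentials:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
session = 'oneadmin:opennebula'   # assumption: contents of your ONE_AUTH file

template = "NAME = abc\nMEMORY = 1024\nATT1 = value1"

# one.template.allocate returns [success, new_id_or_error_string, error_code]
success, result, errno = client.call('one.template.allocate', session, template)
puts success ? "Created template #{result}" : "Error #{errno}: #{result}"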
3.1.1 Authorization Requests Reference
For each XML-RPC request, the session token is authenticated, and after that the Request Manager generates an
authorization request that can include more than one operation. The following tables document these requests.
onevm
onevm command XML-RPC Method Auth. Request
deploy one.vm.deploy VM:ADMIN HOST:MANAGE
delete boot shutdown suspend hold stop resume release poweroff reboot one.vm.action VM:MANAGE
resched unresched one.vm.action VM:ADMIN
migrate one.vm.migrate VM:ADMIN HOST:MANAGE
disk-snapshot one.vm.savedisk VM:MANAGE IMAGE:CREATE
disk-attach one.vm.attach VM:MANAGE IMAGE:USE
disk-detach one.vm.detach VM:MANAGE
nic-attach one.vm.attachnic VM:MANAGE NET:USE
nic-detach one.vm.detachnic VM:MANAGE
create one.vm.allocate VM:CREATE IMAGE:USE NET:USE
show one.vm.info VM:USE
chown chgrp one.vm.chown VM:MANAGE [USER:MANAGE] [GROUP:USE]
chmod one.vm.chmod VM:<MANAGE|ADMIN>
rename one.vm.rename VM:MANAGE
snapshot-create one.vm.snapshotcreate VM:MANAGE
snapshot-delete one.vm.snapshotdelete VM:MANAGE
snapshot-revert one.vm.snapshotrevert VM:MANAGE
resize one.vm.resize VM:MANAGE
update one.vm.update VM:MANAGE
recover one.vm.recover VM:ADMIN
list top one.vmpool.info VM:USE
Warning: The deploy action requires the user issuing the command to have VM:ADMIN rights. This user will
usually be the scheduler with the oneadmin credentials.
The scheduler deploys VMs to the Hosts over which the VM owner has MANAGE rights.
onetemplate
onetemplate command XML-RPC Method Auth. Request
update one.template.update TEMPLATE:MANAGE
instantiate one.template.instantiate TEMPLATE:USE [IMAGE:USE] [NET:USE]
create one.template.allocate TEMPLATE:CREATE
clone one.template.clone TEMPLATE:CREATE TEMPLATE:USE
delete one.template.delete TEMPLATE:MANAGE
show one.template.info TEMPLATE:USE
chown chgrp one.template.chown TEMPLATE:MANAGE [USER:MANAGE] [GROUP:USE]
chmod one.template.chmod TEMPLATE:<MANAGE|ADMIN>
rename one.template.rename TEMPLATE:MANAGE
list top one.templatepool.info TEMPLATE:USE
onehost
onehost command XML-RPC Method Auth. Request
enable disable one.host.enable HOST:ADMIN
update one.host.update HOST:ADMIN
create one.host.allocate HOST:CREATE
delete one.host.delete HOST:ADMIN
rename one.host.rename HOST:ADMIN
show one.host.info HOST:USE
list top one.hostpool.info HOST:USE
Warning: onehost sync is not performed by the core, it is done by the ruby command onehost.
onecluster
onecluster command XML-RPC Method Auth. Request
create one.cluster.allocate CLUSTER:CREATE
delete one.cluster.delete CLUSTER:ADMIN
update one.cluster.update CLUSTER:MANAGE
addhost one.cluster.addhost CLUSTER:ADMIN HOST:ADMIN
delhost one.cluster.delhost CLUSTER:ADMIN HOST:ADMIN
adddatastore one.cluster.adddatastore CLUSTER:ADMIN DATASTORE:ADMIN
deldatastore one.cluster.deldatastore CLUSTER:ADMIN DATASTORE:ADMIN
addvnet one.cluster.addvnet CLUSTER:ADMIN NET:ADMIN
delvnet one.cluster.delvnet CLUSTER:ADMIN NET:ADMIN
rename one.cluster.rename CLUSTER:MANAGE
show one.cluster.info CLUSTER:USE
list one.clusterpool.info CLUSTER:USE
onegroup
onegroup command XML-RPC Method Auth. Request
create one.group.allocate GROUP:CREATE
delete one.group.delete GROUP:ADMIN
show one.group.info GROUP:USE
update one.group.update GROUP:MANAGE
quota one.group.quota GROUP:ADMIN
add_provider one.group.addprovider GROUP:ADMIN ZONE:ADMIN CLUSTER:ADMIN
del_provider one.group.delprovider GROUP:ADMIN ZONE:ADMIN CLUSTER:ADMIN
list one.grouppool.info GROUP:USE
defaultquota one.groupquota.info one.groupquota.update Only for users in the oneadmin group
onevnet
onevnet command XML-RPC Method Auth. Request
addleases one.vn.addleases NET:MANAGE
rmleases one.vn.rmleases NET:MANAGE
hold one.vn.hold NET:MANAGE
release one.vn.release NET:MANAGE
update one.vn.update NET:MANAGE
create one.vn.allocate NET:CREATE
delete one.vn.delete NET:MANAGE
show one.vn.info NET:USE
chown chgrp one.vn.chown NET:MANAGE [USER:MANAGE] [GROUP:USE]
chmod one.vn.chmod NET:<MANAGE|ADMIN>
rename one.vn.rename NET:MANAGE
list one.vnpool.info NET:USE
oneuser
oneuser command XML-RPC Method Auth. Request
create one.user.allocate USER:CREATE
delete one.user.delete USER:ADMIN
show one.user.info USER:USE
passwd one.user.passwd USER:MANAGE
update one.user.update USER:MANAGE
chauth one.user.chauth USER:ADMIN
quota one.user.quota USER:ADMIN
chgrp one.user.chgrp USER:MANAGE GROUP:USE
addgroup one.user.addgroup USER:MANAGE GROUP:MANAGE
delgroup one.user.delgroup USER:MANAGE GROUP:MANAGE
encode
list one.userpool.info USER:USE
defaultquota one.userquota.info one.userquota.update Only for users in the oneadmin group
onedatastore
onedatastore command XML-RPC Method Auth. Request
create one.datastore.allocate DATASTORE:CREATE
delete one.datastore.delete DATASTORE:ADMIN
show one.datastore.info DATASTORE:USE
update one.datastore.update DATASTORE:MANAGE
rename one.datastore.rename DATASTORE:MANAGE
chown chgrp one.datastore.chown DATASTORE:MANAGE [USER:MANAGE] [GROUP:USE]
chmod one.datastore.chmod DATASTORE:<MANAGE|ADMIN>
list one.datastorepool.info DATASTORE:USE
oneimage
oneimage command XML-RPC Method Auth. Request
persistent nonpersistent one.image.persistent IMAGE:MANAGE
enable disable one.image.enable IMAGE:MANAGE
chtype one.image.chtype IMAGE:MANAGE
update one.image.update IMAGE:MANAGE
create one.image.allocate IMAGE:CREATE DATASTORE:USE
clone one.image.clone IMAGE:CREATE IMAGE:USE
delete one.image.delete IMAGE:MANAGE
show one.image.info IMAGE:USE
chown chgrp one.image.chown IMAGE:MANAGE [USER:MANAGE] [GROUP:USE]
chmod one.image.chmod IMAGE:<MANAGE|ADMIN>
rename one.image.rename IMAGE:MANAGE
list top one.imagepool.info IMAGE:USE
onezone
onezone command XML-RPC Method Auth. Request
create one.zone.allocate ZONE:CREATE
rename one.zone.rename ZONE:MANAGE
update one.zone.update ZONE:MANAGE
delete one.zone.delete ZONE:ADMIN
show one.zone.info ZONE:USE
list one.zonepool.info ZONE:USE
set ZONE:USE
oneacl
oneacl command XML-RPC Method Auth. Request
create one.acl.addrule ACL:MANAGE
delete one.acl.delrule ACL:MANAGE
list one.acl.info ACL:MANAGE
oneacct
command XML-RPC Method Auth. Request
oneacct one.vmpool.accounting VM:USE
documents
XML-RPC Method Auth. Request
one.document.update DOCUMENT:MANAGE
one.document.allocate DOCUMENT:CREATE
one.document.delete DOCUMENT:MANAGE
one.document.info DOCUMENT:USE
one.document.chown DOCUMENT:MANAGE [USER:MANAGE] [GROUP:USE]
one.document.chmod DOCUMENT:<MANAGE|ADMIN>
one.document.rename DOCUMENT:MANAGE
one.documentpool.info DOCUMENT:USE
system
command XML-RPC Method Auth. Request
one.system.version
one.system.config Only for users in the oneadmin group
3.1.2 Actions for Templates Management
one.template.allocate
Description: Allocates a new template in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String A string containing the template contents. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated resource ID / The error string.
OUT Int Error code.
one.template.clone
Description: Clones an existing virtual machine template.
Parameters
Type Data Type Description
IN String The session string.
IN Int The ID of the template to be cloned.
IN String Name for the new template.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The new template ID / The error string.
OUT Int Error code.
one.template.delete
Description: Deletes the given template from the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.template.instantiate
Description: Instantiates a new virtual machine from a template.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String Name for the new VM instance. If it is an empty string, OpenNebula will assign one automatically.
IN Boolean False to create the VM on pending (default), True to create it on hold.
IN String A string containing an extra template to be merged with the one being instantiated. It can be empty. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The new virtual machine ID / The error string.
OUT Int Error code.
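An illustrative call, with hypothetical template ID, VM name and endpoint:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
session = 'oneadmin:opennebula'   # assumption

# Instantiate template 5 as "web-01", on pending, merging extra attributes
success, vm_id, errno = client.call('one.template.instantiate', session,
                                    5, 'web-01', false, 'MEMORY = 2048')
puts success ? "VM #{vm_id} created" : "Error #{errno}: #{vm_id}"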
one.template.update
Description: Replaces the template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.template.chmod
Description: Changes the permission bits of a template.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int USER USE bit. If set to -1, it will not change.
IN Int USER MANAGE bit. If set to -1, it will not change.
IN Int USER ADMIN bit. If set to -1, it will not change.
IN Int GROUP USE bit. If set to -1, it will not change.
IN Int GROUP MANAGE bit. If set to -1, it will not change.
IN Int GROUP ADMIN bit. If set to -1, it will not change.
IN Int OTHER USE bit. If set to -1, it will not change.
IN Int OTHER MANAGE bit. If set to -1, it will not change.
IN Int OTHER ADMIN bit. If set to -1, it will not change.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
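For example, the following sketch (hypothetical ID and endpoint) grants the group USE rights on a template and leaves every other bit untouched:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
session = 'oneadmin:opennebula'   # assumption

# Bit order: user USE/MANAGE/ADMIN, group USE/MANAGE/ADMIN, other USE/MANAGE/ADMIN
success, result, errno = client.call('one.template.chmod', session, 5,
                                     -1, -1, -1, 1, -1, -1, -1, -1, -1)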
one.template.chown
Description: Changes the ownership of a template.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The User ID of the new owner. If set to -1, the owner is not changed.
IN Int The Group ID of the new group. If set to -1, the group is not changed.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.template.rename
Description: Renames a template.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new name.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.template.info
Description: Retrieves information for the template.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.templatepool.info
Description: Retrieves information for all or part of the Resources in the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int Filter flag: <= -3 the connected user's resources; -2 all resources; -1 the connected user's and his group's resources; >= 0 the resources of the user with that UID.
IN Int When the next parameter is >= -1 this is the Range start ID. Can be -1. For smaller values this is the offset used for pagination.
IN Int For values >= -1 this is the Range end ID. Can be -1 to get until the last ID. For values < -1 this is the page size used for pagination.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
The range can be used to retrieve a subset of the pool, from the start to the end ID. To retrieve the complete pool, use (-1, -1); to retrieve all the pool from a specific ID to the last one, use (<id>, -1), and to retrieve the first elements up to an ID, use (0, <id>).
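For instance, a sketch retrieving the complete template pool for all users (filter flag -2, full range), with a hypothetical endpoint and credentials:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
session = 'oneadmin:opennebula'   # assumption

success, xml, errno = client.call('one.templatepool.info', session, -2, -1, -1)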
3.1.3 Actions for Virtual Machine Management
The VM Life Cycle is explained in this diagram: [Figure: VM life cycle state diagram]
It contains all the LifeCycleManager states, and the transitions triggered by the onevm commands. It is intended to be
consulted by developers.
The simplied diagram used in the Virtual Machine Instances documentation uses a smaller number of state names.
These names are the ones used by onevm list, e.g. prolog, prolog_migrate and prolog_resume are all presented as
prol. It is intended as a reference for end-users.
one.vm.allocate
Description: Allocates a new virtual machine in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String A string containing the template for the vm. Syntax can be the usual attribute=value or XML.
IN Boolean False to create the VM on pending (default), True to create it on hold.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated resource ID / The error string.
OUT Int Error code.
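A sketch allocating a VM directly from a template string (values are hypothetical):

require 'xmlrpc/client'

client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
session = 'oneadmin:opennebula'   # assumption

template = "NAME = test-vm\nCPU = 1\nMEMORY = 128"

# false: create the VM on pending rather than on hold
success, vm_id, errno = client.call('one.vm.allocate', session, template, false)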
one.vm.deploy
Description: Initiates the deployment of the VM with the given ID on the target host.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The Host ID of the target host where the VM will be deployed.
IN Int The Datastore ID of the target system datastore where the VM will be deployed. It is optional, and can be set to -1 to let OpenNebula choose the datastore.
IN Boolean true to enforce that the Host capacity is not overcommitted.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.action
Description: submits an action to be performed on a virtual machine.
Parameters
Type Data Type Description
IN String The session string.
IN String the action name to be performed, see below.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
The action String must be one of the following:
shutdown
shutdown-hard
hold
release
stop
suspend
resume
boot
delete
delete-recreate
reboot
reboot-hard
resched
unresched
poweroff
poweroff-hard
undeploy
undeploy-hard
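For example, gracefully shutting down a VM (the ID and endpoint are hypothetical):

require 'xmlrpc/client'

client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
session = 'oneadmin:opennebula'   # assumption

success, result, errno = client.call('one.vm.action', session, 'shutdown', 42)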
one.vm.migrate
Description: migrates one virtual machine (vid) to the target host (hid).
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The target host ID (hid) where we want to migrate the VM.
IN Boolean If true a live migration is performed, otherwise a regular (cold) migration.
IN Boolean true to enforce that the Host capacity is not overcommitted.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.savedisk
Description: Sets the disk to be saved in the given image.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int Disk ID of the disk we want to save.
IN String Name for the new Image where the disk will be saved.
IN String Type for the new Image. If it is an empty string, then the default one will be used. See the existing types in the Image template reference.
IN Boolean True to save the disk immediately, false will perform the operation when the VM shuts down.
IN Boolean True to also clone the VM's originating Template and replace its disk with the saved image.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The new allocated Image ID / The error string. If the Template was cloned, the new Template ID is not returned. The Template can be found by name: <image_name>-<image_id>
OUT Int Error code.
one.vm.attach
Description: Attaches a new disk to the virtual machine
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String A string containing a single DISK vector attribute. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.detach
Description: Detaches a disk from a virtual machine
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The disk ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.attachnic
Description: Attaches a new network interface to the virtual machine
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String A string containing a single NIC vector attribute. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.detachnic
Description: Detaches a network interface from a virtual machine
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The nic ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.chmod
Description: Changes the permission bits of a virtual machine.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int USER USE bit. If set to -1, it will not change.
IN Int USER MANAGE bit. If set to -1, it will not change.
IN Int USER ADMIN bit. If set to -1, it will not change.
IN Int GROUP USE bit. If set to -1, it will not change.
IN Int GROUP MANAGE bit. If set to -1, it will not change.
IN Int GROUP ADMIN bit. If set to -1, it will not change.
IN Int OTHER USE bit. If set to -1, it will not change.
IN Int OTHER MANAGE bit. If set to -1, it will not change.
IN Int OTHER ADMIN bit. If set to -1, it will not change.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vm.chown
Description: Changes the ownership of a virtual machine.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The User ID of the new owner. If set to -1, the owner is not changed.
IN Int The Group ID of the new group. If set to -1, the group is not changed.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vm.rename
Description: Renames a virtual machine
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new name.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.snapshotcreate
Description: Creates a new virtual machine snapshot
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new snapshot name. It can be empty.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The new snapshot ID / The error string.
OUT Int Error code.
one.vm.snapshotrevert
Description: Reverts a virtual machine to a snapshot
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The snapshot ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.snapshotdelete
Description: Deletes a virtual machine snapshot
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The snapshot ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.resize
Description: Changes the capacity of the virtual machine
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String Template containing the new capacity elements CPU, VCPU, MEMORY. If one of them is not present, or its value is 0, it will not be resized.
IN Boolean true to enforce that the Host capacity is not overcommitted. This parameter is only acknowledged for users in the oneadmin group; Host capacity will always be enforced for regular users.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The VM ID / The error string.
OUT Int Error code.
one.vm.update
Description: Replaces the user template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new user template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vm.recover
Description: Recovers a stuck VM that is waiting for a driver operation. The recovery may be done by failing
or succeeding the pending operation. You need to manually check the vm status on the host, to decide if the
operation was successful or not.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Boolean Recover the VM by succeeding (true) or failing (false) the pending action.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vm.info
Description: Retrieves information for the virtual machine.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.vm.monitoring
Description: Returns the virtual machine monitoring records.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The monitoring information string / The error string.
OUT Int Error code.
The monitoring information returned is a list of VM elements. Each VM element contains the complete xml of the
VM with the updated information returned by the poll action.
For example:
<MONITORING_DATA>
<VM>
...
<LAST_POLL>123</LAST_POLL>
...
</VM>
<VM>
...
<LAST_POLL>456</LAST_POLL>
...
</VM>
</MONITORING_DATA>
one.vmpool.info
Description: Retrieves information for all or part of the VMs in the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int Filter flag: <= -3 the connected user's resources; -2 all resources; -1 the connected user's and his group's resources; >= 0 the resources of the user with that UID.
IN Int When the next parameter is >= -1 this is the Range start ID. Can be -1. For smaller values this is the offset used for pagination.
IN Int For values >= -1 this is the Range end ID. Can be -1 to get until the last ID. For values < -1 this is the page size used for pagination.
IN Int VM state to filter by.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
The range can be used to retrieve a subset of the pool, from the start to the end ID. To retrieve the complete pool, use (-1, -1); to retrieve all the pool from a specific ID to the last one, use (<id>, -1), and to retrieve the first elements up to an ID, use (0, <id>).
The state filter can be one of the following:
Value State
-2 Any state, including DONE
-1 Any state, except DONE
0 INIT
1 PENDING
2 HOLD
3 ACTIVE
4 STOPPED
5 SUSPENDED
6 DONE
7 FAILED
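For example, listing all ACTIVE VMs (state 3) of every user over the complete range, with hypothetical credentials:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
session = 'oneadmin:opennebula'   # assumption

success, xml, errno = client.call('one.vmpool.info', session, -2, -1, -1, 3)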
one.vmpool.monitoring
Description: Returns all the virtual machine monitoring records.
Parameters
Type Data Type Description
IN String The session string.
IN Int Filter flag: <= -3 the connected user's resources; -2 all resources; -1 the connected user's and his group's resources; >= 0 the resources of the user with that UID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
See one.vm.monitoring.
Sample output:
<MONITORING_DATA>
<VM>
<ID>0</ID>
<LAST_POLL>123</LAST_POLL>
...
</VM>
<VM>
<ID>0</ID>
<LAST_POLL>456</LAST_POLL>
...
</VM>
<VM>
<ID>3</ID>
<LAST_POLL>123</LAST_POLL>
...
</VM>
<VM>
<ID>3</ID>
<LAST_POLL>456</LAST_POLL>
...
</VM>
</MONITORING_DATA>
one.vmpool.accounting
Description: Returns the virtual machine history records.
Parameters
Type Data Type Description
IN String The session string.
IN Int Filter flag: <= -3 the connected user's resources; -2 all resources; -1 the connected user's and his group's resources; >= 0 the resources of the user with that UID.
IN Int Start time for the time interval. Can be -1, in which case the time interval won't have a left boundary.
IN Int End time for the time interval. Can be -1, in which case the time interval won't have a right boundary.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
The XML output is explained in detail in the oneacct guide.
3.1.4 Actions for Hosts Management
one.host.allocate
Description: Allocates a new host in OpenNebula
Parameters
Type Data Type Description
IN String The session string.
IN String Hostname of the machine we want to add.
IN String The name of the information manager (im_mad_name); these values are taken from oned.conf, with the tag name IM_MAD (name).
IN String The name of the virtual machine manager (vmm_mad_name); these values are taken from oned.conf, with the tag name VM_MAD (name).
IN String The name of the virtual network manager (vnm_mad_name); see the Networking Subsystem documentation.
IN Int The cluster ID. If it is -1, this host won't be added to any cluster.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated Host ID / The error string.
OUT Int Error code.
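An illustrative call adding a KVM host outside any cluster; the hostname is hypothetical, and the driver names are assumptions that must match the IM_MAD/VM_MAD sections of your oned.conf:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
session = 'oneadmin:opennebula'   # assumption

success, host_id, errno = client.call('one.host.allocate', session,
                                      'node01.example.org',
                                      'kvm',    # im_mad_name, from oned.conf
                                      'kvm',    # vmm_mad_name, from oned.conf
                                      'dummy',  # vnm_mad_name
                                      -1)       # do not add to any cluster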
one.host.delete
Description: Deletes the given host from the pool
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.host.enable
Description: Enables or disables the given host
Parameters
Type Data Type Description
IN String The session string.
IN Int The Host ID.
IN Boolean Set it to true/false to enable or disable the target Host.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.host.update
Description: Replaces the hosts template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.host.rename
Description: Renames a host.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new name.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.host.info
Description: Retrieves information for the host.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.host.monitoring
Description: Returns the host monitoring records.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The monitoring information string / The error string.
OUT Int Error code.
The monitoring information returned is a list of HOST elements. Each HOST element contains the complete xml of
the host with the updated information returned by the poll action.
For example:
<MONITORING_DATA>
<HOST>
...
<LAST_MON_TIME>123</LAST_MON_TIME>
...
</HOST>
<HOST>
...
<LAST_MON_TIME>456</LAST_MON_TIME>
...
</HOST>
</MONITORING_DATA>
one.hostpool.info
Description: Retrieves information for all the hosts in the pool.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.hostpool.monitoring
Description: Returns all the host monitoring records.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
Sample output:
<MONITORING_DATA>
<HOST>
<ID>0</ID>
<LAST_MON_TIME>123</LAST_MON_TIME>
...
</HOST>
<HOST>
<ID>0</ID>
<LAST_MON_TIME>456</LAST_MON_TIME>
...
</HOST>
<HOST>
<ID>3</ID>
<LAST_MON_TIME>123</LAST_MON_TIME>
...
</HOST>
<HOST>
<ID>3</ID>
<LAST_MON_TIME>456</LAST_MON_TIME>
...
</HOST>
</MONITORING_DATA>
3.1.5 Actions for Cluster Management
one.cluster.allocate
Description: Allocates a new cluster in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String Name for the new cluster.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated cluster ID / The error string.
OUT Int Error code.
one.cluster.delete
Description: Deletes the given cluster from the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.cluster.update
Description: Replaces the cluster template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.cluster.addhost
Description: Adds a host to the given cluster.
Parameters
Type Data Type Description
IN String The session string.
IN Int The cluster ID.
IN Int The host ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.cluster.delhost
Description: Removes a host from the given cluster.
Parameters
Type Data Type Description
IN String The session string.
IN Int The cluster ID.
IN Int The host ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.cluster.adddatastore
Description: Adds a datastore to the given cluster.
Parameters
Type Data Type Description
IN String The session string.
IN Int The cluster ID.
IN Int The datastore ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.cluster.deldatastore
Description: Removes a datastore from the given cluster.
Parameters
Type Data Type Description
IN String The session string.
IN Int The cluster ID.
IN Int The datastore ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.cluster.addvnet
Description: Adds a vnet to the given cluster.
Parameters
Type Data Type Description
IN String The session string.
IN Int The cluster ID.
IN Int The vnet ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.cluster.delvnet
Description: Removes a vnet from the given cluster.
Parameters
Type Data Type Description
IN String The session string.
IN Int The cluster ID.
IN Int The vnet ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.cluster.rename
Description: Renames a cluster.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new name.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.cluster.info
Description: Retrieves information for the cluster.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.clusterpool.info
Description: Retrieves information for all the clusters in the pool.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
3.1.6 Actions for Virtual Network Management
one.vn.allocate
Description: Allocates a new virtual network in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String A string containing the template of the virtual network. Syntax can be the usual attribute=value or XML.
IN Int The cluster ID. If it is -1, this virtual network won't be added to any cluster.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated resource ID / The error string.
OUT Int Error code.
one.vn.delete
Description: Deletes the given virtual network from the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vn.addleases
Description: Adds a new lease to the virtual network. Only available for FIXED networks.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String Template of the lease to add. Syntax can be the usual attribute=value or XML, see below.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
Examples of valid templates:
LEASES=[IP=192.168.0.5]
LEASES=[IP=192.168.0.5, MAC=50:20:20:20:20:20]
<TEMPLATE>
<LEASES>
<IP>192.168.0.5</IP>
</LEASES>
</TEMPLATE>
<TEMPLATE>
<LEASES>
<IP>192.168.0.5</IP>
<MAC>50:20:20:20:20:20</MAC>
</LEASES>
</TEMPLATE>
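A sketch adding one of the leases above to a FIXED virtual network (the network ID and endpoint are hypothetical):

require 'xmlrpc/client'

client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
session = 'oneadmin:opennebula'   # assumption

success, result, errno = client.call('one.vn.addleases', session, 7,
                                     'LEASES=[IP=192.168.0.5]')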
one.vn.rmleases
Description: Removes a lease from the virtual network. Only available for FIXED networks.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String Template of the lease to remove. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vn.hold
Description: Holds a virtual network Lease as used.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String Template of the lease to hold, e.g. LEASES=[IP=192.168.0.5].
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vn.release
Description: Releases a virtual network Lease on hold.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String Template of the lease to release, e.g. LEASES=[IP=192.168.0.5].
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vn.update
Description: Replaces the virtual network template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vn.chmod
Description: Changes the permission bits of a virtual network.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int USER USE bit. If set to -1, it will not change.
IN Int USER MANAGE bit. If set to -1, it will not change.
IN Int USER ADMIN bit. If set to -1, it will not change.
IN Int GROUP USE bit. If set to -1, it will not change.
IN Int GROUP MANAGE bit. If set to -1, it will not change.
IN Int GROUP ADMIN bit. If set to -1, it will not change.
IN Int OTHER USE bit. If set to -1, it will not change.
IN Int OTHER MANAGE bit. If set to -1, it will not change.
IN Int OTHER ADMIN bit. If set to -1, it will not change.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vn.chown
Description: Changes the ownership of a virtual network.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The User ID of the new owner. If set to -1, the owner is not changed.
IN Int The Group ID of the new group. If set to -1, the group is not changed.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vn.rename
Description: Renames a virtual network.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new name.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.vn.info
Description: Retrieves information for the virtual network.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.vnpool.info
Description: Retrieves information for all or part of the virtual networks in the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int Filter flag: <= -3 the connected user's resources; -2 all resources; -1 the connected user's and his group's resources; >= 0 the resources of the user with that UID.
IN Int When the next parameter is >= -1 this is the Range start ID. Can be -1. For smaller values this is the offset used for pagination.
IN Int For values >= -1 this is the Range end ID. Can be -1 to get until the last ID. For values < -1 this is the page size used for pagination.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
The range can be used to retrieve a subset of the pool, from the start to the end ID. To retrieve the complete pool, use (-1, -1); to retrieve all the pool from a specific ID to the last one, use (<id>, -1), and to retrieve the first elements up to an ID, use (0, <id>).
3.1.7 Actions for Datastore Management
one.datastore.allocate
Description: Allocates a new datastore in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String A string containing the template of the datastore. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated resource ID / The error string.
OUT Int Error code.
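A sketch of the call with a deliberately small template; the NAME, DS_MAD and TM_MAD attributes follow the usual datastore template syntax, and all values are hypothetical:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2("http://localhost:2633/RPC2")
session = "oneadmin:onepass"   # hypothetical ONE_AUTH contents

template = "NAME   = nfs_images\n" \
           "DS_MAD = fs\n"         \
           "TM_MAD = shared"

rc, id, errno = client.call("one.datastore.allocate", session, template)
puts rc ? "Datastore #{id} created" : "Error: #{id}"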
one.datastore.delete
Description: Deletes the given datastore from the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.datastore.update
Description: Replaces the datastore template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.datastore.chmod
Description: Changes the permission bits of a datastore.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int USER USE bit. If set to -1, it will not change.
IN Int USER MANAGE bit. If set to -1, it will not change.
IN Int USER ADMIN bit. If set to -1, it will not change.
IN Int GROUP USE bit. If set to -1, it will not change.
IN Int GROUP MANAGE bit. If set to -1, it will not change.
IN Int GROUP ADMIN bit. If set to -1, it will not change.
IN Int OTHER USE bit. If set to -1, it will not change.
IN Int OTHER MANAGE bit. If set to -1, it will not change.
IN Int OTHER ADMIN bit. If set to -1, it will not change.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.datastore.chown
Description: Changes the ownership of a datastore.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The User ID of the new owner. If set to -1, the owner is not changed.
IN Int The Group ID of the new group. If set to -1, the group is not changed.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.datastore.rename
Description: Renames a datastore.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new name.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The datastore ID / The error string.
OUT Int Error code.
one.datastore.info
Description: Retrieves information for the datastore.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.datastorepool.info
Description: Retrieves information for all or part of the datastores in the pool.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
3.1.8 Actions for Image Management
one.image.allocate
Description: Allocates a new image in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String A string containing the template of the image. Syntax can be the usual attribute=value or XML.
IN Int The datastore ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated resource ID / The error string.
OUT Int Error code.
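For instance, a sketch registering an OS image in datastore 1 (names and path hypothetical):

require 'xmlrpc/client'

client  = XMLRPC::Client.new2("http://localhost:2633/RPC2")
session = "oneadmin:onepass"   # hypothetical ONE_AUTH contents

template = "NAME = ubuntu-12.04\n" \
           "PATH = /tmp/ubuntu.qcow2\n" \
           "TYPE = OS"

rc, id, errno = client.call("one.image.allocate", session, template, 1)  # datastore 1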
one.image.clone
Description: Clones an existing image.
Parameters
Type Data Type Description
IN String The session string.
IN Int The ID of the image to be cloned.
IN String Name for the new image.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The new image ID / The error string.
OUT Int Error code.
one.image.delete
Description: Deletes the given image from the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.image.enable
Description: Enables or disables an image.
Parameters
Type Data Type Description
IN String The session string.
IN Int The Image ID.
IN Boolean True for enabling, false for disabling.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The Image ID / The error string.
OUT Int Error code.
one.image.persistent
Description: Sets the Image as persistent or not persistent.
Parameters
Type Data Type Description
IN String The session string.
IN Int The Image ID.
IN Boolean True for persistent, false for non-persistent.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The Image ID / The error string.
OUT Int Error code.
one.image.chtype
Description: Changes the type of an Image.
Parameters
Type Data Type Description
IN String The session string.
IN Int The Image ID.
IN String New type for the Image. See the existing types in the Image template reference.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The Image ID / The error string.
OUT Int Error code.
one.image.update
Description: Replaces the image template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.image.chmod
Description: Changes the permission bits of an image.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int USER USE bit. If set to -1, it will not change.
IN Int USER MANAGE bit. If set to -1, it will not change.
IN Int USER ADMIN bit. If set to -1, it will not change.
IN Int GROUP USE bit. If set to -1, it will not change.
IN Int GROUP MANAGE bit. If set to -1, it will not change.
IN Int GROUP ADMIN bit. If set to -1, it will not change.
IN Int OTHER USE bit. If set to -1, it will not change.
IN Int OTHER MANAGE bit. If set to -1, it will not change.
IN Int OTHER ADMIN bit. If set to -1, it will not change.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.image.chown
Description: Changes the ownership of an image.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The User ID of the new owner. If set to -1, the owner is not changed.
IN Int The Group ID of the new group. If set to -1, the group is not changed.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.image.rename
Description: Renames an image.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new name.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The image ID / The error string.
OUT Int Error code.
one.image.info
Description: Retrieves information for the image.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.imagepool.info
Description: Retrieves information for all or part of the images in the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int Filter flag - <= -3: Connected user's resources - -2: All resources - -1: Connected user's and his group's resources - >= 0: UID User's Resources
IN Int When the next parameter is >= -1 this is the Range start ID. Can be -1. For smaller values this is the offset used for pagination.
IN Int For values >= -1 this is the Range end ID. Can be -1 to get until the last ID. For values < -1 this is the page size used for pagination.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
The range can be used to retrieve a subset of the pool, from the start to the end ID. To retrieve the complete pool, use (-1, -1); to retrieve the pool from a specific ID to the last one, use (<id>, -1); and to retrieve the first elements up to an ID, use (0, <id>).
3.1.9 Actions for User Management
one.user.allocate
Description: Allocates a new user in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String Username for the new user.
IN String Password for the new user.
IN String Authentication driver for the new user. If it is an empty string, the default (core) driver is used.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated User ID / The error string.
OUT Int Error code.
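A sketch creating a user with the default core authentication (username and password hypothetical):

require 'xmlrpc/client'

client  = XMLRPC::Client.new2("http://localhost:2633/RPC2")
session = "oneadmin:onepass"   # hypothetical ONE_AUTH contents

# An empty driver string selects the default (core) authentication
rc, uid, errno = client.call("one.user.allocate", session, "jdoe", "s3cret", "")
puts rc ? "Created user #{uid}" : "Error: #{uid}"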
one.user.delete
Description: Deletes the given user from the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.user.passwd
Description: Changes the password for the given user.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new password
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The User ID / The error string.
OUT Int Error code.
one.user.update
Description: Replaces the user template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.user.chauth
Description: Changes the authentication driver and the password for the given user.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new authentication driver.
IN String The new password. If it is an empty string, the password is not changed.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The User ID / The error string.
OUT Int Error code.
one.user.quota
Description: Sets the user quota limits.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new quota template contents. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.user.chgrp
Description: Changes the group of the given user.
Parameters
Type Data Type Description
IN String The session string.
IN Int The User ID.
IN Int The Group ID of the new group.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The User ID / The error string.
OUT Int Error code.
one.user.addgroup
Description: Adds the User to a secondary group.
Parameters
Type Data Type Description
IN String The session string.
IN Int The User ID.
IN Int The Group ID of the new group.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The User ID / The error string.
OUT Int Error code.
one.user.delgroup
Description: Removes the User from a secondary group.
Parameters
Type Data Type Description
IN String The session string.
IN Int The User ID.
IN Int The Group ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The User ID / The error string.
OUT Int Error code.
one.user.info
Description: Retrieves information for the user.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID. If it is -1, the connected user's own info is returned.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.userpool.info
Description: Retrieves information for all the users in the pool.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.userquota.info
Description: Returns the default user quota limits.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The quota template contents / The error string.
OUT Int Error code.
one.userquota.update
Description: Updates the default user quota limits.
Parameters
Type Data Type Description
IN String The session string.
IN String The new quota template contents. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The quota template contents / The error string.
OUT Int Error code.
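For example, a sketch that caps every user at 10 VMs, 8 GB of memory and 20 CPUs by default; the VM quota attributes follow the quota template syntax and the limits are hypothetical:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2("http://localhost:2633/RPC2")
session = "oneadmin:onepass"   # hypothetical ONE_AUTH contents

quota = "VM = [ VMS = 10, MEMORY = 8192, CPU = 20 ]"

rc, body, errno = client.call("one.userquota.update", session, quota)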
3.1.10 Actions for Group Management
one.group.allocate
Description: Allocates a new group in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String Name for the new group.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated Group ID / The error string.
OUT Int Error code.
one.group.delete
Description: Deletes the given group from the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.group.info
Description: Retrieves information for the group.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID. If it is -1, the connected user's group info is returned.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.group.update
Description: Replaces the group template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.group.quota
Description: Sets the group quota limits.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new quota template contents. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.group.addprovider
Description: Adds a resource provider to the group.
Parameters
Type Data Type Description
IN String The session string.
IN Int The group ID.
IN Int The zone ID.
IN Int The cluster ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.group.delprovider
Description: Deletes a resource provider from the group.
Parameters
Type Data Type Description
IN String The session string.
IN Int The group ID.
IN Int The zone ID.
IN Int The cluster ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.grouppool.info
Description: Retrieves information for all the groups in the pool.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.groupquota.info
Description: Returns the default group quota limits.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The quota template contents / The error string.
OUT Int Error code.
one.groupquota.update
Description: Updates the default group quota limits.
Parameters
Type Data Type Description
IN String The session string.
IN String The new quota template contents. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The quota template contents / The error string.
OUT Int Error code.
3.1.11 Actions for Zone Management
one.zone.allocate
Description: Allocates a new zone in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String A string containing the template of the zone. Syntax can be the usual attribute=value or XML.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated resource ID / The error string.
OUT Int Error code.
one.zone.delete
Description: Deletes the given zone from the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.zone.update
Description: Replaces the zone template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.zone.rename
Description: Renames a zone.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new name.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.zone.info
Description: Retrieves information for the zone.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.zonepool.info
Description: Retrieves information for all the zones in the pool.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
3.1.12 Actions for ACL Rules Management
one.acl.addrule
Description: Adds a new ACL rule.
Parameters
Type Data Type Description
IN String The session string.
IN String User component of the new rule. A string containing a hex number.
IN String Resource component of the new rule. A string containing a hex number.
IN String Rights component of the new rule. A string containing a hex number.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated ACL rule ID / The error string.
OUT Int Error code.
To build the hex numbers required to create a new rule, we recommend reading the Ruby or Java OCA code.
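As a hedged illustration, the sketch below composes the three hex strings for a rule equivalent to "#5 VM+NET/@103 USE+MANAGE"; the bit masks are the ones used by the 4.x AclRule sources, but you should verify them against your version:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2("http://localhost:2633/RPC2")
session = "oneadmin:onepass"   # hypothetical ONE_AUTH contents

# ID-type bits, shared by the user and resource components (verify per version)
UID = 0x100000000; GID = 0x200000000; ALL = 0x400000000
# Resource-type bits (verify per version)
VM = 0x1000000000; NET = 0x4000000000
# Rights bits
USE = 0x1; MANAGE = 0x2

user     = (UID | 5).to_s(16)              # "#5": the user with ID 5
resource = (VM | NET | GID | 103).to_s(16) # VMs and vnets of group 103
rights   = (USE | MANAGE).to_s(16)

rc, rule_id, errno = client.call("one.acl.addrule", session, user, resource, rights)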
one.acl.delrule
Description: Deletes an ACL rule.
Parameters
Type Data Type Description
IN String The session string.
IN Int ACL rule ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The ACL rule ID / The error string.
OUT Int Error code.
one.acl.info
Description: Returns the complete ACL rule set.
Parameters
Type Data Type Description
IN String The session string.
IN Int ACL rule ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
3.1.13 Actions for Document Management
one.document.allocate
Description: Allocates a new document in OpenNebula.
Parameters
Type Data Type Description
IN String The session string.
IN String A string containing the document template contents. Syntax can be the usual attribute=value or XML.
IN Int The document type (*).
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The allocated resource ID / The error string.
OUT Int Error code.
(*) Type is an integer value used to allow dynamic pool compartmentalization.
Let's say you want to store documents representing Chef recipes and EC2 security groups; you would allocate documents of each kind with a different type. This type is then used in the one.documentpool.info method to filter the results.
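For example, a sketch (the type IDs are application-defined and hypothetical, as are the template contents) that stores both kinds of documents and later lists only the Chef recipes:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2("http://localhost:2633/RPC2")
session = "oneadmin:onepass"   # hypothetical ONE_AUTH contents

CHEF_RECIPE = 100   # application-defined document types (hypothetical)
SEC_GROUP   = 101

rc, id, errno = client.call("one.document.allocate", session,
                            "NAME = web_recipe", CHEF_RECIPE)
rc, id, errno = client.call("one.document.allocate", session,
                            "NAME = web_sg", SEC_GROUP)

# Whole pool, all owners, but only documents of type CHEF_RECIPE:
rc, xml, errno = client.call("one.documentpool.info", session, -2, -1, -1, CHEF_RECIPE)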
one.document.clone
Description: Clones an existing virtual machine document.
Parameters
Type Data Type Description
IN String The session string.
IN Int The ID of the document to be cloned.
IN String Name for the new document.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The new document ID / The error string.
OUT Int Error code.
one.document.delete
Description: Deletes the given document from the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.document.update
Description: Replaces the document template contents.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new document template contents. Syntax can be the usual attribute=value or XML.
IN Int Update type: 0: Replace the whole template. 1: Merge new template with the existing one.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.document.chmod
Description: Changes the permission bits of a document.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int USER USE bit. If set to -1, it will not change.
IN Int USER MANAGE bit. If set to -1, it will not change.
IN Int USER ADMIN bit. If set to -1, it will not change.
IN Int GROUP USE bit. If set to -1, it will not change.
IN Int GROUP MANAGE bit. If set to -1, it will not change.
IN Int GROUP ADMIN bit. If set to -1, it will not change.
IN Int OTHER USE bit. If set to -1, it will not change.
IN Int OTHER MANAGE bit. If set to -1, it will not change.
IN Int OTHER ADMIN bit. If set to -1, it will not change.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.document.chown
Description: Changes the ownership of a document.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN Int The User ID of the new owner. If set to -1, the owner is not changed.
IN Int The Group ID of the new group. If set to -1, the group is not changed.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The resource ID / The error string.
OUT Int Error code.
one.document.rename
Description: Renames a document.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
IN String The new name.
OUT Boolean true or false, indicating whether the request succeeded.
OUT Int/String The document ID / The error string.
OUT Int Error code.
one.document.info
Description: Retrieves information for the document.
Parameters
Type Data Type Description
IN String The session string.
IN Int The object ID.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
one.documentpool.info
Description: Retrieves information for all or part of the Resources in the pool.
Parameters
Type Data Type Description
IN String The session string.
IN Int Filter flag - <= -3: Connected user's resources - -2: All resources - -1: Connected user's and his group's resources - >= 0: UID User's Resources
IN Int When the next parameter is >= -1 this is the Range start ID. Can be -1. For smaller values this is the offset used for pagination.
IN Int For values >= -1 this is the Range end ID. Can be -1 to get until the last ID. For values < -1 this is the page size used for pagination.
IN Int The document type.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The information string / The error string.
OUT Int Error code.
The range can be used to retrieve a subset of the pool, from the start to the end ID. To retrieve the complete pool, use (-1, -1); to retrieve the pool from a specific ID to the last one, use (<id>, -1); and to retrieve the first elements up to an ID, use (0, <id>).
3.1.14 System Methods
one.system.version
Description: Returns the OpenNebula core version.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The OpenNebula version, e.g. 4.6.0.
OUT Int Error code.
one.system.config
Description: Returns the OpenNebula configuration.
Parameters
Type Data Type Description
IN String The session string.
OUT Boolean true or false, indicating whether the request succeeded.
OUT String The loaded oned.conf file, in XML form.
OUT Int Error code.
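A quick way to check connectivity and the daemon version with the hypothetical Ruby client used in the earlier sketches:

require 'xmlrpc/client'

client  = XMLRPC::Client.new2("http://localhost:2633/RPC2")
session = "oneadmin:onepass"   # hypothetical ONE_AUTH contents

rc, version, errno = client.call("one.system.version", session)
puts "oned reports version #{version}" if rc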
3.1.15 XSD Reference
The XML schemas describe the XML returned by the one.*.info methods.
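Since the info methods return this XML as a plain string, any XML library can consume it. A minimal sketch with Ruby's bundled REXML, navigating the CLUSTER schema shown below (the cluster ID is hypothetical):

require 'xmlrpc/client'
require 'rexml/document'

client  = XMLRPC::Client.new2("http://localhost:2633/RPC2")
session = "oneadmin:onepass"   # hypothetical ONE_AUTH contents

rc, xml, errno = client.call("one.cluster.info", session, 100)
doc = REXML::Document.new(xml)

puts doc.elements['CLUSTER/NAME'].text
host_ids = doc.get_elements('CLUSTER/HOSTS/ID').map { |e| e.text.to_i }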
Schemas for Cluster
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:element name="CLUSTER">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="HOSTS">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="DATASTORES">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VNETS">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="TEMPLATE" type="xs:anyType"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:include schemaLocation="cluster.xsd"/>
<xs:element name="CLUSTER_POOL">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:element ref="CLUSTER" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
Schemas for Datastore
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://opennebula.org/XMLSchema" elementFormDefault="qualified" targetNamespace="http://opennebula.org/XMLSchema">
<xs:element name="DATASTORE">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="UID" type="xs:integer"/>
<xs:element name="GID" type="xs:integer"/>
<xs:element name="UNAME" type="xs:string"/>
<xs:element name="GNAME" type="xs:string"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="PERMISSIONS" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="OWNER_U" type="xs:integer"/>
<xs:element name="OWNER_M" type="xs:integer"/>
<xs:element name="OWNER_A" type="xs:integer"/>
<xs:element name="GROUP_U" type="xs:integer"/>
<xs:element name="GROUP_M" type="xs:integer"/>
<xs:element name="GROUP_A" type="xs:integer"/>
<xs:element name="OTHER_U" type="xs:integer"/>
<xs:element name="OTHER_M" type="xs:integer"/>
<xs:element name="OTHER_A" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="DS_MAD" type="xs:string"/>
<xs:element name="TM_MAD" type="xs:string"/>
<xs:element name="BASE_PATH" type="xs:string"/>
<xs:element name="TYPE" type="xs:integer"/>
<xs:element name="DISK_TYPE" type="xs:integer"/>
<xs:element name="CLUSTER_ID" type="xs:integer"/>
<xs:element name="CLUSTER" type="xs:string"/>
<xs:element name="TOTAL_MB" type="xs:integer"/>
<xs:element name="FREE_MB" type="xs:integer"/>
<xs:element name="USED_MB" type="xs:integer"/>
<xs:element name="IMAGES">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="TEMPLATE" type="xs:anyType"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:include schemaLocation="datastore.xsd"/>
<xs:element name="DATASTORE_POOL">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:element ref="DATASTORE" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
Schemas for Group
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:element name="GROUP">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="TEMPLATE" type="xs:anyType"/>
<xs:element name="USERS">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="RESOURCE_PROVIDER" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ZONE_ID" type="xs:integer"/>
<xs:element name="CLUSTER_ID" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="DATASTORE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="IMAGES" type="xs:string"/>
<xs:element name="IMAGES_USED" type="xs:string"/>
<xs:element name="SIZE" type="xs:string"/>
<xs:element name="SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="NETWORK_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="NETWORK" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="LEASES" type="xs:string"/>
<xs:element name="LEASES_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VM_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="VM" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="CPU" type="xs:string"/>
<xs:element name="CPU_USED" type="xs:string"/>
<xs:element name="MEMORY" type="xs:string"/>
<xs:element name="MEMORY_USED" type="xs:string"/>
<xs:element name="VMS" type="xs:string"/>
<xs:element name="VMS_USED" type="xs:string"/>
<xs:element name="VOLATILE_SIZE" type="xs:string"/>
<xs:element name="VOLATILE_SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="IMAGE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="IMAGE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="RVMS" type="xs:string"/>
<xs:element name="RVMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="DEFAULT_GROUP_QUOTAS">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="IMAGES" type="xs:string"/>
<xs:element name="IMAGES_USED" type="xs:string"/>
<xs:element name="SIZE" type="xs:string"/>
<xs:element name="SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="NETWORK_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="NETWORK" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="LEASES" type="xs:string"/>
<xs:element name="LEASES_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VM_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="VM" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="CPU" type="xs:string"/>
<xs:element name="CPU_USED" type="xs:string"/>
<xs:element name="MEMORY" type="xs:string"/>
<xs:element name="MEMORY_USED" type="xs:string"/>
<xs:element name="VMS" type="xs:string"/>
<xs:element name="VMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="IMAGE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="IMAGE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="RVMS" type="xs:string"/>
<xs:element name="RVMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:element name="GROUP_POOL">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:choice maxOccurs="unbounded" minOccurs="0">
<xs:element name="GROUP" maxOccurs="unbounded" minOccurs="0">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="TEMPLATE" type="xs:anyType"/>
<xs:element name="USERS">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="RESOURCE_PROVIDER" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ZONE_ID" type="xs:integer"/>
<xs:element name="CLUSTER_ID" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="QUOTAS" maxOccurs="unbounded" minOccurs="0">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="DATASTORE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="IMAGES" type="xs:string"/>
<xs:element name="IMAGES_USED" type="xs:string"/>
<xs:element name="SIZE" type="xs:string"/>
<xs:element name="SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="NETWORK_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="NETWORK" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="LEASES" type="xs:string"/>
<xs:element name="LEASES_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VM_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="VM" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="CPU" type="xs:string"/>
<xs:element name="CPU_USED" type="xs:string"/>
<xs:element name="MEMORY" type="xs:string"/>
<xs:element name="MEMORY_USED" type="xs:string"/>
<xs:element name="VMS" type="xs:string"/>
<xs:element name="VMS_USED" type="xs:string"/>
<xs:element name="VOLATILE_SIZE" type="xs:string"/>
<xs:element name="VOLATILE_SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="IMAGE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="IMAGE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="RVMS" type="xs:string"/>
<xs:element name="RVMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:choice>
<xs:element name="DEFAULT_GROUP_QUOTAS" minOccurs="1" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="IMAGES" type="xs:string"/>
<xs:element name="IMAGES_USED" type="xs:string"/>
<xs:element name="SIZE" type="xs:string"/>
<xs:element name="SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="NETWORK_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="NETWORK" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="LEASES" type="xs:string"/>
<xs:element name="LEASES_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VM_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="VM" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="CPU" type="xs:string"/>
<xs:element name="CPU_USED" type="xs:string"/>
<xs:element name="MEMORY" type="xs:string"/>
<xs:element name="MEMORY_USED" type="xs:string"/>
<xs:element name="VMS" type="xs:string"/>
<xs:element name="VMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="IMAGE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="IMAGE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="RVMS" type="xs:string"/>
<xs:element name="RVMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
Schemas for Host
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://opennebula.org/XMLSchema" elementFormDefault="qualified" targetNamespace="http://opennebula.org/XMLSchema">
<xs:element name="HOST">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="NAME" type="xs:string"/>
<!-- STATE values
INIT = 0 Initial state for enabled hosts
MONITORING_MONITORED = 1 Monitoring the host (from monitored)
MONITORED = 2 The host has been successfully monitored
ERROR = 3 An error occurred while monitoring the host
DISABLED = 4 The host is disabled, won't be monitored
MONITORING_ERROR = 5 Monitoring the host (from error)
MONITORING_INIT = 6 Monitoring the host (from init)
MONITORING_DISABLED = 7 Monitoring the host (from disabled)
-->
<xs:element name="STATE" type="xs:integer"/>
<xs:element name="IM_MAD" type="xs:string"/>
<xs:element name="VM_MAD" type="xs:string"/>
<xs:element name="VN_MAD" type="xs:string"/>
<xs:element name="LAST_MON_TIME" type="xs:integer"/>
<xs:element name="CLUSTER_ID" type="xs:integer"/>
<xs:element name="CLUSTER" type="xs:string"/>
<xs:element name="HOST_SHARE">
<xs:complexType>
<xs:sequence>
<xs:element name="DISK_USAGE" type="xs:integer"/>
<xs:element name="MEM_USAGE" type="xs:integer"/>
<!-- ^^ KB, Usage of MEMORY calculated by ONE as the sum of the MEMORY requested by all VMs running in the host -->
<xs:element name="CPU_USAGE" type="xs:integer"/>
<!-- ^^ Percentage, Usage of CPU calculated by ONE as the sum of the CPU requested by all VMs running in the host -->
<xs:element name="MAX_DISK" type="xs:integer"/>
<xs:element name="MAX_MEM" type="xs:integer"/>
<!-- ^^ KB, Total memory in the host -->
<xs:element name="MAX_CPU" type="xs:integer"/>
<!-- ^^ Percentage, Total CPU in the host (# cores * 100) -->
<xs:element name="FREE_DISK" type="xs:integer"/>
<xs:element name="FREE_MEM" type="xs:integer"/>
<!-- ^^ KB, Free MEMORY returned by the probes -->
<xs:element name="FREE_CPU" type="xs:integer"/>
<!-- ^^ Percentage, Free CPU as returned by the probes -->
<xs:element name="USED_DISK" type="xs:integer"/>
<xs:element name="USED_MEM" type="xs:integer"/>
<!-- ^^ KB, Memory used by all host processes (including VMs) over a total of MAX_MEM -->
<xs:element name="USED_CPU" type="xs:integer"/>
<!-- ^^ Percentage of CPU used by all host processes (including VMs) over a total of # cores * 100 -->
<xs:element name="RUNNING_VMS" type="xs:integer"/>
<xs:element name="DATASTORES">
<xs:complexType>
<xs:sequence>
<xs:element name="DS" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:all>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="FREE_MB" type="xs:integer"/>
<xs:element name="TOTAL_MB" type="xs:integer"/>
<xs:element name="USED_MB" type="xs:integer"/>
</xs:all>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VMS">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="TEMPLATE" type="xs:anyType"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:include schemaLocation="host.xsd"/>
<xs:element name="HOST_POOL">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:element ref="HOST" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
Schemas for Image
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://opennebula.org/XMLSchema" elementFormDefault="qualified" targetNamespace="http://opennebula.org/XMLSchema">
<xs:element name="IMAGE">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="UID" type="xs:integer"/>
<xs:element name="GID" type="xs:integer"/>
<xs:element name="UNAME" type="xs:string"/>
<xs:element name="GNAME" type="xs:string"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="PERMISSIONS" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="OWNER_U" type="xs:integer"/>
<xs:element name="OWNER_M" type="xs:integer"/>
<xs:element name="OWNER_A" type="xs:integer"/>
<xs:element name="GROUP_U" type="xs:integer"/>
<xs:element name="GROUP_M" type="xs:integer"/>
<xs:element name="GROUP_A" type="xs:integer"/>
<xs:element name="OTHER_U" type="xs:integer"/>
<xs:element name="OTHER_M" type="xs:integer"/>
<xs:element name="OTHER_A" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="TYPE" type="xs:integer"/>
<xs:element name="DISK_TYPE" type="xs:integer"/>
<xs:element name="PERSISTENT" type="xs:integer"/>
<xs:element name="REGTIME" type="xs:integer"/>
<xs:element name="SOURCE" type="xs:string"/>
<xs:element name="PATH" type="xs:string"/>
<xs:element name="FSTYPE" type="xs:string"/>
<xs:element name="SIZE" type="xs:integer"/>
<!-- STATE values,
INIT = 0, Initialization state
READY = 1, Image ready to use
USED = 2, Image in use
DISABLED = 3, Image can not be instantiated by a VM
LOCKED = 4, FS operation for the Image in process
ERROR = 5, Error state, the operation FAILED
CLONE = 6, Image is being cloned
DELETE = 7, DS is deleting the image
USED_PERS = 8, Image is in use and persistent
-->
<xs:element name="STATE" type="xs:integer"/>
<xs:element name="RUNNING_VMS" type="xs:integer"/>
<xs:element name="CLONING_OPS" type="xs:integer"/>
<xs:element name="CLONING_ID" type="xs:integer"/>
<xs:element name="DATASTORE_ID" type="xs:integer"/>
<xs:element name="DATASTORE" type="xs:string"/>
<xs:element name="VMS">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="CLONES">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="TEMPLATE" type="xs:anyType"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:include schemaLocation="image.xsd"/>
<xs:element name="IMAGE_POOL">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:element ref="IMAGE" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
Schemas for User
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:element name="USER">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="GID" type="xs:integer"/>
<xs:element name="GROUPS">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="1" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="GNAME" type="xs:string"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="PASSWORD" type="xs:string"/>
<xs:element name="AUTH_DRIVER" type="xs:string"/>
<xs:element name="ENABLED" type="xs:integer"/>
<xs:element name="TEMPLATE" type="xs:anyType"/>
<xs:element name="DATASTORE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="IMAGES" type="xs:string"/>
<xs:element name="IMAGES_USED" type="xs:string"/>
<xs:element name="SIZE" type="xs:string"/>
<xs:element name="SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="NETWORK_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="NETWORK" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="LEASES" type="xs:string"/>
<xs:element name="LEASES_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VM_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="VM" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="CPU" type="xs:string"/>
<xs:element name="CPU_USED" type="xs:string"/>
<xs:element name="MEMORY" type="xs:string"/>
<xs:element name="MEMORY_USED" type="xs:string"/>
<xs:element name="VMS" type="xs:string"/>
<xs:element name="VMS_USED" type="xs:string"/>
<xs:element name="VOLATILE_SIZE" type="xs:string"/>
<xs:element name="VOLATILE_SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="IMAGE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="IMAGE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="RVMS" type="xs:string"/>
<xs:element name="RVMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="DEFAULT_USER_QUOTAS">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="IMAGES" type="xs:string"/>
<xs:element name="IMAGES_USED" type="xs:string"/>
<xs:element name="SIZE" type="xs:string"/>
<xs:element name="SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="NETWORK_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="NETWORK" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="LEASES" type="xs:string"/>
<xs:element name="LEASES_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VM_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="VM" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="CPU" type="xs:string"/>
<xs:element name="CPU_USED" type="xs:string"/>
<xs:element name="MEMORY" type="xs:string"/>
<xs:element name="MEMORY_USED" type="xs:string"/>
<xs:element name="VMS" type="xs:string"/>
<xs:element name="VMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="IMAGE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="IMAGE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="RVMS" type="xs:string"/>
<xs:element name="RVMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:include schemaLocation="user.xsd"/>
<xs:element name="USER_POOL">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:choice maxOccurs="unbounded" minOccurs="0">
<xs:element name="USER" maxOccurs="unbounded" minOccurs="0">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="GID" type="xs:integer"/>
<xs:element name="GROUPS">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer" minOccurs="1" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="GNAME" type="xs:string"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="PASSWORD" type="xs:string"/>
<xs:element name="AUTH_DRIVER" type="xs:string"/>
<xs:element name="ENABLED" type="xs:integer"/>
<xs:element name="TEMPLATE" type="xs:anyType"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="QUOTAS" maxOccurs="unbounded" minOccurs="0">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="DATASTORE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="IMAGES" type="xs:string"/>
<xs:element name="IMAGES_USED" type="xs:string"/>
<xs:element name="SIZE" type="xs:string"/>
<xs:element name="SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="NETWORK_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="NETWORK" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="LEASES" type="xs:string"/>
<xs:element name="LEASES_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VM_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="VM" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="CPU" type="xs:string"/>
<xs:element name="CPU_USED" type="xs:string"/>
<xs:element name="MEMORY" type="xs:string"/>
<xs:element name="MEMORY_USED" type="xs:string"/>
<xs:element name="VMS" type="xs:string"/>
<xs:element name="VMS_USED" type="xs:string"/>
<xs:element name="VOLATILE_SIZE" type="xs:string"/>
<xs:element name="VOLATILE_SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="IMAGE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="IMAGE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="RVMS" type="xs:string"/>
<xs:element name="RVMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:choice>
<xs:element name="DEFAULT_USER_QUOTAS">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="DATASTORE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="IMAGES" type="xs:string"/>
<xs:element name="IMAGES_USED" type="xs:string"/>
<xs:element name="SIZE" type="xs:string"/>
<xs:element name="SIZE_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="NETWORK_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="NETWORK" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="LEASES" type="xs:string"/>
<xs:element name="LEASES_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="VM_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="VM" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="CPU" type="xs:string"/>
<xs:element name="CPU_USED" type="xs:string"/>
<xs:element name="MEMORY" type="xs:string"/>
<xs:element name="MEMORY_USED" type="xs:string"/>
<xs:element name="VMS" type="xs:string"/>
<xs:element name="VMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="IMAGE_QUOTA" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="IMAGE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string"/>
<xs:element name="RVMS" type="xs:string"/>
<xs:element name="RVMS_USED" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
Schemas for Virtual Machine
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:element name="VM">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="UID" type="xs:integer"/>
<xs:element name="GID" type="xs:integer"/>
<xs:element name="UNAME" type="xs:string"/>
<xs:element name="GNAME" type="xs:string"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="PERMISSIONS" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="OWNER_U" type="xs:integer"/>
<xs:element name="OWNER_M" type="xs:integer"/>
<xs:element name="OWNER_A" type="xs:integer"/>
<xs:element name="GROUP_U" type="xs:integer"/>
<xs:element name="GROUP_M" type="xs:integer"/>
<xs:element name="GROUP_A" type="xs:integer"/>
<xs:element name="OTHER_U" type="xs:integer"/>
<xs:element name="OTHER_M" type="xs:integer"/>
<xs:element name="OTHER_A" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="LAST_POLL" type="xs:integer"/>
<!-- STATE values,
see http://opennebula.org/_media/documentation:rel3.6:states-complete.png
INIT = 0
PENDING = 1
HOLD = 2
ACTIVE = 3 In this state, the Life Cycle Manager state is relevant
STOPPED = 4
SUSPENDED = 5
DONE = 6
FAILED = 7
POWEROFF = 8
UNDEPLOYED = 9
-->
<xs:element name="STATE" type="xs:integer"/>
<!-- LCM_STATE values, this sub-state is relevant only when STATE is
ACTIVE (3)
LCM_INIT = 0,
PROLOG = 1,
BOOT = 2,
RUNNING = 3,
MIGRATE = 4,
SAVE_STOP = 5,
SAVE_SUSPEND = 6,
SAVE_MIGRATE = 7,
PROLOG_MIGRATE = 8,
PROLOG_RESUME = 9,
EPILOG_STOP = 10,
EPILOG = 11,
SHUTDOWN = 12,
CANCEL = 13,
FAILURE = 14,
CLEANUP_RESUBMIT = 15,
UNKNOWN = 16,
HOTPLUG = 17,
SHUTDOWN_POWEROFF = 18,
BOOT_UNKNOWN = 19,
BOOT_POWEROFF = 20,
BOOT_SUSPENDED = 21,
BOOT_STOPPED = 22,
CLEANUP_DELETE = 23,
HOTPLUG_SNAPSHOT = 24,
HOTPLUG_NIC = 25,
HOTPLUG_SAVEAS = 26,
HOTPLUG_SAVEAS_POWEROFF = 27,
HOTPLUG_SAVEAS_SUSPENDED = 28,
SHUTDOWN_UNDEPLOY = 29,
EPILOG_UNDEPLOY = 30,
PROLOG_UNDEPLOY = 31,
BOOT_UNDEPLOY = 32
-->
<xs:element name="LCM_STATE" type="xs:integer"/>
<xs:element name="RESCHED" type="xs:integer"/>
<xs:element name="STIME" type="xs:integer"/>
<xs:element name="ETIME" type="xs:integer"/>
<xs:element name="DEPLOY_ID" type="xs:string"/>
<!-- MEMORY consumption in kilobytes -->
<xs:element name="MEMORY" type="xs:integer"/>
<!-- Percentage of 1 CPU consumed (two fully consumed cpu is 200) -->
<xs:element name="CPU" type="xs:integer"/>
<!-- NET_TX: Sent bytes to the network -->
<xs:element name="NET_TX" type="xs:integer"/>
<!-- NET_RX: Received bytes from the network -->
<xs:element name="NET_RX" type="xs:integer"/>
<xs:element name="TEMPLATE" type="xs:anyType"/>
<xs:element name="USER_TEMPLATE" type="xs:anyType"/>
<xs:element name="HISTORY_RECORDS">
<xs:complexType>
<xs:sequence>
<xs:element name="HISTORY" maxOccurs="unbounded" minOccurs="0">
<xs:complexType>
<xs:sequence>
<xs:element name="OID" type="xs:integer"/>
<xs:element name="SEQ" type="xs:integer"/>
<xs:element name="HOSTNAME" type="xs:string"/>
<xs:element name="HID" type="xs:integer"/>
<xs:element name="CID" type="xs:integer"/>
<xs:element name="STIME" type="xs:integer"/>
<xs:element name="ETIME" type="xs:integer"/>
<xs:element name="VMMMAD" type="xs:string"/>
<xs:element name="VNMMAD" type="xs:string"/>
<xs:element name="TMMAD" type="xs:string"/>
<xs:element name="DS_LOCATION" type="xs:string"/>
<xs:element name="DS_ID" type="xs:integer"/>
<xs:element name="PSTIME" type="xs:integer"/>
<xs:element name="PETIME" type="xs:integer"/>
<xs:element name="RSTIME" type="xs:integer"/>
<xs:element name="RETIME" type="xs:integer"/>
<xs:element name="ESTIME" type="xs:integer"/>
<xs:element name="EETIME" type="xs:integer"/>
<!-- REASON values:
NONE = 0 History record is not closed yet
ERROR = 1 History record was closed because of an error
USER = 2 History record was closed because of a user action
-->
<xs:element name="REASON" type="xs:integer"/>
<!-- ACTION values:
NONE_ACTION = 0
MIGRATE_ACTION = 1
LIVE_MIGRATE_ACTION = 2
SHUTDOWN_ACTION = 3
SHUTDOWN_HARD_ACTION = 4
UNDEPLOY_ACTION = 5
UNDEPLOY_HARD_ACTION = 6
HOLD_ACTION = 7
RELEASE_ACTION = 8
STOP_ACTION = 9
SUSPEND_ACTION = 10
RESUME_ACTION = 11
BOOT_ACTION = 12
DELETE_ACTION = 13
DELETE_RECREATE_ACTION = 14
REBOOT_ACTION = 15
REBOOT_HARD_ACTION = 16
RESCHED_ACTION = 17
UNRESCHED_ACTION = 18
POWEROFF_ACTION = 19
POWEROFF_HARD_ACTION = 20
-->
<xs:element name="ACTION" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="unqualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:include schemaLocation="vm.xsd"/>
<xs:element name="VM_POOL">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:element ref="VM" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
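These XSDs can be used to validate the XML documents returned by the XML-RPC calls before processing them. The following is a minimal sketch; the nokogiri gem and the local file names (vm.xsd saved from this section, vm.xml holding the output of a call such as onevm show -x) are assumptions, and any XSD-capable validator would do.
#!/usr/bin/env ruby
# Validate a VM document against the vm.xsd schema listed above.
require 'nokogiri'

xsd = Nokogiri::XML::Schema(File.read('vm.xsd'))
doc = Nokogiri::XML(File.read('vm.xml')) # e.g. saved from 'onevm show -x <id>'

errors = xsd.validate(doc)
if errors.empty?
  puts 'vm.xml is valid against the VM schema'
else
  errors.each { |error| puts error.message }
end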
Schemas for Virtual Machine Template
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://opennebula.org/XMLSchema" elementFormDefault="qualified" targetNamespace="http://opennebula.org/XMLSchema">
<xs:element name="VMTEMPLATE">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="UID" type="xs:integer"/>
<xs:element name="GID" type="xs:integer"/>
<xs:element name="UNAME" type="xs:string"/>
<xs:element name="GNAME" type="xs:string"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="PERMISSIONS" minOccurs="1" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="OWNER_U" type="xs:integer"/>
<xs:element name="OWNER_M" type="xs:integer"/>
<xs:element name="OWNER_A" type="xs:integer"/>
<xs:element name="GROUP_U" type="xs:integer"/>
<xs:element name="GROUP_M" type="xs:integer"/>
<xs:element name="GROUP_A" type="xs:integer"/>
<xs:element name="OTHER_U" type="xs:integer"/>
<xs:element name="OTHER_M" type="xs:integer"/>
<xs:element name="OTHER_A" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="REGTIME" type="xs:integer"/>
<xs:element name="TEMPLATE" type="xs:anyType"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:include schemaLocation="vmtemplate.xsd"/>
<xs:element name="VMTEMPLATE_POOL">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:element ref="VMTEMPLATE" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
Schemas for Virtual Network
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:element name="VNET">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="UID" type="xs:integer"/>
<xs:element name="GID" type="xs:integer"/>
<xs:element name="UNAME" type="xs:string"/>
<xs:element name="GNAME" type="xs:string"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="PERMISSIONS" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="OWNER_U" type="xs:integer"/>
<xs:element name="OWNER_M" type="xs:integer"/>
<xs:element name="OWNER_A" type="xs:integer"/>
<xs:element name="GROUP_U" type="xs:integer"/>
<xs:element name="GROUP_M" type="xs:integer"/>
<xs:element name="GROUP_A" type="xs:integer"/>
<xs:element name="OTHER_U" type="xs:integer"/>
<xs:element name="OTHER_M" type="xs:integer"/>
<xs:element name="OTHER_A" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="CLUSTER_ID" type="xs:integer"/>
<xs:element name="CLUSTER" type="xs:string"/>
<xs:element name="TYPE" type="xs:integer"/>
<xs:element name="BRIDGE" type="xs:string"/>
<xs:element name="VLAN" type="xs:integer"/>
<xs:element name="PHYDEV" type="xs:string"/>
<xs:element name="VLAN_ID" type="xs:string"/>
<xs:element name="GLOBAL_PREFIX" type="xs:string"/>
<xs:element name="SITE_PREFIX" type="xs:string"/>
<xs:element name="RANGE" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="IP_START" type="xs:string"/>
<xs:element name="IP_END" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="TOTAL_LEASES" type="xs:integer"/>
<xs:element name="TEMPLATE" type="xs:anyType"/>
<xs:element name="LEASES" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence minOccurs="0">
<xs:element name="LEASE" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="MAC" type="xs:string"/>
<xs:element name="IP" type="xs:string"/>
<xs:element name="IP6_LINK" type="xs:string"/>
<xs:element name="IP6_SITE" type="xs:string" minOccurs="0" maxOccurs="1"/>
<xs:element name="IP6_GLOBAL" type="xs:string" minOccurs="0" maxOccurs="1"/>
<xs:element name="USED" type="xs:integer"/>
<xs:element name="VID" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:include schemaLocation="vnet.xsd"/>
<xs:element name="VNET_POOL">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:element ref="VNET" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
Schemas for Accounting
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"
targetNamespace="http://opennebula.org/XMLSchema" xmlns="http://opennebula.org/XMLSchema">
<xs:element name="HISTORY_RECORDS">
<xs:complexType>
<xs:sequence maxOccurs="1" minOccurs="1">
<xs:element ref="HISTORY" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="HISTORY">
<xs:complexType>
<xs:sequence>
<xs:element name="OID" type="xs:integer"/>
<xs:element name="SEQ" type="xs:integer"/>
<xs:element name="HOSTNAME" type="xs:string"/>
<xs:element name="HID" type="xs:integer"/>
<xs:element name="CID" type="xs:integer"/>
<xs:element name="STIME" type="xs:integer"/>
<xs:element name="ETIME" type="xs:integer"/>
<xs:element name="VMMMAD" type="xs:string"/>
<xs:element name="VNMMAD" type="xs:string"/>
<xs:element name="TMMAD" type="xs:string"/>
<xs:element name="DS_LOCATION" type="xs:string"/>
<xs:element name="DS_ID" type="xs:integer"/>
<xs:element name="PSTIME" type="xs:integer"/>
<xs:element name="PETIME" type="xs:integer"/>
<xs:element name="RSTIME" type="xs:integer"/>
<xs:element name="RETIME" type="xs:integer"/>
<xs:element name="ESTIME" type="xs:integer"/>
<xs:element name="EETIME" type="xs:integer"/>
<!-- REASON values:
NONE = 0 History record is not closed yet
ERROR = 1 History record was closed because of an error
USER = 2 History record was closed because of a user action
-->
<xs:element name="REASON" type="xs:integer"/>
<!-- ACTION values:
NONE_ACTION = 0
MIGRATE_ACTION = 1
LIVE_MIGRATE_ACTION = 2
SHUTDOWN_ACTION = 3
SHUTDOWN_HARD_ACTION = 4
UNDEPLOY_ACTION = 5
UNDEPLOY_HARD_ACTION = 6
HOLD_ACTION = 7
RELEASE_ACTION = 8
STOP_ACTION = 9
SUSPEND_ACTION = 10
RESUME_ACTION = 11
BOOT_ACTION = 12
DELETE_ACTION = 13
DELETE_RECREATE_ACTION = 14
REBOOT_ACTION = 15
REBOOT_HARD_ACTION = 16
RESCHED_ACTION = 17
UNRESCHED_ACTION = 18
POWEROFF_ACTION = 19
POWEROFF_HARD_ACTION = 20
-->
<xs:element name="ACTION" type="xs:integer"/>
<xs:element name="VM">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:integer"/>
<xs:element name="UID" type="xs:integer"/>
<xs:element name="GID" type="xs:integer"/>
<xs:element name="UNAME" type="xs:string"/>
<xs:element name="GNAME" type="xs:string"/>
<xs:element name="NAME" type="xs:string"/>
<xs:element name="PERMISSIONS" minOccurs="0" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="OWNER_U" type="xs:integer"/>
<xs:element name="OWNER_M" type="xs:integer"/>
<xs:element name="OWNER_A" type="xs:integer"/>
<xs:element name="GROUP_U" type="xs:integer"/>
<xs:element name="GROUP_M" type="xs:integer"/>
<xs:element name="GROUP_A" type="xs:integer"/>
<xs:element name="OTHER_U" type="xs:integer"/>
<xs:element name="OTHER_M" type="xs:integer"/>
<xs:element name="OTHER_A" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="LAST_POLL" type="xs:integer"/>
<!-- STATE values,
see http://opennebula.org/documentation:documentation:api#actions_for_virtual_machine_management
INIT = 0
PENDING = 1
HOLD = 2
ACTIVE = 3 In this state, the Life Cycle Manager state is relevant
STOPPED = 4
SUSPENDED = 5
DONE = 6
FAILED = 7
POWEROFF = 8
UNDEPLOYED = 9
-->
<xs:element name="STATE" type="xs:integer"/>
<!-- LCM_STATE values, this sub-state is relevant only when STATE is
ACTIVE (3)
LCM_INIT = 0,
PROLOG = 1,
BOOT = 2,
RUNNING = 3,
MIGRATE = 4,
SAVE_STOP = 5,
SAVE_SUSPEND = 6,
SAVE_MIGRATE = 7,
PROLOG_MIGRATE = 8,
PROLOG_RESUME = 9,
EPILOG_STOP = 10,
EPILOG = 11,
SHUTDOWN = 12,
CANCEL = 13,
FAILURE = 14,
CLEANUP_RESUBMIT = 15,
UNKNOWN = 16,
HOTPLUG = 17,
SHUTDOWN_POWEROFF = 18,
BOOT_UNKNOWN = 19,
BOOT_POWEROFF = 20,
BOOT_SUSPENDED = 21,
BOOT_STOPPED = 22,
CLEANUP_DELETE = 23,
HOTPLUG_SNAPSHOT = 24,
HOTPLUG_NIC = 25,
HOTPLUG_SAVEAS = 26,
HOTPLUG_SAVEAS_POWEROFF = 27,
HOTPLUG_SAVEAS_SUSPENDED = 28,
SHUTDOWN_UNDEPLOY = 29,
EPILOG_UNDEPLOY = 30,
PROLOG_UNDEPLOY = 31,
BOOT_UNDEPLOY = 32
-->
<xs:element name="LCM_STATE" type="xs:integer"/>
<xs:element name="RESCHED" type="xs:integer"/>
<xs:element name="STIME" type="xs:integer"/>
<xs:element name="ETIME" type="xs:integer"/>
<xs:element name="DEPLOY_ID" type="xs:string"/>
<!-- MEMORY consumption in kilobytes -->
<xs:element name="MEMORY" type="xs:integer"/>
<!-- Percentage of 1 CPU consumed (two fully consumed cpu is 200) -->
<xs:element name="CPU" type="xs:integer"/>
<!-- NET_TX: Sent bytes to the network -->
<xs:element name="NET_TX" type="xs:integer"/>
<!-- NET_RX: Received bytes from the network -->
<xs:element name="NET_RX" type="xs:integer"/>
<xs:element name="TEMPLATE" type="xs:anyType"/>
<xs:element name="USER_TEMPLATE" type="xs:anyType"/>
<xs:element name="HISTORY_RECORDS">
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
3.2 Ruby OpenNebula Cloud API
This page contains the OpenNebula Cloud API Specification for Ruby. It has been designed as a wrapper for the
XML-RPC methods, with some basic helpers. This means that you should be familiar with the XML-RPC API and
the XML formats returned by the OpenNebula core. As stated in the XML-RPC documentation, you can download the
XML Schemas (XSD) here.
3.2.1 API Documentation
You can consult the doc online.
3.2.2 Usage
You can use the Ruby OCA included in the OpenNebula distribution by adding the OpenNebula Ruby library path to
the search path:
##############################################################################
# Environment Configuration
##############################################################################
ONE_LOCATION=ENV["ONE_LOCATION"]
if !ONE_LOCATION
RUBY_LIB_LOCATION="/usr/lib/one/ruby"
else
RUBY_LIB_LOCATION=ONE_LOCATION+"/lib/ruby"
end
$: << RUBY_LIB_LOCATION
##############################################################################
# Required libraries
##############################################################################
require 'opennebula'
3.2.3 Code Sample: Shutdown all the VMs of the Pool
This is a small code snippet. As you can see, the code flow is as follows:
Create a new Client, setting up the authorization string. You only need to create it once.
Get the VirtualMachine pool that contains the VirtualMachines owned by this User.
You can perform actions over these objects right away, like vm.shutdown. In this example all the
VirtualMachines will be shut down.
#!/usr/bin/env ruby
##############################################################################
# Environment Configuration
##############################################################################
ONE_LOCATION=ENV["ONE_LOCATION"]
if !ONE_LOCATION
RUBY_LIB_LOCATION="/usr/lib/one/ruby"
else
RUBY_LIB_LOCATION=ONE_LOCATION+"/lib/ruby"
end
$: << RUBY_LIB_LOCATION
##############################################################################
# Required libraries
##############################################################################
require 'opennebula'
include OpenNebula
# OpenNebula credentials
CREDENTIALS = "oneuser:onepass"
# XML_RPC endpoint where OpenNebula is listening
ENDPOINT = "http://localhost:2633/RPC2"
client = Client.new(CREDENTIALS, ENDPOINT)
vm_pool = VirtualMachinePool.new(client, -1)
rc = vm_pool.info
if OpenNebula.is_error?(rc)
puts rc.message
exit -1
end
vm_pool.each do |vm|
rc = vm.shutdown
if OpenNebula.is_error?(rc)
puts "Virtual Machine #{vm.id}: #{rc.message}"
else
puts "Virtual Machine #{vm.id}: Shutting down"
end
end
exit 0
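Note that shutdown is only meaningful for VMs that are actually running; for the others the loop above will simply print the error returned by OpenNebula. A hedged variant that checks the state first, assuming the state_str and lcm_state_str helpers of the Ruby OCA:
# Only shut down VMs that are in ACTIVE/RUNNING, and report the rest.
vm_pool.each do |vm|
  if vm.state_str == "ACTIVE" && vm.lcm_state_str == "RUNNING"
    rc = vm.shutdown
    if OpenNebula.is_error?(rc)
      puts "Virtual Machine #{vm.id}: #{rc.message}"
    else
      puts "Virtual Machine #{vm.id}: Shutting down"
    end
  else
    puts "Virtual Machine #{vm.id}: skipped (state #{vm.state_str})"
  end
end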
3.2.4 Code Sample: Create a new VirtualNetwork
#!/usr/bin/env ruby
##############################################################################
# Environment Configuration
##############################################################################
ONE_LOCATION=ENV["ONE_LOCATION"]
if !ONE_LOCATION
RUBY_LIB_LOCATION="/usr/lib/one/ruby"
else
RUBY_LIB_LOCATION=ONE_LOCATION+"/lib/ruby"
end
$: << RUBY_LIB_LOCATION
##############################################################################
# Required libraries
##############################################################################
require 'opennebula'
include OpenNebula
# OpenNebula credentials
CREDENTIALS = "oneuser:onepass"
# XML_RPC endpoint where OpenNebula is listening
ENDPOINT = "http://localhost:2633/RPC2"
client = Client.new(CREDENTIALS, ENDPOINT)
template = <<-EOT
NAME = "Red LAN"
TYPE = RANGED
# Now we'll use the host private network (physical)
BRIDGE = vbr0
NETWORK_SIZE = C
NETWORK_ADDRESS = 192.168.0.0
# Custom Attributes to be used in Context
GATEWAY = 192.168.0.1
DNS = 192.168.0.1
LOAD_BALANCER = 192.168.0.3
EOT
xml = OpenNebula::VirtualNetwork.build_xml
vn = OpenNebula::VirtualNetwork.new(xml, client)
rc = vn.allocate(template)
if OpenNebula.is_error?(rc)
puts rc.message
exit -1
else
puts "ID: #{vn.id.to_s}"
end
puts "Before info:"
puts vn.to_xml
puts
vn.info
puts "After info:"
puts vn.to_xml
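Once you are done with the network it can be removed with a regular delete call, which, like allocate, returns nil on success or an Error object otherwise. A short cleanup sketch:
# Cleanup: remove the Virtual Network created above.
rc = vn.delete

if OpenNebula.is_error?(rc)
  puts rc.message
  exit -1
end

puts "VirtualNetwork #{vn.id} deleted"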
3.3 Java OpenNebula Cloud API
This page contains the OpenNebula Cloud API Specification for Java. It has been designed as a wrapper for the
XML-RPC methods, with some basic helpers. This means that you should be familiar with the XML-RPC API and the XML
formats returned by the OpenNebula core. As stated in the XML-RPC documentation, you can download the XML
Schemas (XSD) here.
3.3.1 Download
The Java OCA is part of the OpenNebula core distribution. If you installed from the Debian, Ubuntu or CentOS
packages it should be already installed in /usr/share/java/org.opennebula.client.jar. You can also
download the .jar file compiled using Java OpenJDK 1.7, the required libraries, and the javadoc packaged in a tar.gz
file following this link.
You can also consult the javadoc online.
3.3.2 Usage
To use the OpenNebula Cloud API for Java in your Java project, you have to add to the classpath the
org.opennebula.client.jar file and the xml-rpc libraries located in the lib directory.
3.3.3 Code Sample
This is a small code snippet. As you can see, the code flow is as follows:
Create a org.opennebula.client.Client object, setting up the authorization string and the endpoint.
You only need to create it once.
Create a pool (e.g. HostPool) or element (e.g. VirtualNetwork) object.
You can perform actions over these objects right away, like myVNet.delete();
If you want to query any information (like what objects the pool contains, or one of the element attributes), you
have to issue an info() call before, so the object retrieves the data from OpenNebula.
For more complete examples, please check the src/oca/java/share/examples directory included. You may
also be interested in the java files included in src/oca/java/test.
// First of all, a Client object has to be created.
// Here the client will try to connect to OpenNebula using the default
// options: the auth. file will be assumed to be at $ONE_AUTH, and the
// endpoint will be set to the environment variable $ONE_XMLRPC.
Client oneClient;
try
{
oneClient = new Client();
// This VM template is a valid one, but it will probably fail to run
// if we try to deploy it; the path for the image is unlikely to
// exist.
String vmTemplate =
"NAME = vm_from_java CPU = 0.1 MEMORY = 64\n"
+ "DISK = [\n"
+ "\tsource = \"/home/user/vmachines/ttylinux/ttylinux.img\",\n"
+ "\ttarget = \"hda\",\n"
+ "\treadonly = \"no\" ]\n"
+ "FEATURES = [ acpi=\"no\" ]";
System.out.print("Trying to allocate the virtual machine... ");
OneResponse rc = VirtualMachine.allocate(oneClient, vmTemplate);
if( rc.isError() )
{
System.out.println( "failed!");
throw new Exception( rc.getErrorMessage() );
}
// The response message is the new VM's ID
int newVMID = Integer.parseInt(rc.getMessage());
System.out.println("ok, ID " + newVMID + ".");
// We can create a representation for the new VM, using the returned
// VM-ID
VirtualMachine vm = new VirtualMachine(newVMID, oneClient);
// Let's hold the VM, so the scheduler won't try to deploy it
System.out.print("Trying to hold the new VM... ");
rc = vm.hold();
if(rc.isError())
{
System.out.println("failed!");
throw new Exception( rc.getErrorMessage() );
}
// And now we can request its information.
rc = vm.info();
if(rc.isError())
throw new Exception( rc.getErrorMessage() );
System.out.println();
System.out.println(
"This is the information OpenNebula stores for the new VM:");
System.out.println(rc.getMessage() + "\n");
// This VirtualMachine object has some helpers, so we can access its
// attributes easily (remember to load the data first using the info
// method).
System.out.println("The new VM " +
vm.getName() + " has status: " + vm.status());
// And we can also use xpath expressions
System.out.println("The path of the disk is");
System.out.println( "\t" + vm.xpath("template/disk/source") );
// We have also some useful helpers for the actions you can perform
// on a virtual machine, like cancel or finalize:
rc = vm.finalizeVM();
System.out.println("\nTrying to finalize (delete) the VM " +
vm.getId() + "...");
}
catch (Exception e)
{
System.out.println(e.getMessage());
}
3.3.4 Compilation
To compile the Java OCA, untar the OpenNebula source, cd to the java directory and use the build script:
$ cd src/oca/java
$ ./build.sh -d
Compiling java files into class files...
Packaging class files in a jar...
Generating javadocs...
This command will compile and package the code in jar/org.opennebula.client.jar, and the javadoc will
be created in share/doc/.
You might want to copy the .jar files to a more convenient directory. You could use /usr/lib/one/java/:
$ sudo mkdir /usr/lib/one/java/
$ sudo cp jar/* lib/* /usr/lib/one/java/
3.4 OneFlow Specification
3.4.1 Overview
The OpenNebula OneFlow API is a RESTful service to create, control and monitor multi-tier applications or services
composed of interconnected Virtual Machines with deployment dependencies between them. Each group of Virtual
Machines is deployed and managed as a single entity, and is completely integrated with the advanced OpenNebula
user and group management. There are two kinds of resources: service templates and services. All data is sent and
received as JSON.
This guide is intended for developers. The OpenNebula distribution includes a CLI to interact with OneFlow, and it is
also fully integrated in the Sunstone GUI.
3.4.2 Authentication & Authorization
User authentication uses HTTP Basic access authentication. The credentials passed should be the user name and
password.
$ curl -u "username:password" https://oneflow.server
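The same authenticated request can be issued from any HTTP client library. For example, a minimal sketch using only the Ruby standard library; the host name and credentials are placeholders, and port 2474 is the one used throughout the examples in this section:
require 'net/http'
require 'uri'

uri = URI('http://oneflow.server:2474/service')

request = Net::HTTP::Get.new(uri)
request.basic_auth('username', 'password') # HTTP Basic credentials

response = Net::HTTP.start(uri.host, uri.port) do |http|
  http.request(request)
end

puts response.code # one of the status codes listed below
puts response.body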
3.4.3 Return Codes
The OneFlow API uses the following subset of HTTP Status codes:
200 OK : The request has succeeded.
201 Created : Request was successful and a new resource has been created
202 Accepted : The request has been accepted for processing, but the processing has not been completed
204 No Content : The request has been accepted for processing, but no info in the response
400 Bad Request : Malformed syntax
401 Unauthorized : Bad authentication
403 Forbidden : Bad authorization
404 Not Found : Resource not found
500 Internal Server Error : The server encountered an unexpected condition which prevented it from fulfilling
the request.
501 Not Implemented : The functionality requested is not supported
> POST /service_template HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: oneflow.server:2474
>
< HTTP/1.1 400 Bad Request
< Content-Type: text/html;charset=utf-8
< Content-Type:application/json;charset=utf-8
< Content-Length: 40
<
{
"error": {
"message": "Role worker cardinality must be greater than or equal to min_vms"
}
}
The methods specified below are described without taking into account 4xx errors (which can be inferred from the
authorization information in the section above) and 5xx errors (which are method independent). HTTP verbs not defined
for a particular entity will return a 501 Not Implemented.
3.4.4 Methods
Service
GET /service : List the contents of the SERVICE collection. Response: 200 OK, a JSON representation of the collection in the http body.
GET /service/<id> : Show the SERVICE resource identified by <id>. Response: 200 OK, a JSON representation of the resource in the http body.
DELETE /service/<id> : Delete the SERVICE resource identified by <id>. Response: 201.
POST /service/<id>/action : Perform an action on the SERVICE resource identified by <id>. Available actions: shutdown, recover, chown, chgrp, chmod. Response: 201.
PUT /service/<id>/role/<name> : Update the ROLE identified by <name> of the SERVICE resource identified by <id>. Currently the only attribute that can be updated is the cardinality. Response: 200 OK.
POST /service/<id>/role/<name>/action : Perform an action on all the Virtual Machines belonging to the ROLE identified by <name> of the SERVICE resource identified by <id>. Available actions: shutdown, shutdown-hard, undeploy, undeploy-hard, hold, release, stop, suspend, resume, boot, delete, delete-recreate, reboot, reboot-hard, poweroff, poweroff-hard, snapshot-create. Response: 201.
Service Template
GET /service_template : List the contents of the SERVICE_TEMPLATE collection. Response: 200 OK, a JSON representation of the collection in the http body.
GET /service_template/<id> : Show the SERVICE_TEMPLATE resource identified by <id>. Response: 200 OK, a JSON representation of the resource in the http body.
DELETE /service_template/<id> : Delete the SERVICE_TEMPLATE resource identified by <id>. Response: 201.
POST /service_template : Create a new SERVICE_TEMPLATE resource. Response: 201 Created, a JSON representation of the new SERVICE_TEMPLATE resource in the http body.
PUT /service_template/<id> : Update the SERVICE_TEMPLATE resource identified by <id>. Response: 200 OK.
POST /service_template/<id>/action : Perform an action on the SERVICE_TEMPLATE resource identified by <id>. Available actions: instantiate, chown, chgrp, chmod. Response: 201.
3.4.5 Resource Representation
Service Schema
A Service is defined with JSON syntax templates.
name (string, mandatory): Name of the Service.
deployment (string, optional): Deployment strategy. none: all roles are deployed at the same time; straight: each Role is deployed when all its parent Roles are running. Defaults to none.
shutdown_action (string, optional): VM shutdown action, either shutdown or shutdown-hard. If it is not set, the default set in oneflow-server.conf will be used.
roles (array of Roles, mandatory): Array of Roles, see below.
Each Role is defined as:
name (string, mandatory): Role name.
cardinality (integer, optional): Number of VMs to deploy. Defaults to 1.
vm_template (integer, mandatory): OpenNebula VM Template ID. See the OpenNebula documentation for VM Templates.
parents (array of string, optional): Names of the roles that must be deployed before this one.
shutdown_action (string, optional): VM shutdown action, either shutdown or shutdown-hard. If it is not set, the one set for the Service will be used.
min_vms (integer, optional; mandatory for elasticity): Minimum number of VMs for elasticity adjustments.
max_vms (integer, optional; mandatory for elasticity): Maximum number of VMs for elasticity adjustments.
cooldown (integer, optional): Cooldown period duration after a scale operation, in seconds. If it is not set, the default set in oneflow-server.conf will be used.
elasticity_policies (array of Policies, optional): Array of Elasticity Policies, see below.
scheduled_policies (array of Policies, optional): Array of Scheduled Policies, see below.
To define an elasticity policy:
type (string, mandatory): Type of adjustment. Values: CHANGE, CARDINALITY, PERCENTAGE_CHANGE.
adjust (integer, mandatory): Positive or negative adjustment. Its meaning depends on type.
min_adjust_step (integer, optional): Optional parameter for the PERCENTAGE_CHANGE adjustment type. If present, the policy will change the cardinality by at least the number of VMs set in this attribute.
expression (string, mandatory): Expression to trigger the elasticity.
period_number (integer, optional): Number of periods that the expression must be true before the elasticity is triggered.
period (integer, optional): Duration, in seconds, of each period in period_number.
cooldown (integer, optional): Cooldown period duration after a scale operation, in seconds. If it is not set, the one set for the Role will be used.
And each scheduled policy is defined as:
type (string, mandatory): Type of adjustment. Values: CHANGE, CARDINALITY, PERCENTAGE_CHANGE.
adjust (integer, mandatory): Positive or negative adjustment. Its meaning depends on type.
min_adjust_step (integer, optional): Optional parameter for the PERCENTAGE_CHANGE adjustment type. If present, the policy will change the cardinality by at least the number of VMs set in this attribute.
recurrence (string, optional): Time for recurring adjustments. Time is specified with the Unix cron syntax.
start_time (string, optional): Exact time for the adjustment.
{
    :type => :object,
    :properties => {
        'name' => {
            :type => :string,
            :required => true
        },
        'deployment' => {
            :type => :string,
            :enum => %w{none straight},
            :default => 'none'
        },
        'shutdown_action' => {
            :type => :string,
            :enum => %w{shutdown shutdown-hard},
            :required => false
        },
        'roles' => {
            :type => :array,
            :items => ROLE_SCHEMA,
            :required => true
        }
    }
}
Role Schema
{
    :type => :object,
    :properties => {
        'name' => {
            :type => :string,
            :required => true
        },
        'cardinality' => {
            :type => :integer,
            :default => 1,
            :minimum => 0
        },
        'vm_template' => {
            :type => :integer,
            :required => true
        },
        'parents' => {
            :type => :array,
            :items => {
                :type => :string
            }
        },
        'shutdown_action' => {
            :type => :string,
            :enum => ['shutdown', 'shutdown-hard'],
            :required => false
        },
        'min_vms' => {
            :type => :integer,
            :required => false,
            :minimum => 0
        },
        'max_vms' => {
            :type => :integer,
            :required => false,
            :minimum => 0
        },
        'cooldown' => {
            :type => :integer,
            :required => false,
            :minimum => 0
        },
        'elasticity_policies' => {
            :type => :array,
            :items => {
                :type => :object,
                :properties => {
                    'type' => {
                        :type => :string,
                        :enum => ['CHANGE', 'CARDINALITY', 'PERCENTAGE_CHANGE'],
                        :required => true
                    },
                    'adjust' => {
                        :type => :integer,
                        :required => true
                    },
                    'min_adjust_step' => {
                        :type => :integer,
                        :required => false,
                        :minimum => 1
                    },
                    'period_number' => {
                        :type => :integer,
                        :required => false,
                        :minimum => 0
                    },
                    'period' => {
                        :type => :integer,
                        :required => false,
                        :minimum => 0
                    },
                    'expression' => {
                        :type => :string,
                        :required => true
                    },
                    'cooldown' => {
                        :type => :integer,
                        :required => false,
                        :minimum => 0
                    }
                }
            }
        },
        'scheduled_policies' => {
            :type => :array,
            :items => {
                :type => :object,
                :properties => {
                    'type' => {
                        :type => :string,
                        :enum => ['CHANGE', 'CARDINALITY', 'PERCENTAGE_CHANGE'],
                        :required => true
                    },
                    'adjust' => {
                        :type => :integer,
                        :required => true
                    },
                    'min_adjust_step' => {
                        :type => :integer,
                        :required => false,
                        :minimum => 1
                    },
                    'start_time' => {
                        :type => :string,
                        :required => false
                    },
                    'recurrence' => {
                        :type => :string,
                        :required => false
                    }
                }
            }
        }
    }
}
Action Schema
{
    :type => :object,
    :properties => {
        'action' => {
            :type => :object,
            :properties => {
                'perform' => {
                    :type => :string,
                    :required => true
                },
                'params' => {
                    :type => :object,
                    :required => false
                }
            }
        }
    }
}
3.4.6 Examples
Create a New Service Template
POST /service_template : Create a new SERVICE_TEMPLATE resource. Response: 201 Created, a JSON representation of the new SERVICE_TEMPLATE resource in the http body.
curl http://127.0.0.1:2474/service_template -u oneadmin:password -v --data '{
"name":"web-application",
"deployment":"straight",
"roles":[
{
"name":"frontend",
"cardinality":"1",
"vm_template":"0",
"shutdown_action":"shutdown",
"min_vms":"1",
"max_vms":"4",
"cooldown":"30",
"elasticity_policies":[
{
"type":"PERCENTAGE_CHANGE",
"adjust":"20",
"min_adjust_step":"1",
"expression":"CUSTOM_ATT>40",
"period":"3",
"period_number":"30",
"cooldown":"30"
}
],
"scheduled_policies":[
{
"type":"CHANGE",
"adjust":"4",
"recurrence":"0 2 1-10
* *
"
}
]
},
{
"name":"worker",
"cardinality":"2",
"vm_template":"0",
"shutdown_action":"shutdown",
"parents":[
"frontend"
],
"min_vms":"2",
"max_vms":"10",
"cooldown":"240",
"elasticity_policies":[
{
"type":"CHANGE",
"adjust":"5",
"expression":"ATT=3",
"period":"5",
"period_number":"60",
"cooldown":"240"
}
],
"scheduled_policies":[
]
}
],
"shutdown_action":"shutdown"
}'
> POST /service_template HTTP/1.1
> Authorization: Basic b25lYWRtaW46b23lbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: oneflow.server:2474
> Accept: */*
> Content-Length: 771
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 1990
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
<
{
"DOCUMENT": {
"TEMPLATE": {
"BODY": {
"deployment": "straight",
"name": "web-application",
"roles": [
{
"scheduled_policies": [
{
"adjust": 4,
"type": "CHANGE",
"recurrence": "0 2 1-10
* *
"
}
],
"vm_template": 0,
"name": "frontend",
"min_vms": 1,
"max_vms": 4,
"cardinality": 1,
"cooldown": 30,
"shutdown_action": "shutdown",
"elasticity_policies": [
{
"expression": "CUSTOM_ATT>40",
"adjust": 20,
"min_adjust_step": 1,
"cooldown": 30,
"period": 3,
"period_number": 30,
"type": "PERCENTAGE_CHANGE"
}
]
},
{
"scheduled_policies": [
],
"vm_template": 0,
"name": "worker",
"min_vms": 2,
"max_vms": 10,
"cardinality": 2,
"parents": [
"frontend"
],
"cooldown": 240,
"shutdown_action": "shutdown",
"elasticity_policies": [
{
"expression": "ATT=3",
"adjust": 5,
"cooldown": 240,
"period": 5,
"period_number": 60,
"type": "CHANGE"
}
]
}
],
"shutdown_action": "shutdown"
}
},
"TYPE": "101",
"GNAME": "oneadmin",
"NAME": "web-application",
"GID": "0",
"ID": "4",
"UNAME": "oneadmin",
"PERMISSIONS": {
"OWNER_A": "0",
"OWNER_M": "1",
"OWNER_U": "1",
"OTHER_A": "0",
"OTHER_M": "0",
"OTHER_U": "0",
"GROUP_A": "0",
"GROUP_M": "0",
"GROUP_U": "0"
},
"UID": "0"
}
}
Get Detailed Information of a Given Service Template
GET /service_template/<id> : Show the SERVICE_TEMPLATE resource identified by <id>. Response: 200 OK, a JSON representation of the resource in the http body.
curl -u oneadmin:opennebula http://127.0.0.1:2474/service_template/4 -v
> GET /service_template/4 HTTP/1.1
> Authorization: Basic b25lYWRtaW46b3Blbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:2474
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 1990
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
<
{
"DOCUMENT": {
"TEMPLATE": {
"BODY": {
"deployment": "straight",
"name": "web-application",
"roles": [
{
"scheduled_policies": [
{
"adjust": 4,
"type": "CHANGE",
"recurrence": "0 2 1-10
* *
"
}
],
"vm_template": 0,
...
List the Available Service Templates
GET /service_template : List the contents of the SERVICE_TEMPLATE collection. Response: 200 OK, a JSON representation of the collection in the http body.
curl -u oneadmin:opennebula http://127.0.0.1:2474/service_template -v
> GET /service_template HTTP/1.1
> Authorization: Basic b25lYWRtaW46b3Blbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:2474
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 6929
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
<
{
"DOCUMENT_POOL": {
"DOCUMENT": [
{
"TEMPLATE": {
"BODY": {
"deployment": "straight",
"name": "web-server",
"roles": [
{
"scheduled_policies": [
{
"adjust": 4,
"type": "CHANGE",
"recurrence": "0 2 1-10
* *
"
}
],
"vm_template": 0,
"name": "frontend",
"min_vms": 1,
"max_vms": 4,
"cardinality": 1,
"cooldown": 30,
"shutdown_action": "shutdown",
"elasticity_policies": [
{
...
Update a Given Template
PUT /service_template/<id> : Update the SERVICE_TEMPLATE resource identified by <id>. Response: 200 OK.
curl http://127.0.0.1:2474/service_template/4 -u oneadmin:opennebula -v -X PUT --data '{
"name":"web-application",
"deployment":"straight",
"roles":[
{
"name":"frontend",
"cardinality":"1",
"vm_template":"0",
"shutdown_action":"shutdown-hard",
"min_vms":"1",
"max_vms":"4",
"cooldown":"30",
"elasticity_policies":[
{
"type":"PERCENTAGE_CHANGE",
"adjust":"20",
"min_adjust_step":"1",
"expression":"CUSTOM_ATT>40",
"period":"3",
"period_number":"30",
"cooldown":"30"
}
],
"scheduled_policies":[
{
"type":"CHANGE",
"adjust":"4",
"recurrence":"0 2 1-10
* *
"
}
]
},
{
"name":"worker",
"cardinality":"2",
"vm_template":"0",
"shutdown_action":"shutdown",
"parents":[
"frontend"
],
"min_vms":"2",
"max_vms":"10",
"cooldown":"240",
"elasticity_policies":[
{
"type":"CHANGE",
"adjust":"5",
"expression":"ATT=3",
"period":"5",
"period_number":"60",
"cooldown":"240"
}
],
"scheduled_policies":[
]
}
],
"shutdown_action":"shutdown"
}'
> PUT /service_template/4 HTTP/1.1
> Authorization: Basic b25lYWRtaW46b3Blbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:2474
> Accept: */*
> Content-Length: 1219
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
>
* Done waiting for 100-continue
< HTTP/1.1 200 OK
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 1995
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
<
{
"DOCUMENT": {
"TEMPLATE": {
"BODY": {
"deployment": "straight",
"name": "web-application",
"roles": [
{
"scheduled_policies": [
{
"adjust": 4,
"type": "CHANGE",
"recurrence": "0 2 1-10
* *
"
}
],
"vm_template": 0,
"name": "frontend",
"min_vms": 1,
"max_vms": 4,
"cardinality": 1,
"cooldown": 30,
"shutdown_action": "shutdown-hard",
...
Instantiate a Given Template
POST /service_template/<id>/action : Perform an action on the SERVICE_TEMPLATE resource identified by <id>. Available actions: instantiate, chown, chgrp, chmod. Response: 201.
Available actions:
instantiate
chown
chmod
chgrp
curl http://127.0.0.1:2474/service_template/4/action -u oneadmin:opennebula -v -X POST --data '{
"action": {
"perform":"instantiate"
}
}'
> POST /service_template/4/action HTTP/1.1
> Authorization: Basic b25lYWRtaW46b3Blbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:2474
> Accept: */*
> Content-Length: 49
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 2015
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
<
{
"DOCUMENT": {
"TEMPLATE": {
"BODY": {
"deployment": "straight",
"name": "web-application",
"roles": [
{
"scheduled_policies": [
{
"adjust": 4,
"type": "CHANGE",
"recurrence": "0 2 1-10
* *
"
}
],
"vm_template": 0,
Delete a Given Template
DELETE /service_template/<id> : Delete the SERVICE_TEMPLATE resource identified by <id>. Response: 201.
curl http://127.0.0.1:2474/service_template/4 -u oneadmin:opennebula -v -X DELETE
> DELETE /service_template/4 HTTP/1.1
> Authorization: Basic b25lYWRtaW46b3Blbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:2474
> Accept: */*
>
< HTTP/1.1 201 Created
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 0
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
Get Detailed Information of a Given Service
GET /service/<id> : Show the SERVICE resource identified by <id>. Response: 200 OK, a JSON representation of the resource in the http body.
curl http://127.0.0.1:2474/service/5 -u oneadmin:opennebula -v
> GET /service/5 HTTP/1.1
> Authorization: Basic b25lYWRtaW46b3Blbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:2474
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 11092
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
<
{
"DOCUMENT": {
"TEMPLATE": {
"BODY": {
"deployment": "straight",
"name": "web-application",
"roles": [
{
"scheduled_policies": [
{
"adjust": 4,
"last_eval": 1374676803,
"type": "CHANGE",
"recurrence": "0 2 1-10
* *
"
}
],
"vm_template": 0,
"disposed_nodes": [
],
"name": "frontend",
"min_vms": 1,
"nodes": [
{
"deploy_id": 12,
"vm_info": {
"VM": {
"CPU": "33",
"TEMPLATE": {
"CPU": "1",
"CONTEXT": {
"TARGET": "hda",
"NETWORK": "YES",
"DISK_ID": "0"
},
"MEMORY": "1024",
"TEMPLATE_ID": "0",
"VMID": "12"
},
"GNAME": "oneadmin",
"RESCHED": "0",
"NET_RX": "1300",
"NAME": "frontend_0_(service_5)",
"ETIME": "0",
"USER_TEMPLATE": {
"SERVICE_ID": "5",
"ROLE_NAME": "frontend"
},
"GID": "0",
"LAST_POLL": "1374676793",
"MEMORY": "786432",
"HISTORY_RECORDS": {
"HISTORY": {
"RETIME": "0",
"TMMAD": "dummy",
"DS_LOCATION": "/var/tmp/one_install/var//datastores",
"SEQ": "0",
"VNMMAD": "dummy",
"ETIME": "0",
"PETIME": "1374676347",
"HOSTNAME": "vmx_dummy",
"VMMMAD": "dummy",
"ESTIME": "0",
"HID": "2",
"EETIME": "0",
"OID": "12",
"STIME": "1374676347",
"DS_ID": "0",
"ACTION": "0",
"RSTIME": "1374676347",
"REASON": "0",
"PSTIME": "1374676347"
}
},
"ID": "12",
"DEPLOY_ID": "vmx_dummy:frontend_0_(service_5):dummy",
"NET_TX": "800",
"UNAME": "oneadmin",
"LCM_STATE": "3",
"STIME": "1374676345",
"UID": "0",
"PERMISSIONS": {
"OWNER_U": "1",
"OWNER_M": "1",
"OWNER_A": "0",
"GROUP_U": "0",
"GROUP_M": "0",
"GROUP_A": "0",
"OTHER_U": "0",
"OTHER_M": "0",
"OTHER_A": "0"
},
"STATE": "3"
}
}
}
],
"last_vmname": 1,
"max_vms": 4,
"cardinality": 1,
"cooldown": 30,
"shutdown_action": "shutdown-hard",
"state": "2",
"elasticity_policies": [
{
"expression": "CUSTOM_ATT>40",
"true_evals": 0,
"adjust": 20,
"min_adjust_step": 1,
"last_eval": 1374676803,
"cooldown": 30,
"expression_evaluated": "CUSTOM_ATT[--] > 40",
"period": 3,
"period_number": 30,
"type": "PERCENTAGE_CHANGE"
}
]
},
{
"scheduled_policies": [
],
"vm_template": 0,
"disposed_nodes": [
],
"name": "worker",
"min_vms": 2,
"nodes": [
{
"deploy_id": 13,
"vm_info": {
"VM": {
"CPU": "9",
"TEMPLATE": {
"CPU": "1",
"CONTEXT": {
"TARGET": "hda",
"NETWORK": "YES",
"DISK_ID": "0"
},
"MEMORY": "1024",
"TEMPLATE_ID": "0",
"VMID": "13"
},
"GNAME": "oneadmin",
"RESCHED": "0",
"NET_RX": "1600",
"NAME": "worker_0_(service_5)",
"ETIME": "0",
"USER_TEMPLATE": {
"SERVICE_ID": "5",
"ROLE_NAME": "worker"
},
"GID": "0",
"LAST_POLL": "1374676783",
"MEMORY": "545259",
"HISTORY_RECORDS": {
"HISTORY": {
"RETIME": "0",
"TMMAD": "dummy",
"DS_LOCATION": "/var/tmp/one_install/var//datastores",
"SEQ": "0",
"VNMMAD": "dummy",
"ETIME": "0",
"PETIME": "1374676377",
"HOSTNAME": "xen_dummy",
"VMMMAD": "dummy",
"ESTIME": "0",
"HID": "1",
"EETIME": "0",
"OID": "13",
"STIME": "1374676377",
"DS_ID": "0",
"ACTION": "0",
"RSTIME": "1374676377",
"REASON": "0",
"PSTIME": "1374676377"
}
},
"ID": "13",
"DEPLOY_ID": "xen_dummy:worker_0_(service_5):dummy",
"NET_TX": "600",
"UNAME": "oneadmin",
"LCM_STATE": "3",
"STIME": "1374676375",
"UID": "0",
"PERMISSIONS": {
"OWNER_U": "1",
"OWNER_M": "1",
"OWNER_A": "0",
"GROUP_U": "0",
"GROUP_M": "0",
"GROUP_A": "0",
"OTHER_U": "0",
"OTHER_M": "0",
"OTHER_A": "0"
},
"STATE": "3"
}
}
},
{
"deploy_id": 14,
"vm_info": {
"VM": {
"CPU": "75",
"TEMPLATE": {
"CPU": "1",
"CONTEXT": {
"TARGET": "hda",
"NETWORK": "YES",
"DISK_ID": "0"
},
"MEMORY": "1024",
"TEMPLATE_ID": "0",
"VMID": "14"
},
"GNAME": "oneadmin",
"RESCHED": "0",
"NET_RX": "1100",
"NAME": "worker_1_(service_5)",
"ETIME": "0",
"USER_TEMPLATE": {
"SERVICE_ID": "5",
"ROLE_NAME": "worker"
},
"GID": "0",
"LAST_POLL": "1374676783",
"MEMORY": "471859",
"HISTORY_RECORDS": {
"HISTORY": {
"RETIME": "0",
"TMMAD": "dummy",
"DS_LOCATION": "/var/tmp/one_install/var//datastores",
"SEQ": "0",
"VNMMAD": "dummy",
"ETIME": "0",
"PETIME": "1374676378",
"HOSTNAME": "kvm_dummy",
"VMMMAD": "dummy",
"ESTIME": "0",
"HID": "0",
"EETIME": "0",
"OID": "14",
"STIME": "1374676378",
"DS_ID": "0",
"ACTION": "0",
"RSTIME": "1374676378",
"REASON": "0",
"PSTIME": "1374676378"
}
},
"ID": "14",
"DEPLOY_ID": "kvm_dummy:worker_1_(service_5):dummy",
"NET_TX": "550",
"UNAME": "oneadmin",
"LCM_STATE": "3",
"STIME": "1374676375",
"UID": "0",
"PERMISSIONS": {
"OWNER_U": "1",
"OWNER_M": "1",
"OWNER_A": "0",
"GROUP_U": "0",
"GROUP_M": "0",
"GROUP_A": "0",
"OTHER_U": "0",
"OTHER_M": "0",
"OTHER_A": "0"
},
"STATE": "3"
}
}
}
],
"last_vmname": 2,
"max_vms": 10,
"cardinality": 2,
"parents": [
"frontend"
],
"cooldown": 240,
"shutdown_action": "shutdown",
"state": "2",
"elasticity_policies": [
{
"expression": "ATT=3",
"true_evals": 0,
"adjust": 5,
"last_eval": 1374676803,
"cooldown": 240,
"expression_evaluated": "ATT[--] = 3",
"period": 5,
"period_number": 60,
"type": "CHANGE"
}
]
}
],
"log": [
{
"message": "New state: DEPLOYING",
"severity": "I",
"timestamp": 1374676345
},
{
"message": "New state: RUNNING",
"severity": "I",
"timestamp": 1374676406
}
],
"shutdown_action": "shutdown",
"state": 2
}
},
"TYPE": "100",
"GNAME": "oneadmin",
"NAME": "web-application",
"GID": "0",
"ID": "5",
"UNAME": "oneadmin",
"PERMISSIONS": {
"OWNER_A": "0",
"OWNER_M": "1",
"OWNER_U": "1",
"OTHER_A": "0",
"OTHER_M": "0",
"OTHER_U": "0",
"GROUP_A": "0",
"GROUP_M": "0",
"GROUP_U": "0"
},
"UID": "0"
}
List the Available Services
GET /service : List the contents of the SERVICE collection. Response: 200 OK, a JSON representation of the collection in the http body.
curl http://127.0.0.1:2474/service -u oneadmin:opennebula -v
> GET /service HTTP/1.1
> Authorization: Basic b25lYWRtaW46b3Blbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:2474
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 12456
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
<
{
"DOCUMENT_POOL": {
"DOCUMENT": [
{
"TEMPLATE": {
"BODY": {
"deployment": "straight",
"name": "web-application",
"roles": [
{
"scheduled_policies": [
{
"adjust": 4,
"last_eval": 1374676986,
"type": "CHANGE",
"recurrence": "0 2 1-10
* *
"
}
],
...
Perform an Action on a Given Service
POST /service/<id>/action : Perform an action on the SERVICE resource identified by <id>. Response: 201.
Available actions:
shutdown: Shutdown a service.
From RUNNING or WARNING shuts down the Service
recover: Recover a failed service, cleaning the failed VMs.
From FAILED_DEPLOYING continues deploying the Service
From FAILED_SCALING continues scaling the Service
From FAILED_UNDEPLOYING continues shutting down the Service
From COOLDOWN the Service is set to running ignoring the cooldown duration
From WARNING failed VMs are deleted, and new VMs are instantiated
chown
chmod
chgrp
curl http://127.0.0.1:2474/service/5/action -u oneadmin:opennebula -v -X POST --data '{
"action": {
"perform":"shutdown"
}
}'
curl http://127.0.0.1:2474/service/5/action -u oneadmin:opennebula -v -X POST --data '{
"action": {
"perform":"chgrp",
"params" : {
"group_id" : 2
}
}
}'
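For completeness, the same kind of action request can be built programmatically. A minimal Ruby sketch equivalent to the shutdown call above, reusing the endpoint and credentials from these examples:
require 'net/http'
require 'uri'
require 'json'

uri = URI('http://127.0.0.1:2474/service/5/action')

request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.basic_auth('oneadmin', 'opennebula')
request.body = JSON.generate('action' => { 'perform' => 'shutdown' })

response = Net::HTTP.start(uri.host, uri.port) do |http|
  http.request(request)
end

# A 201 response means the action was accepted.
puts "#{response.code} #{response.message}"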
Update the Cardinality of a Given Role
PUT /service/<id>/role/<name> : Update the ROLE identified by <name> of the SERVICE resource identified by <id>. Currently the only attribute that can be updated is the cardinality. Response: 200 OK.
You can force a cardinality outside the defined range with the force param.
curl http://127.0.0.1:2474/service/5/role/frontend -u oneadmin:opennebula -X PUT -v --data '{
"cardinality" : 2,
"force" : true
}'
> PUT /service/5/role/frontend HTTP/1.1
> Authorization: Basic b25lYWRtaW46b3Blbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:2474
> Accept: */*
> Content-Length: 41
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 200 OK
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 0
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
Perform an Action on All the VMs of a Given Role
POST /service/<id>/role/<name>/action : Perform an action on all the Virtual Machines belonging to the ROLE identified by <name> of the SERVICE resource identified by <id>. Response: 201.
You can use this call to perform a VM action on all the Virtual Machines belonging to a role. For example, if you want
to suspend the Virtual Machines of the worker Role:
These are the commands that can be performed:
shutdown
shutdown-hard
undeploy
undeploy-hard
hold
release
stop
suspend
resume
boot
delete
delete-recreate
reboot
reboot-hard
poweroff
poweroff-hard
snapshot-create
Instead of performing the action immediately on all the VMs, you can perform it on small groups of VMs with these
options:
period: Seconds between each group of actions
number: Number of VMs to apply the action to each period
curl http://127.0.0.1:2474/service/5/role/frontend/action -u oneadmin:opennebula -v -X POST --data '{
"action": {
"perform":"stop",
"params" : {
"period" : 60,
"number" : 2
}
}
}'
> POST /service/5/role/frontend/action HTTP/1.1
> Authorization: Basic b25lYWRtaW46b3Blbm5lYnVsYQ==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:2474
> Accept: */*
> Content-Length: 106
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
< Content-Type: text/html;charset=utf-8
< X-XSS-Protection: 1; mode=block
< Content-Length: 57
< X-Frame-Options: sameorigin
< Connection: keep-alive
< Server: thin 1.2.8 codename Black Keys
CHAPTER FOUR: INFRASTRUCTURE INTEGRATION
4.1 Using Hooks
The Hook Manager present in OpenNebula enables the triggering of custom scripts tied to a change of state in a particular resource, be that a Host or a Virtual Machine. This opens a wide area of automation for system administrators to tailor their cloud infrastructures.
4.1.1 Configuration
Hook Manager configuration is set in /etc/one/oned.conf. Hooks can be tied to changes in host or virtual machine states, and they can be executed locally on the OpenNebula front-end or remotely on the relevant worker node.

In general, a hook definition in /etc/one/oned.conf has two parameters:
executable: path of the hook driver executable, can be an absolute path or relative to /usr/lib/one/mads
arguments: for the driver executable, can be an absolute path or relative to /etc/one/
4.1.2 Hooks for Virtual Machines
In the case of Virtual Machine hooks, the following can be defined:
name : for the hook, useful to track the hook (OPTIONAL)
on : when the hook should be executed,
CREATE, when the VM is created (onevm create)
RUNNING, after the VM is successfully booted
SHUTDOWN, after the VM is shut down
STOP, after the VM is stopped (including VM image transfers)
DONE, after the VM is destroyed or shutdown
UNKNOWN, when the VM enters the unknown state
FAILED, when the VM enters the failed state
CUSTOM, a user-defined combination of STATE and LCM_STATE that triggers the hook.
command : path can be absolute or relative to /var/lib/one/remotes/hooks
arguments : for the hook. You can access the following VM attributes with $
$ID, the ID of the VM that triggered the hook execution
$TEMPLATE, the template of the VM that triggered the hook, in xml and base64 encoded
$PREV_STATE, the previous STATE of the Virtual Machine
$PREV_LCM_STATE, the previous LCM STATE of the Virtual Machine
remote : values,
YES, The hook is executed in the host where the VM was allocated
NO, The hook is executed in the OpenNebula server (default)
The following is an example of a hook tied to the DONE state of a VM:
VM_HOOK = [
name = "notify_done",
on = "DONE",
command = "notify.rb",
arguments = "$ID $TEMPLATE" ]
Or a more advanced example:
VM_HOOK = [
name = "advanced_hook",
on = "CUSTOM",
state = "ACTIVE",
lcm_state = "BOOT_UNKNOWN",
command = "log.rb",
arguments = "$ID $PREV_STATE $PREV_LCM_STATE" ]
4.1.3 Hooks for Hosts
In the case of Host hooks, the following can be defined:
name : for the hook, useful to track the hook (OPTIONAL)
on : when the hook should be executed,
CREATE, when the Host is created (onehost create)
ERROR, when the Host enters the error state
DISABLE, when the Host is disabled
command : path can be absolute or relative to /var/lib/one/remotes/hooks
arguments : for the hook. You can use the following Host attributes with $
$ID, the ID of the Host that triggered the hook execution
$TEMPLATE, the full Host information, in xml and base64 encoded
remote : values,
YES, The hook is executed in the host
NO, The hook is executed in the OpenNebula server (default)
The following is an example of a hook tied to the ERROR state of a Host:
#-------------------------------- Host Hook -----------------------------------
# This hook is used to perform recovery actions when a host fails.
# Script to implement host failure tolerance
# It can be set to
# -r recreate VMs running in the host
# -d delete VMs running in the host
# Additional flags
# -f force resubmission of suspended VMs
# -p <n> avoid resubmission if host comes
# back after n monitoring cycles
#------------------------------------------------------------------------------
#
#HOST_HOOK = [
# name = "error",
# on = "ERROR",
# command = "ft/host_error.rb",
# arguments = "$ID -r",
# remote = "no" ]
#-------------------------------------------------------------------------------
4.1.4 Other Hooks
Other OpenNebula entities like Virtual Networks, Users, Groups and Images can be hooked on creation and removal.
These hooks are specied with the following variables in oned.conf:
VNET_HOOK, for virtual networks
USER_HOOK, for users
GROUP_HOOK, for groups
IMAGE_HOOK, for disk images.
These hooks are always executed on the front-end and are defined by the following attributes:
name : for the hook, useful to track the hook (OPTIONAL)
on : when the hook should be executed,
CREATE, when the object (virtual network, user, group or image) is created
REMOVE, when the object is removed from the DB
command : path can be absolute or relative to /var/lib/one/remotes/hooks
arguments : for the hook. You can use the following attributes with $
$ID, the ID of the object that triggered the hook execution
$TEMPLATE, the full object information, in XML and base64 encoded
The following is an example of a hook that sends an email to a newly registered user:
USER_HOOK = [
name = "mail",
on = "CREATE",
command = "email2user.rb",
arguments = "$ID $TEMPLATE"]
4.1.5 Developing your Hooks
The execution of each hook is tied to the object that triggers the event. The data of the object can be passed to the hook through the $ID and the $TEMPLATE variables:

$TEMPLATE will give you the full output of the corresponding show command in XML and base64 encoding. This can easily be dealt with in any language. If you are using bash for your scripts you may be interested in the xpath.rb util; check the following example:
#!/bin/bash
# Argument hook for virtual network add to oned.conf
# VNET_HOOK = [
# name="bash_arguments",
# on="CREATE",
# command=<path_to_this_file>,
# arguments="$TEMPLATE" ]
XPATH=/var/lib/one/remotes/datastore/xpath.rb
T64=$1
USER_NAME=$($XPATH -b $T64 UNAME)
OWNER_USE_PERMISSION=$($XPATH -b $T64 PERMISSIONS/OWNER_U)
# UNAME and PERMISSIONS/OWNER_U are the XPaths for the attributes, without the root element
$ID: you can use the ID of the object to retrieve more information or to perform an action over the object (e.g. onevm hold $ID).

Note that within the hook you can further interact with OpenNebula to retrieve more information, or perform any other action.
4.2 Virtualization Driver
The component that deals with the hypervisor to create, manage and get information about virtual machine objects is called the Virtual Machine Manager (VMM for short). This component has two parts: the first resides in the core and holds most of the general functionality common to all the drivers (and some specifics); the second is the driver itself, which translates basic VMM actions to the hypervisor.
4.2.1 Driver Configuration
There are two main drivers: one_vmm_exec and one_vmm_sh. Both take commands from OpenNebula and execute a set of scripts for those actions; the main difference is that one_vmm_exec executes the scripts remotely (logging into the host where the VM is or will be running), while one_vmm_sh executes them on the frontend.
The driver takes some parameters, described here:
parameter            description
-r <num>             number of retries when executing an action
-t <num>             number of threads, i.e. number of actions done at the same time
-l <actions>         (one_vmm_exec only) actions executed locally; the command can be overridden for each action
<driver_directory>   where in the remotes directory the driver can find the action scripts
These are the actions valid in the -l parameter:
attach_disk
attach_nic
cancel
deploy
detach_disk
detach_nic
kvmrc
migrate
migrate_local
poll
reboot
reset
restore
save
shutdown
snapshot_create
snapshot_delete
snapshot_revert
You can also provide an alternative script name for local execution; by default the script is named after the action, using the form action=script_name. As an example:

-l migrate,poll=poll_ganglia,save

These arguments are specified in the arguments variable of the oned.conf file:
VM_MAD = [
name = "kvm",
executable = "one_vmm_exec",
arguments = "-t 15 -r 0 -l migrate,save kvm",
default = "vmm_exec/vmm_exec_kvm.conf",
type = "kvm" ]
4.2.2 Actions
Every action should have an executable program (mainly scripts) located in the remotes dir (remotes/vmm/<driver_directory>) that performs the desired action. These scripts receive some parameters (and, in the case of DEPLOY, also STDIN) and report errors or information by writing to STDOUT.

The VMM actions (which match the script names) are:
attach_disk: Attaches a new DISK in the VM
Arguments:
* DOMAIN: Domain name: one-101
* SOURCE: Image path
* TARGET: Device in the guest: hda, sdc, vda, xvdc
* TARGET_INDEX: Position in the list of disks
* DRV_ACTION: action XML. Base: /VMM_DRIVER_ACTION_DATA/VM/TEMPLATE/DISK[ATTACH=YES]
  - DRIVER: Disk format: raw, qcow2
  - TYPE: Disk type: block, cdrom, rbd, fs or swap
  - READONLY: The value is YES when it is read only
  - CACHE: Cache mode: none, writethrough, writeback
  - SOURCE: Image source, used for Ceph
Response:
* Success: -
* Failure: Error message
attach_nic: Attaches a new NIC in the VM
Arguments:
* DOMAIN: Domain name: one-808
* MAC: MAC address of the new NIC
* BRIDGE: Bridge where to attach the new NIC
* MODEL: NIC model to emulate, ex: e1000
* NET_DRV: Network driver used, ex: ovswitch
Response:
* Success: -
* Failure: Error message
cancel: Destroy a VM
Arguments:
* DOMAIN: Domain name: one-909
Response:
* Success: -
* Failure: Error message
deploy: Deploy a new VM (see the sketch at the end of this section)
Arguments:
* DEPLOYMENT_FILE: where to write the deployment file. You have to write whatever comes from STDIN to a file named like this parameter. In shell script you can do: cat > $domain
Response:
* Success: Deploy id, ex: one-303
* Failure: Error message
detach_disk: Detaches a DISK from a VM
Arguments:
* DOMAIN: Domain name: one-286
* SOURCE: Image path
* TARGET: Device in the guest: hda, sdc, vda, xvdc
* TARGET_INDEX: Position in the list of disks
Response:
* Success: -
* Failure: Error message
detach_nic: Detaches a NIC from a VM
Arguments:
* DOMAIN: Domain name: one-286
* MAC: MAC address of the NIC to detach
Response:
* Success: -
* Failure: Error message
migrate: Live migrate a VM to another host
Arguments:
* DOMAIN: Domain name: one-286
* DESTINATION_HOST: Host where to migrate the VM
* HOST: Host where the VM is currently running
Response:
* Success: -
* Failure: Error message
poll: Get information from a VM
Arguments:
* DOMAIN: Domain name: one-286
* HOST: Host where the VM is running
Response:
* Success: -
* Failure: Error message
reboot: Orderly reboots a VM
Arguments:
* DOMAIN: Domain name: one-286
* HOST: Host where the VM is running
Response:
* Success: -
* Failure: Error message
reset: Hard reboots a VM
Arguments:
* DOMAIN: Domain name: one-286
* HOST: Host where the VM is running
Response:
* Success: -
* Failure: Error message
restore: Restores a previously saved VM
Arguments:
* FILE: VM save file
* HOST: Host where to restore the VM
Response:
* Success: -
* Failure: Error message
save: Saves a VM
Arguments:
* DOMAIN: Domain name: one-286
* FILE: VM save file
* HOST: Host where the VM is running
Response:
* Success: -
* Failure: Error message
shutdown: Orderly shuts down a VM
Arguments:
* DOMAIN: Domain name: one-286
* HOST: Host where the VM is running
Response:
* Success: -
* Failure: Error message
snapshot_create: Makes a new snapshot of a VM
Arguments:
* DOMAIN: Domain name: one-286
* ONE_SNAPSHOT_ID: OpenNebula snapshot identifier
Response:
* Success: Snapshot name for the hypervisor. Used later to delete or revert
* Failure: Error message
snapshot_delete: Deletes a snapshot of a VM
Arguments:
* DOMAIN: Domain name: one-286
* SNAPSHOT_NAME: Name used to refer to the snapshot in the hypervisor
Response:
* Success: -
* Failure: Error message
snapshot_revert: Returns a VM to a saved state
Arguments:
* DOMAIN: Domain name: one-286
* SNAPSHOT_NAME: Name used to refer to the snapshot in the hypervisor
Response:
* Success: -
* Failure: Error message
The action XML parameter is a base64-encoded XML document that holds information about the VM. To get one of the values explained in the documentation, for example READONLY from attach_disk, you append the name of the parameter to the base XPath:

/VMM_DRIVER_ACTION_DATA/VM/TEMPLATE/DISK[ATTACH=YES]/READONLY

When using shell scripts there is a handy utility that extracts the values for given XPath expressions from that XML. Example:
XPATH="${DRIVER_PATH}/../../datastore/xpath.rb -b $DRV_ACTION"
unset i j XPATH_ELEMENTS
DISK_XPATH="/VMM_DRIVER_ACTION_DATA/VM/TEMPLATE/DISK[ATTACH=YES]"
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH $DISK_XPATH/DRIVER \
$DISK_XPATH/TYPE \
$DISK_XPATH/READONLY \
$DISK_XPATH/CACHE \
$DISK_XPATH/SOURCE)
DRIVER="${XPATH_ELEMENTS[j++]:-$DEFAULT_TYPE}"
TYPE="${XPATH_ELEMENTS[j++]}"
READONLY="${XPATH_ELEMENTS[j++]}"
CACHE="${XPATH_ELEMENTS[j++]}"
IMG_SRC="${XPATH_ELEMENTS[j++]}"
one_vmm_sh has the same script actions and meanings, but takes one extra argument: the host where the action is going to be performed. This argument is always the first one. Likewise, if you use the -p parameter in one_vmm_ssh, the poll action script is called with one extra argument: the host where the VM resides.
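For illustration, a minimal deploy action could look like the following sketch; it assumes a libvirt/KVM host and omits the error handling and helper functions a real driver would use:

#!/bin/bash
# Sketch of a deploy action script. $1 is the path where the deployment file
# must be written; the file contents arrive on STDIN.
domain=$1

mkdir -p $(dirname $domain)
cat > $domain                          # persist what OpenNebula sends on STDIN

out=$(virsh --connect qemu:///system create $domain 2>&1)

if [ $? -ne 0 ]; then
    echo "Could not create domain from $domain: $out" >&2
    exit 1
fi

# virsh prints "Domain one-303 created from ...": extract and return the id
echo "$out" | sed -e 's/Domain \(.*\) created.*/\1/'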
4.2.3 Poll Information
POLL is the action that gets monitoring info from the running VMs. It is expected to return a single line of KEY=VALUE pairs separated by spaces, like this:

STATE=a USEDMEMORY=554632

The poll action can return any information, and it will be added to the VM information held by OpenNebula; but there are some variables that should always be returned, as they are meaningful to OpenNebula:
Variable Description
STATE State of the VM (explained later)
USEDCPU Percentage of CPU consumed (two fully consumed CPUs give 200)
USEDMEMORY Memory consumption in kilobytes
NETRX Received bytes from the network
NETTX Sent bytes to the network
STATE is a single character that tells OpenNebula the status of the VM; the states are the ones in this table:
state description
N/A Detecting state error. The monitoring could be done, but it returned an unexpected output.
a Active. The VM is alive, but not necessarily running. It could be blocked, booting, etc.
p Paused. Self-explanatory.
e Error. The VM crashed or somehow its deployment failed.
d Disappeared. The VM is not known by the hypervisor anymore.
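For illustration, a poll sketch for a libvirt/KVM host could map the hypervisor state to these codes as follows (a simplified sketch; the poll scripts shipped with OpenNebula are more thorough):

#!/bin/bash
# Poll sketch: maps the libvirt domain state to the single-character codes
# above and reports memory usage. $1 = domain name, e.g. one-286.
domain=$1

info=$(virsh --connect qemu:///system dominfo $domain 2>/dev/null)

if [ -z "$info" ]; then
    echo "STATE=d"                     # unknown to the hypervisor: disappeared
    exit 0
fi

case $(echo "$info" | awk '/^State:/ {print $2}') in
    running|blocked|idle) state=a ;;
    paused)               state=p ;;
    crashed)              state=e ;;
    *)                    state=d ;;   # e.g. "shut off"
esac

mem=$(echo "$info" | awk '/^Used memory:/ {print $3}')

echo "STATE=$state USEDMEMORY=$mem"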
4.2.4 Deployment File
The deployment file is a text file written by the OpenNebula core that holds the information of a VM. It is used when deploying a new VM. OpenNebula is able to generate three formats of deployment files:

xen: deployment file suitable to be used with xen tools
kvm: libvirt format used to create kvm VMs
xml: xml representation of the VM

If the target hypervisor is neither xen nor libvirt/kvm, the best format to use is xml, as it holds more information than the other two. It has all the template information encoded as XML. This is an example:
<TEMPLATE>
<CPU><![CDATA[1.0]]></CPU>
<DISK>
<DISK_ID><![CDATA[0]]></DISK_ID>
<SOURCE><![CDATA[/home/user/vm.img]]></SOURCE>
<TARGET><![CDATA[sda]]></TARGET>
</DISK>
<MEMORY><![CDATA[512]]></MEMORY>
<NAME><![CDATA[test]]></NAME>
<VMID><![CDATA[0]]></VMID>
</TEMPLATE>
Some information is added by OpenNebula itself, like the VMID and the DISK_ID of each disk. DISK_ID is very important, as the disk images are previously manipulated by the TM driver and each disk should be accessible at VM_DIR/VMID/images/disk.DISK_ID.
4.3 Storage Driver
The Storage subsystem is highly modular. These drivers are separated into two logical sets:
DS: Datastore drivers. They serve the purpose of managing images: register, delete, and create empty datablocks.
TM: Transfer Manager drivers. They manage images associated to instantiated VMs.
4.3.1 Datastore Drivers Structure
Located under /var/lib/one/remotes/datastore/<ds_mad>
cp: copies/dumps the image to the datastore
ARGUMENTS: datastore_image_dump image_id
RETURNS: image_source size
datastore_image_dump is an XML dump of the driver action encoded in Base 64. See a decoded
example.
image_source is the image source which will be later sent to the transfer manager
mkfs: creates a new empty image in the datastore
ARGUMENTS: datastore_image_dump image_id
RETURNS: image_source size
datastore_image_dump is an XML dump of the driver action encoded in Base 64. See a decoded
example.
image_source is the image source which will be later sent to the transfer manager.
rm: removes an image from the datastore
ARGUMENTS: datastore_image_dump image_id
RETURNS: -
datastore_image_dump is an XML dump of the driver action encoded in Base 64. See a decoded
example.
stat: returns the size of an image in MB (see the sketch after this list)
ARGUMENTS: datastore_image_dump image_id
RETURNS: size
datastore_image_dump is an XML dump of the driver action encoded in Base 64. See a decoded
example.
size the size of the image in MB.
clone: clones an image.
ARGUMENTS: datastore_action_dump image_id
RETURNS: source
datastore_image_dump is an XML dump of the driver action encoded in Base 64. See a decoded
example.
source the new source for the image.
monitor: monitors a datastore
ARGUMENTS: datastore_action_dump image_id
RETURNS: monitor data
datastore_image_dump is an XML dump of the driver action encoded in Base 64. See a decoded
example.
monitor data The monitoring information of the datastore, namely USED_MB=...\nTOTAL_MB=...\nFREE_MB=..., which are respectively the used size of the datastore in MB, the total capacity of the datastore in MB, and the available space in the datastore in MB.
Warning: image_source has to be dynamically generated by the cp and mkfs script. It will be passed later
on to the transfer manager, so it should provide all the information the transfer manager needs to locate the image.
For instance, in FS_DRIVERS: DATASTORE_BASE_PATH + md5sum(date + id).
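As a sketch of one of these actions, a stat script for a filesystem-like backend could extract the image PATH from the driver action dump and measure the file (this assumes the image is a file reachable from the frontend; error handling is omitted):

#!/bin/bash
# Hypothetical datastore 'stat' action: prints the size in MB of the image
# referenced in the driver action dump.
DRV_ACTION_64=$1                       # base64 XML dump of the driver action
IMAGE_ID=$2

XPATH="/var/lib/one/remotes/datastore/xpath.rb -b $DRV_ACTION_64"
SRC=$($XPATH /DS_DRIVER_ACTION_DATA/IMAGE/PATH)

SIZE=$(du -m "$SRC" 2>/dev/null | cut -f1)
echo "${SIZE:-0}"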
4.3.2 TM Drivers Structure
This is a list of the TM driver actions. Note that they don't return anything; if the exit code is not 0, the driver is considered to have failed.

Located under /var/lib/one/remotes/tm/<tm_mad>. There are two types of action scripts: the first group applies to generic image datastores and includes clone, ln, mv and mvds; the second one is only used in conjunction with the system datastore.
Action scripts for generic image datastores:
clone: clones the image from the datastore (non-persistent images)
ARGUMENTS: fe:SOURCE host:remote_system_ds/disk.i vm_id ds_id
fe is the front-end hostname
SOURCE is the path of the disk image in the form DS_BASE_PATH/disk
host is the target host to deploy the VM
remote_system_ds is the path for the system datastore in the host
vm_id is the id of the VM
ds_id is the target datastore (the system datastore)
ln: Links the image from the datastore (persistent images)
ARGUMENTS: fe:SOURCE host:remote_system_ds/disk.i vm_id ds_id
fe is the front-end hostname
SOURCE is the path of the disk image in the form DS_BASE_PATH/disk
host is the target host to deploy the VM
remote_system_ds is the path for the system datastore in the host
vm_id is the id of the VM
ds_id is the target datastore (the system datastore)
mvds: moves an image back to its datastore (persistent images or deferred snapshots)
ARGUMENTS: host:remote_system_ds/disk.i fe:SOURCE vm_id ds_id
fe is the front-end hostname
SOURCE is the path of the disk image in the form DS_BASE_PATH/disk
host is the target host to deploy the VM
remote_system_ds is the path for the system datastore in the host
vm_id is the id of the VM
ds_id is the target datastore (the original datastore for the image)
cpds: moves an image back to its datastore (executed for live disk snapshots)
ARGUMENTS: host:remote_system_ds/disk.i fe:SOURCE vm_id ds_id
fe is the front-end hostname
SOURCE is the path of the disk image in the form DS_BASE_PATH/disk
host is the target host to deploy the VM
remote_system_ds is the path for the system datastore in the host
vm_id is the id of the VM
ds_id is the target datastore (the original datastore for the image)
Action scripts needed when the TM is used for the system datastore:
context: creates an ISO that contains all the les passed as an argument.
ARGUMENTS: file1 file2 ... fileN host:remote_system_ds/disk.i vm_id
ds_id
host is the target host to deploy the VM
remote_system_ds is the path for the system datastore in the host
vm_id is the id of the VM
ds_id is the target datastore (the system datastore)
delete: removes either the VM's directory in the system datastore or a single disk.
ARGUMENTS: host:remote_system_ds/disk.i|host:remote_system_ds/ vm_id
ds_id
host is the target host to deploy the VM
remote_system_ds is the path for the system datastore in the host
vm_id is the id of the VM
ds_id is the target datastore (the system datastore)
mkimage: creates an image on-the-fly, bypassing the datastore/image workflow
ARGUMENTS: size format host:remote_system_ds/disk.i vm_id ds_id
size size in MB of the image
format format for the image
host is the target host to deploy the VM
remote_system_ds is the path for the system datastore in the host
vm_id is the id of the VM
ds_id is the target datastore (the system datastore)
mkswap: creates a swap image (see the sketch after this list)
ARGUMENTS: size host:remote_system_ds/disk.i vm_id ds_id
size size in MB of the image
host is the target host to deploy the VM
remote_system_ds is the path for the system datastore in the host
vm_id is the id of the VM
ds_id is the target datastore (the system datastore)
mv: moves images/directories across system_ds in different hosts. When used for the system datastore the script will receive the directory argument.
ARGUMENTS: hostA:system_ds/disk.i|hostB:system_ds/disk.i vm_id ds_id
OR hostA:system_ds/|hostB:system_ds/ vm_id ds_id
hostA is the host the VM is in.
hostB is the target host to deploy the VM
system_ds is the path for the system datastore in the host
vm_id is the id of the VM
ds_id is the target datastore (the system datastore)
premigrate: It is executed before a live migration operation is issued to the hypervisor. Note that only the premigrate script from the system datastore will be used. Any customization must be done in the premigrate script of the system datastore, although you will probably add operations for backends other than the one used by the system datastore.
ARGUMENTS: source_host dst_host remote_system_dir vmid dsid template
src_host is the host the VM is in.
dst_host is the target host to migrate the VM to
remote_system_ds_dir is the path for the VM directory in the system datastore in the host
vmid is the id of the VM
dsid is the target datastore
template is the template of the VM in XML and base64 encoded
postmigrate: It is executed after a live migration operation. Note that only the postmigrate script from the system datastore will be used. Any customization must be done in the postmigrate script of the system datastore, although you will probably add operations for backends other than the one used by the system datastore.
ARGUMENTS: source_host dst_host remote_system_dir vmid dsid template
see premigrate description.
Warning: You only need to implement one mv script, but consider the arguments received when the TM is used
for the system datastore, a regular image datastore or both.
Warning: If the TM is only for regular images you only need to implement the first group.
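As an illustration of the system datastore actions, here is a minimal mkswap sketch (mentioned in the list above) for an ssh-based TM; it assumes passwordless SSH to the host and omits the helper functions and error handling of a real driver:

#!/bin/bash
# mkswap TM sketch: creates and formats a swap image on the remote host.
# Arguments: size host:remote_system_ds/disk.i vm_id ds_id
SIZE=$1                                # size in MB
DST=$2
VM_ID=$3
DS_ID=$4

DST_HOST=${DST%%:*}                    # host part of host:path
DST_PATH=${DST#*:}                     # path part of host:path

ssh $DST_HOST "mkdir -p $(dirname $DST_PATH) && \
    dd if=/dev/zero of=$DST_PATH bs=1M count=$SIZE && \
    mkswap $DST_PATH"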
4.3.3 An Example VM
Consider a VM with two disks:
NAME = vm01
CPU = 0.1
MEMORY = 64
DISK = [ IMAGE_ID = 0 ] # non-persistent disk
DISK = [ IMAGE_ID = 1 ] # persistent disk
This is a list of the TM actions that will be called upon the events listed:
CREATE
<tm_mad>/clone <frontend>:<non_pers_image_source> <host01>:<ds_path>/<vm_id>/disk.0
<tm_mad>/ln <frontend>:<pers_image_source> <host01>:<ds_path>/<vm_id>/disk.1
STOP
<tm_mad>/mv <host01>:<ds_path>/<vm_id>/disk.0 <frontend>:<ds_path>/<vm_id>/disk.0
<tm_mad>/mv <host01>:<ds_path>/<vm_id>/disk.1 <frontend>:<ds_path>/<vm_id>/disk.1
<tm_mad_sysds>/mv <host01>:<ds_path>/<vm_id> <frontend>:<ds_path>/<vm_id>
RESUME
<tm_mad>/mv <frontend>:<ds_path>/<vm_id>/disk.0 <host01>:<ds_path>/<vm_id>/disk.0
<tm_mad>/mv <frontend>:<ds_path>/<vm_id>/disk.1 <host01>:<ds_path>/<vm_id>/disk.1
<tm_mad_sysds>/mv <frontend>:<ds_path>/<vm_id> <host01>:<ds_path>/<vm_id>
MIGRATE host01 host02
<tm_mad>/mv <host01>:<ds_path>/<vm_id>/disk.0 <host02>:<ds_path>/<vm_id>/disk.0
<tm_mad>/mv <host01>:<ds_path>/<vm_id>/disk.1 <host02>:<ds_path>/<vm_id>/disk.1
<tm_mad_sysds>/mv <host01>:<ds_path>/<vm_id> <host02>:<ds_path>/<vm_id>
SHUTDOWN
<tm_mad>/delete <host02>:<ds_path>/<vm_id>/disk.0
<tm_mad>/mvds <host02>:<ds_path>/<vm_id>/disk.1 <pers_image_source>
<tm_mad_sysds>/delete <host02>:<ds_path>/<vm_id>
non_pers_image_source: Source of the non persistent image.
pers_image_source : Source of the persistent image.
frontend: hostname of the frontend
host01: hostname of host01
host02: hostname of host02
tm_mad: TM driver of the datastore where the image is registered
tm_mad_sysds: TM driver of the system datastore
4.3.4 Helper Scripts
There is a helper shell script with some functions defined to perform common tasks. It is located at /var/lib/one/remotes/scripts_common.sh.

Here is a description of those functions:
log: Takes one parameter that is a message that will be logged into the VM log le.
log "Creating directory $DST_DIR"
error_message: sends an exit message to oned, surrounded by separators; use it to send the error message when a command fails.
error_message "File $FILE not found"
arg_host: gets the hostname part from a parameter

SRC_HOST=$(arg_host $SRC)

arg_path: gets the path part from a parameter

SRC_PATH=$(arg_path $SRC)
exec_and_log: executes a command and logs its execution. If the command fails the error message is sent to
oned and the script ends
exec_and_log "chmod g+w $DST_PATH"
ssh_exec_and_log: executes $2 on host $1 and reports error $3 if the command fails
ssh_exec_and_log "$HOST" "chmod g+w $DST_PATH" "Error message"
timeout_exec_and_log: like exec_and_log but takes as first parameter the max number of seconds the command can run
timeout_exec_and_log 15 "cp $SRC_PATH $DST_PATH"
There are additional minor helper functions; please read scripts_common.sh to see them.
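For illustration, a hypothetical TM-style action could combine these helpers as follows (the scp transfer is an assumed example; real drivers build their commands from their own arguments and logic):

#!/bin/bash
# Hypothetical action using the helpers described above.
source /var/lib/one/remotes/scripts_common.sh

SRC=$1                                 # e.g. fe:/var/lib/one/datastores/1/abc
DST=$2                                 # e.g. host01:/path/to/disk.0

SRC_PATH=$(arg_path $SRC)
DST_PATH=$(arg_path $DST)
DST_HOST=$(arg_host $DST)

log "Copying $SRC_PATH to $DST_HOST:$DST_PATH"
exec_and_log "scp $SRC_PATH $DST_HOST:$DST_PATH"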
4.3.5 Decoded Example
<DS_DRIVER_ACTION_DATA>
<IMAGE>
<ID>0</ID>
<UID>0</UID>
<GID>0</GID>
<UNAME>oneadmin</UNAME>
<GNAME>oneadmin</GNAME>
<NAME>ttylinux</NAME>
<PERMISSIONS>
<OWNER_U>1</OWNER_U>
<OWNER_M>1</OWNER_M>
<OWNER_A>0</OWNER_A>
<GROUP_U>0</GROUP_U>
<GROUP_M>0</GROUP_M>
<GROUP_A>0</GROUP_A>
<OTHER_U>0</OTHER_U>
<OTHER_M>0</OTHER_M>
<OTHER_A>0</OTHER_A>
</PERMISSIONS>
<TYPE>0</TYPE>
<DISK_TYPE>0</DISK_TYPE>
<PERSISTENT>0</PERSISTENT>
<REGTIME>1385145541</REGTIME>
<SOURCE/>
<PATH>/tmp/ttylinux.img</PATH>
<FSTYPE/>
<SIZE>40</SIZE>
<STATE>4</STATE>
<RUNNING_VMS>0</RUNNING_VMS>
<CLONING_OPS>0</CLONING_OPS>
<CLONING_ID>-1</CLONING_ID>
<DATASTORE_ID>1</DATASTORE_ID>
<DATASTORE>default</DATASTORE>
<VMS/>
<CLONES/>
<TEMPLATE>
<DEV_PREFIX><![CDATA[hd]]></DEV_PREFIX>
<PUBLIC><![CDATA[YES]]></PUBLIC>
</TEMPLATE>
</IMAGE>
<DATASTORE>
<ID>1</ID>
<UID>0</UID>
<GID>0</GID>
<UNAME>oneadmin</UNAME>
<GNAME>oneadmin</GNAME>
<NAME>default</NAME>
<PERMISSIONS>
<OWNER_U>1</OWNER_U>
<OWNER_M>1</OWNER_M>
<OWNER_A>0</OWNER_A>
<GROUP_U>1</GROUP_U>
<GROUP_M>0</GROUP_M>
<GROUP_A>0</GROUP_A>
<OTHER_U>1</OTHER_U>
<OTHER_M>0</OTHER_M>
<OTHER_A>0</OTHER_A>
</PERMISSIONS>
<DS_MAD>fs</DS_MAD>
<TM_MAD>shared</TM_MAD>
<BASE_PATH>/var/lib/one//datastores/1</BASE_PATH>
<TYPE>0</TYPE>
<DISK_TYPE>0</DISK_TYPE>
<CLUSTER_ID>-1</CLUSTER_ID>
<CLUSTER/>
<TOTAL_MB>86845</TOTAL_MB>
<FREE_MB>20777</FREE_MB>
<USED_MB>1000</USED_MB>
<IMAGES/>
<TEMPLATE>
<CLONE_TARGET><![CDATA[SYSTEM]]></CLONE_TARGET>
<DISK_TYPE><![CDATA[FILE]]></DISK_TYPE>
<DS_MAD><![CDATA[fs]]></DS_MAD>
<LN_TARGET><![CDATA[NONE]]></LN_TARGET>
<TM_MAD><![CDATA[shared]]></TM_MAD>
<TYPE><![CDATA[IMAGE_DS]]></TYPE>
</TEMPLATE>
</DATASTORE>
</DS_DRIVER_ACTION_DATA>
4.4 Monitoring Driver
The Monitoring Drivers (or IM drivers) collect host and virtual machine monitoring data by executing a set of probes
in the hosts. This data is either actively queried by OpenNebula or sent periodically by an agent running in the hosts
to the frontend.
This guide describes the process of customizing or adding probes to the hosts. It is also a starting point for creating a new IM driver from scratch.
4.4.1 Probe Location
The default probes are installed in the frontend in the following path:
KVM and Xen: /var/lib/one/remotes/im/<hypervisor>-probes.d
VMware and EC2: /var/lib/one/remotes/im/<hypervisor>.d
In the case of KVM and Xen, the probes are distributed to the hosts, therefore if the probes are changed, they must be
distributed to the hosts by running onehost sync.
4.4.2 General Probe Structure
An IM driver is composed of one or several scripts that write information to stdout in this form:
KEY1="value"
KEY2="another value with spaces"
The drivers receive the following parameters:
Position Description
1 hypervisor: The name of the hypervisor: kvm, xen, etc...
2 datastore location: path of the datastores directory in the host
3 collectd port: port in which the collectd is listening on
4 monitor push cycle: time in seconds between monitoring actions for the UDP-push model
5 host_id: id of the host
6 host_name: name of the host
Take into account that in shell script the parameters start at 1 ($1) and in ruby start at 0 (ARGV[0]). For shell script
you can use this snippet to get the parameters:
hypervisor=$1
datastore_location=$2
collectd_port=$3
monitor_push_cycle=$4
host_id=$5
host_name=$6
4.4.3 Basic Monitoring Scripts
You can add any key and value you want to use later in RANK and REQUIREMENTS for scheduling but there are some
basic values you should output:
Key          Description
HYPERVISOR   Name of the hypervisor of the host, useful for selecting the hosts with a specific technology.
TOTALCPU     Number of CPUs multiplied by 100. For example, a 16-core machine will have a value of 1600.
CPUSPEED     Speed in MHz of the CPUs.
TOTALMEMORY  Maximum memory that could be used for VMs. It is advised to subtract the memory used by the hypervisor.
USEDMEMORY   Memory used, in kilobytes.
FREEMEMORY   Available memory for VMs at that moment, in kilobytes.
FREECPU      Percentage of idling CPU multiplied by the number of cores. For example, if 50% of the CPU is idling in a 4-core machine the value will be 200.
USEDCPU      Percentage of used CPU multiplied by the number of cores.
NETRX        Received bytes from the network
NETTX        Transferred bytes to the network
For example, a probe that gets memory information about a host could be something like:
#!/bin/bash
total=$(free | awk '/^Mem/ { print $2 }')
used=$(free | awk '/buffers\/cache/ { print $3 }')
free=$(free | awk '/buffers\/cache/ { print $4 }')
echo "TOTALMEMORY=$total"
echo "USEDMEMORY=$used"
echo "FREEMEMORY=$free"
Executing it should give us the memory values:
$ ./memory_probe
TOTALMEMORY=1020696
USEDMEMORY=209932
FREEMEMORY=810724
For real examples check the directories at /var/lib/one/remotes/im.
4.4.4 VM Information
The scripts should also provide information about the VMs running in the host. This is useful, as OpenNebula will only need one call per host to gather all the information about its VMs. The output should be in this form:
VM_POLL=YES
VM=[
ID=86,
DEPLOY_ID=one-86,
POLL="USEDMEMORY=918723 USEDCPU=23 NETTX=19283 NETRX=914 STATE=a" ]
VM=[
ID=645,
DEPLOY_ID=one-645,
POLL="USEDMEMORY=563865 USEDCPU=74 NETTX=2039847 NETRX=2349923 STATE=a" ]
The first line (VM_POLL=YES) indicates to OpenNebula that VM information will follow. Then the information about the VMs is output in that form.
Key Description
ID OpenNebula VM id. It can be -1 in case this VM was not created by OpenNebula
DEPLOY_ID Hypervisor name or identifier of the VM
POLL VM monitoring info, in the same format as VMM driver poll
For example here is a simple script to get qemu/kvm VMs status from libvirt. As before, check the scripts from
OpenNebula for a complete example:
#!/bin/bash
echo "VM_POLL=YES"
virsh -c qemu:///system list | grep one- | while read vm; do
    deploy_id=$(echo $vm | cut -d' ' -f 2)
    id=$(echo $deploy_id | cut -d- -f 2)
    status_str=$(echo $vm | cut -d' ' -f 3)
if [ $status_str == "running" ]; then
state="a"
else
state="e"
fi
echo "VM=["
echo " ID=$id,"
echo " DEPLOY_ID=$deploy_id,"
echo " POLL=\"STATE=$state\" ]"
done
$ ./vm_poll
VM_POLL=YES
VM=[
ID=0,
DEPLOY_ID=one-0,
POLL="STATE=a" ]
VM=[
ID=1,
DEPLOY_ID=one-1,
POLL="STATE=a" ]
4.4.5 System Datastore Information
Information Manager drivers are also responsible for collecting the datastore sizes and their available space. To do so there is a ready-made script that collects this information for filesystem and LVM based datastores. You can copy it from the KVM driver (/var/lib/one/remotes/im/kvm-probes.d/monitor_ds.sh) into your driver directory.
In case you want to create your own datastore monitor you have to return something like this in STDOUT:
DS_LOCATION_USED_MB=1
DS_LOCATION_TOTAL_MB=12639
DS_LOCATION_FREE_MB=10459
DS = [
ID = 0,
USED_MB = 1,
TOTAL_MB = 12639,
FREE_MB = 10459
]
DS = [
ID = 1,
USED_MB = 1,
TOTAL_MB = 12639,
FREE_MB = 10459
]
DS = [
ID = 2,
USED_MB = 1,
TOTAL_MB = 12639,
FREE_MB = 10459
]
These are the meanings of the values:
Variable Description
DS_LOCATION_USED_MB Used space in megabytes in the DATASTORE LOCATION
DS_LOCATION_TOTAL_MB Total space in megabytes in the DATASTORE LOCATION
DS_LOCATION_FREE_MB FREE space in megabytes in the DATASTORE LOCATION
ID ID of the datastore, this is the same as the name of the directory
USED_MB Used space in megabytes for that datastore
TOTAL_MB Total space in megabytes for that datastore
FREE_MB Free space in megabytes for that datastore
The DATASTORE LOCATION is the path where the datastores are mounted. By default it is /var/lib/one/datastores, but it is specified in the second parameter of the script call.
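As a sketch, a datastore monitor probe for filesystem datastores could be implemented like this (it assumes GNU df and that every numeric directory under the datastore location is a datastore; the bundled monitor_ds.sh is more careful):

#!/bin/bash
# Datastore monitor sketch: reports global usage of the datastore location
# and one DS block per numeric directory found there.
DS_LOCATION=${2:-/var/lib/one/datastores}    # second probe argument

read used total free <<< "$(df -B1M -P $DS_LOCATION | awk 'NR==2 {print $3, $2, $4}')"

echo "DS_LOCATION_USED_MB=$used"
echo "DS_LOCATION_TOTAL_MB=$total"
echo "DS_LOCATION_FREE_MB=$free"

for dir in $DS_LOCATION/[0-9]*; do
    [ -d "$dir" ] || continue
    read u t f <<< "$(df -B1M -P $dir | awk 'NR==2 {print $3, $2, $4}')"
    echo "DS = [ ID = $(basename $dir), USED_MB = $u, TOTAL_MB = $t, FREE_MB = $f ]"
done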
4.4.6 Creating a New IM Driver
Choosing the Execution Engine
OpenNebula provides two IM probe execution engines: one_im_sh and one_im_ssh. one_im_sh is used to
execute probes in the frontend, for example vmware uses this engine as it collects data via an API call executed in
the frontend. On the other hand, one_im_ssh is used when probes need to be run remotely in the hosts, which is the
case for Xen and KVM.
Populating the Probes
Both one_im_sh and one_im_ssh require an argument which indicates the directory that contains the probes.
This argument is appended with .d.
Example: For VMware the execution engine is one_im_sh (local execution) and the argument is vmware, therefore
the probes that will be executed in the hosts are located in /var/lib/one/remotes/im/vmware.d
Making Use of Collectd
If the new IM driver wishes to use the collectd component, it needs to:
Use one_im_ssh
The /var/lib/one/remotes/im/<im_name>.d directory should only contain 2 files, the same ones that are provided by default inside kvm.d and xen.d: collectd-client_control.sh and collectd-client.rb.
The probes should be actually placed in the /var/lib/one/remotes/im/<im_name>-probes.d
folder.
Enabling the Driver
A new IM_MAD section should be added to oned.conf.
Example:
IM_MAD = [
name = "ganglia",
executable = "one_im_sh",
arguments = "ganglia" ]
4.5 Networking Driver
This component is in charge of configuring the network in the hypervisors. The purpose of this guide is to describe how to create a new network manager driver.
4.5.1 Driver Conguration and Description
To enable a new network manager driver, the only requirement is to make a new directory with the name of the driver in /var/lib/one/remotes/vnm/<name> with three files:
Pre: This driver should perform all the network related actions required before the Virtual Machine starts in a
host.
Post: This driver should perform all the network related actions required after the Virtual Machine starts (actions
which typically require the knowledge of the tap interface the Virtual Machine is connected to).
Clean: If any clean-up should be performed after the Virtual Machine shuts down, it should be placed here.
Warning: The above three files must exist. If no action is required in them, a simple exit 0 will be enough.
Virtual Machine actions and their relation with Network actions:
Deploy: pre and post
Shutdown: clean
Cancel: clean
Save: clean
Restore: pre and post
Migrate: pre (target host), clean (source host), post (target host)
Attach Nic: pre and post
Detach Nic: clean
4.5.2 Driver Parameters

All three driver actions have a first parameter which is the XML VM template encoded in base64 format.

Additionally, the post driver has a second parameter which is the deploy-id of the Virtual Machine, e.g. one-17.
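As an illustration, a minimal post script for a hypothetical driver directory mydriver could be the following (the log file is an arbitrary assumption; a real driver would configure bridges, filters, etc. at this point):

#!/bin/bash
# Sketch of a 'post' action: $1 = VM template (base64), $2 = deploy-id.
TEMPLATE_64=$1
DEPLOY_ID=$2

# xpath.rb extracts values from the base64-encoded template
XPATH=/var/lib/one/remotes/datastore/xpath.rb
VM_ID=$($XPATH -b $TEMPLATE_64 ID)

echo "$(date) network configured for VM $VM_ID ($DEPLOY_ID)" >> /tmp/mydriver_post.log
exit 0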
4.6 Authentication Driver
This guide will show you how to develop a new driver for OpenNebula to interact with an external authentication
service.
OpenNebula comes with an internal user/password authentication method, called core. To be able to use other auth methods there is a system that performs authentication against external systems. Authentication drivers are responsible for checking the user credentials stored in the OpenNebula database against the login credentials, and answering whether the authentication is correct or not.

In the OpenNebula database there are two values saved for every user: username and password. When the driver used for authentication is core (authenticated without an external auth driver), the password value holds the SHA1 hash of the user's password. When using another authentication method, this password field can contain other information we can use to recognize a user; for example, for x509 authentication this field contains the user's public key.
4.6.1 Authentication Driver
Authentication drivers are located at /var/lib/one/remotes/auth. There is a directory for each authentication driver with an executable inside called authenticate. The name of the directory has to be the same as the user's auth driver. For example, if a user has x509 as their auth driver, OpenNebula will execute the file /var/lib/one/remotes/auth/x509/authenticate when that user performs an OpenNebula action.
The script receives three parameters:
username: name of the user who wants to authenticate.
password: value of the password field for the user that is trying to authenticate. This can be - when the user does not exist in the OpenNebula database.
secret: value provided in the password field of the authentication string.
For example, we can create a new authentication method that just checks the length of the password. For this we can store in the password field the number of characters accepted, for example 5, for a user named test. Here are some example calls to the driver with several passwords:
authenticate test 5 testpassword
authenticate test 5 another_try
authenticate test 5 12345
The script should exit with a non-0 status when the authentication is not correct and write the error to stderr. When the authentication is correct it should return:

Name of the driver. This is used when the user does not exist; it will be written to the user's auth driver field.
User name
Text to write in the user's password field in case the user does not exist.
The code for the /var/lib/one/remotes/auth/length/authenticate executable can be:
#!/bin/bash
username=$1
password=$2
secret=$3
length=$(echo -n "$secret" | wc -c | tr -d ' ')
if [ $length = $password ]; then
echo "length $username $secret"
else
echo "Invalid password"
exit 255
fi
4.6.2 Enabling the Driver
To be able to use the new driver we need to add its name to the list of enabled drivers in oned.conf:
AUTH_MAD = [
executable = "one_auth_mad",
authn = "ssh,x509,ldap,server_cipher,server_x509,length"
]
4.7 Cloud Bursting Driver
This guide will show you how to develop a new driver for OpenNebula to interact with an external cloud provider.
4.7.1 Overview
Cloud bursting is a model in which the local resources of a Private Cloud are combined with resources from remote Cloud providers. The remote provider could be a commercial Cloud service, such as Amazon EC2, or a partner infrastructure running a different OpenNebula instance. Such support for cloud bursting enables highly scalable hosting environments. For more information on this model see the Cloud Bursting overview.

The remote cloud provider will be included in the OpenNebula host pool like any other physical host of your infrastructure:
$ onehost create remote_provider im_provider vmm_provider tm_dummy dummy
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
2 kvm- - 0 0 / 800 (0%) 0K / 16G (0%) on
3 kvm-1 - 0 0 / 100 (0%) 0K / 1.8G (0%) on
4 remote_provider - 0 0 / 500 (0%) 0K / 8.5G (0%) on
When you create a new host in OpenNebula you have to specify the following parameters:
Name: remote_provider
Name of the host. In the case of physical hosts it will be the IP address or hostname of the host. In the case of remote providers it will be just a name to identify the provider.
Information Manager: im_provider
The IM driver gathers information about the physical host and hypervisor status, so that the OpenNebula scheduler knows the available resources and can deploy the virtual machines accordingly.
Virtual Machine Manager: vmm_provider
VMM drivers translate the high-level OpenNebula virtual machine life-cycle management actions, like deploy, shutdown, etc. into specific hypervisor operations. For instance, the KVM driver will issue a virsh create command in the physical host. The EC2 driver translates the actions into Amazon EC2 API calls.
Transfer Manager: tm_dummy
TM drivers are used to transfer, clone and remove Virtual Machine image files. They take care of the file transfer from the OpenNebula image repository to the physical hosts. There are specific drivers for different storage configurations: shared, non-shared, LVM storage, etc.
Virtual Network Manager: dummy
VNM drivers are used to set the network configuration in the host (firewall, 802.1Q, ebtables, Open vSwitch).
When creating a new host to interact with a remote cloud provider we will use mock versions for the TM and VNM
drivers. Therefore, we will only implement the functionality required for the IM and VMM driver.
4.7.2 Adding the Information Manager
Edit oned.conf
Add a new IM section for the new driver in oned.conf:
#*******************************************************************************
# Information Driver Configuration
#*******************************************************************************
# You can add more information managers with different configurations but make
# sure it has different names.
#
# name : name for this information manager
#
# executable: path of the information driver executable, can be an
# absolute path or relative to $ONE_LOCATION/lib/mads (or
# /usr/lib/one/mads/ if OpenNebula was installed in /)
#
# arguments : for the driver executable, usually a probe configuration file,
# can be an absolute path or relative to $ONE_LOCATION/etc (or
# /etc/one/ if OpenNebula was installed in /)
# -r number of retries when monitoring a host
# -t number of threads, i.e. number of hosts monitored at the same time
#*******************************************************************************
#-------------------------------------------------------------------------------
# EC2 Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
name = "im_provider",
executable = "one_im_sh",
arguments = "-t 1 -r 0 provider_name" ]
#-------------------------------------------------------------------------------
Populating the Probes
Create a new directory to store your probes; the name of this folder must match the name provided in the arguments section of the IM_MAD in oned.conf:
/var/lib/one/remotes/im/<provider_name>.d
These probes must return:
Information on the host capacity, to limit the number of VMs that can be deployed in this host.
Information on the VMs running in this host.

You can see an example of these probes in the ec2 driver (code) included in OpenNebula.
You must include the PUBLIC_CLOUD and HYPERVISOR attributes as one of the values returned by your
probes, otherwise OpenNebula will consider this host as local. The HYPERVISOR attribute will be used by the
scheduler and should match the TYPE value inside the PUBLIC_CLOUD section provided in the VM template.
PUBLIC_CLOUD="YES"
HYPERVISOR="provider_name"
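For illustration, a minimal capacity probe for this hypothetical provider could print static limits (the figures below are illustrative assumptions; a real probe would query the provider's API or configuration):

#!/bin/bash
# Sketch of /var/lib/one/remotes/im/provider_name.d/capacity_probe
echo 'PUBLIC_CLOUD="YES"'
echo 'HYPERVISOR="provider_name"'

# Illustrative capacity limits for the remote provider
echo "TOTALCPU=500"
echo "TOTALMEMORY=8912896"             # in kilobytes (~8.5G)
echo "FREECPU=500"
echo "FREEMEMORY=8912896"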
4.7.3 Adding the Virtual Machine Manager
Edit oned.conf
#*******************************************************************************
# Virtualization Driver Configuration
#*******************************************************************************
# You can add more virtualization managers with different configurations but
# make sure it has different names.
#
# name : name of the virtual machine manager driver
#
# executable: path of the virtualization driver executable, can be an
# absolute path or relative to $ONE_LOCATION/lib/mads (or
# /usr/lib/one/mads/ if OpenNebula was installed in /)
#
# arguments : for the driver executable
# -r number of retries when monitoring a host
# -t number of threads, i.e. number of hosts monitored at the same time
#
# default : default values and configuration parameters for the driver, can
# be an absolute path or relative to $ONE_LOCATION/etc (or
# /etc/one/ if OpenNebula was installed in /)
#
# type : driver type, supported drivers: xen, kvm, xml
#-------------------------------------------------------------------------------
VM_MAD = [
name = "vmm_provider",
executable = "one_vmm_sh",
arguments = "-t 15 -r 0 provider_name",
type = "xml" ]
#-------------------------------------------------------------------------------
Create the Driver Folder and Implement the Specific Actions
Create a new folder inside the remotes dir (/var/lib/one/remotes/vmm). The new folder should be named provider_name, the name specified in the previous VM_MAD arguments section.
This folder must contain scripts for the supported actions. You can see the list of available actions in the Virtual
Machine Driver guide. These scripts are language-agnostic so you can implement them using python, ruby, bash...
You can see examples on how to implement this in the ec2 driver:
EC2 Shutdown action:
#!/usr/bin/env ruby
# -------------------------------------------------------------------------- #
# Copyright 2010-2013, C12G Labs S.L #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
# -------------------------------------------------------------------------- #
$: << File.dirname(__FILE__)
require 'ec2_driver'
deploy_id = ARGV[0]
host = ARGV[1]
ec2_drv = EC2Driver.new(host)
ec2_drv.shutdown(deploy_id)
Create the New Host
After restarting oned we can create the new host that will use this new driver:
$ onehost create remote_provider im_provider vmm_provider tm_dummy dummy
Create a new Virtual Machine
Create a new VM using a template with a specific section for this provider. You have to include the required information to start a new VM inside the PUBLIC_CLOUD section, and the TYPE attribute must match the HYPERVISOR value of the host. For example:
$ cat vm_template.one
CPU=1
MEMORY=256
PUBLIC_CLOUD=[
TYPE=provider_name,
PROVIDER_IMAGE_ID=id-141234,
PROVIDER_INSTANCE_TYPE=small_256mb
]
$ onevm create vm_template
ID: 23
$ onevm deploy 23 remote_provider
After this, the deploy script will receive the following arguments:
The path to the deployment file that contains the following XML:
<CPU>1</CPU>
<MEMORY>256</MEMORY>
<PUBLIC_CLOUD>
<TYPE>provider_name</TYPE>
<PROVIDER_IMAGE_ID>id-141234</PROVIDER_IMAGE_ID>
<PROVIDER_INSTANCE_TYPE>small_256mb</PROVIDER_INSTANCE_TYPE>
</PUBLIC_CLOUD>
The hostname: remote_provider
The VM ID: 23
The deploy script has to return the ID of the new resource and exit code 0:
$ cat /var/lib/one/remotes/vmm/provider_name/deploy
#!/bin/bash
deployment_file=$1
# Parse required parameters from the template
...
# Retrieve account credentials from a local file/env
...
# Create a new resource using the API provider
...
# Return the provider ID of the new resource and exit code 0 or an error message
CHAPTER FIVE: REFERENCES
5.1 Custom Routes for Sunstone Server
OpenNebula Sunstone server plugins consist of a set of files defining custom routes. Custom routes have priority over default routes, allowing administrators to integrate their own custom controllers in the Sunstone Server.
5.1.1 Configuring Sunstone Server Plugins
It is very easy to enable custom plugins:
1. Place your custom routes in the /usr/lib/one/sunstone/routes folder.
2. Modify /etc/one/sunstone-server.conf to indicate which files should be loaded, as shown in the following example:
# This will load custom.rb and other.rb plugin files.
:routes:
- custom
- other
5.1.2 Creating Sunstone Server Plugins
Sunstone server is a Sinatra application. A server plugin is simply a file containing one or several custom routes, as defined in Sinatra applications.

The following example defines 4 custom routes:
get '/myplugin/myresource/:id' do
    resource_id = params[:id]
    # code...
end

post '/myplugin/myresource' do
    # code
end

put '/myplugin/myresource/:id' do
    # code
end

delete '/myplugin/myresource/:id' do
    # code
end
Custom routes take preference over Sunstone server routes. To ease debugging and to ensure that plugins do not interfere with each other, we recommend placing the routes in a custom namespace (myplugin in the example).

From the plugin code routes there is access to all the variables, helpers, etc. which are defined in the main Sunstone application code. For example:
opennebula_client = $cloud_auth.client(session[:user])
sunstone_config = $conf
logger.info("New route")
vm3_log = @SunstoneServer.get_vm_log(3)
5.2 Building from Source Code
This page will show you how to compile and install OpenNebula from the sources.

If you want to install it from your package manager, visit the software menu to find out if OpenNebula is included in your official distribution package repositories.

Warning: Do not forget to check the Build Dependencies for a list of specific software requirements to build OpenNebula.
5.2.1 Compiling the Software
Follow these simple steps to install the OpenNebula software:
Download and untar the OpenNebula tarball.
Change to the created folder and run scons to compile OpenNebula
$ scons [OPTION=VALUE]
The argument expression [OPTION=VALUE] is used to set non-default values for:
OPTION       VALUE
syslog       yes to compile syslog support. Needs the log4cpp lib.
sqlite_db    path-to-sqlite-install
sqlite       no if you don't want to build sqlite support
mysql        yes if you want to build mysql support
xmlrpc       path-to-xmlrpc-install
parsers      yes if you want to rebuild flex/bison files
new_xmlrpc   yes if you have an xmlrpc-c version >= 1.31
If the following error appears, then you need to remove the option new_xmlrpc=yes or install xmlrpc-c version >=
1.31:
error: class xmlrpc_c::serverAbyss::constrOpt has no member named maxConn
OpenNebula can be installed in two modes: system-wide, or in a self-contained directory. In either case, you do not need to run OpenNebula as root. These options can be specified when running the install script:
./install.sh <install_options>
where <install_options> can be one or more of:
OPTION  VALUE
-u      user that will run OpenNebula, defaults to the user executing install.sh
-g      group of the user that will run OpenNebula, defaults to the user executing install.sh
-k      keep configuration files of an existing OpenNebula installation, useful when upgrading. This flag should not be set when installing OpenNebula for the first time.
-d      target installation directory. If defined, it specifies the path for a self-contained installation. If not defined, the installation will be performed system-wide.
-c      only install client utilities: OpenNebula CLI, OCCI and EC2 client files
-r      remove OpenNebula, only useful if -d was not specified, otherwise rm -rf $ONE_LOCATION would do the job
-h      prints installer help
The packages do a system-wide installation. To create a similar environment, create a oneadmin user and group,
and execute:
oneadmin@frontend:~/ $> wget <opennebula tar gz>
oneadmin@frontend:~/ $> tar xzf <opennebula tar gz>
oneadmin@frontend:~/ $> cd one-4.0
oneadmin@frontend:~/one-4.0/ $> scons -j2 mysql=yes syslog=yes
[ lots of compiling information ]
scons: done building targets.
oneadmin@frontend:~/one-4.0 $> sudo ./install.sh -u oneadmin -g oneadmin
5.2.2 Ruby Dependencies
Ruby version needs to be:
ruby >= 1.8.7
Some OpenNebula components need ruby libraries. Some of these libraries are interfaces to binary libraries and the
development packages should be installed in your machine. This is the list of the ruby libraries that need a development
package:
sqlite3: sqlite3 development library
mysql: mysql client development library
curb: curl development library
nokogiri: libxml2 and libxslt development libraries
xmlparser: expat development library
You will also need ruby development package to be able to compile these gems.
We provide a script to ease the installation of these gems. It is located at /usr/share/one/install_gems (system-wide mode) or $ONE_LOCATION/share/install_gems (self-contained mode). It can be called with the components for which you want the gem dependencies installed. Here are the options:
optional: libraries that make CLI and OCA faster
quota: quota system
sunstone: sunstone graphical interface
cloud: ec2 and occi interfaces
ozones_client: CLI of ozones
ozones_server: server part of ozones, both mysql and sqlite support
ozones_server_sqlite: ozones server, only sqlite support
ozones_server_mysql: ozones server, only mysql support
acct: accounting collector, both mysql and sqlite support
acct_sqlite: accounting collector, only sqlite support
acct_mysql: accounting collector, only mysql support
The tool can also be called without parameters, in which case all the packages will be installed.

For example, to install only the requirements for the sunstone, ec2 and occi interfaces you'll issue:
oneadmin@frontend: $> ./install_gems sunstone cloud
5.3 Build Dependencies
This page lists the build dependencies for OpenNebula.
If you want to install it from your package manager, visit the software menu to find out if OpenNebula is included in your official distribution package repositories.
g++ compiler (>= 4.0)
xmlrpc-c development libraries (>= 1.06)
scons build tool (>= 0.98)
sqlite3 development libraries (if compiling with sqlite support) (>= 3.6)
mysql client development libraries (if compiling with mysql support) (>= 5.1)
log4cpp flexible logging library (if compiling with syslog support) (>=1.0)
libxml2 development libraries (>= 2.7)
openssl development libraries (>= 0.9.8)
ruby interpreter (>= 1.8.7)
5.3.1 Debian/Ubuntu
g++
libxmlrpc-c3-dev
scons
libsqlite3-dev
libmysqlclient-dev
libxml2-dev
libssl-dev
liblog4cpp5-dev
ruby
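On Debian or Ubuntu the whole list can usually be installed in one step; the following one-liner is a sketch, as package names can differ between releases:
$ sudo apt-get install g++ libxmlrpc-c3-dev scons libsqlite3-dev libmysqlclient-dev libxml2-dev libssl-dev liblog4cpp5-dev ruby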
5.3.2 CentOS 6
gcc-c++
libcurl-devel
libxml2-devel
xmlrpc-c-devel
openssl-devel
mysql-devel
log4cpp-devel
openssh
pkgconfig
ruby
scons
sqlite-devel
xmlrpc-c
java-1.7.0-openjdk-devel
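Similarly, on CentOS 6 a single yum invocation should cover the list; note that some of these packages (scons and log4cpp-devel, for instance) may need the EPEL repository to be enabled:
$ sudo yum install gcc-c++ libcurl-devel libxml2-devel xmlrpc-c-devel openssl-devel mysql-devel log4cpp-devel openssh pkgconfig ruby scons sqlite-devel xmlrpc-c java-1.7.0-openjdk-devel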
5.3.3 CentOS 5 / RHEL 5
scons
The version that comes with CentOS is not compatible with our build scripts. To install a more recent version you can
download the RPM at:
http://www.scons.org/download.php
$ wget http://prdownloads.sourceforge.net/scons/scons-1.2.0-1.noarch.rpm
$ yum localinstall scons-1.2.0-1.noarch.rpm
xmlrpc-c
You can download the xmlrpc-c and xmlrpc-c-devel packages from the rpm repository at http://centos.karan.org/.
$ wget http://centos.karan.org/el5/extras/testing/i386/RPMS/xmlrpc-c-1.06.18-1.el5.kb.i386.rpm
$ wget http://centos.karan.org/el5/extras/testing/i386/RPMS/xmlrpc-c-devel-1.06.18-1.el5.kb.i386.rpm
$ yum localinstall --nogpgcheck xmlrpc-c-1.06.18-1.el5.kb.i386.rpm xmlrpc-c-devel-1.06.18-1.el5.kb.i386.rpm
sqlite
This package should be installed from source; you can download the tar.gz from http://www.sqlite.org/download.html.
It was tested with sqlite 3.5.9.
$ wget http://www.sqlite.org/sqlite-amalgamation-3.6.17.tar.gz
$ tar xvzf sqlite-amalgamation-3.6.17.tar.gz
$ cd sqlite-3.6.17/
$ ./configure
$ make
$ make install
If you do not install it to a system-wide location (/usr or /usr/local) you need to set LD_LIBRARY_PATH and tell
scons where to find the files:
$ scons sqlite=<path where you installed sqlite>
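For example, if sqlite was installed under a hypothetical /opt/sqlite prefix:
$ export LD_LIBRARY_PATH=/opt/sqlite/lib:$LD_LIBRARY_PATH
$ scons sqlite=/opt/sqlite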
Ruby
The ruby package is needed during the install process:
$ yum install ruby
5.3.4 openSUSE 11.3
Building tools
By default openSUSE 11 does not include the standard building tools, so before any compilation is done you should
install:
$ zypper install gcc gcc-c++ make patch
Required Libraries
Install these packages to satisfy all the dependencies of OpenNebula:
$ zypper install libopenssl-devel libcurl-devel scons pkg-config sqlite3-devel libxslt-devel libxmlrpc_server_abyss++3 libxmlrpc_client++3 libexpat-devel libxmlrpc_server++3 libxml2-devel
Ruby
We can install the standard packages directly with zypper:
$ zypper install ruby ruby-doc-ri ruby-doc-html ruby-devel rubygems
rubygems must be >=1.3.1, so to play it safe you can update it to the latest version:
$ wget http://rubyforge.org/frs/download.php/45905/rubygems-1.3.1.tgz
$ tar zxvf rubygems-1.3.1.tgz
$ cd rubygems-1.3.1
$ ruby setup.rb
$ gem update --system
Once rubygems is installed we can install the following gems:
$ gem install nokogiri rake xmlparser
xmlrpc-c
xmlrpc-c must be built by downloading the latest svn release and compiling it. Read the README file included with
the package for additional information.
svn co http://xmlrpc-c.svn.sourceforge.net/svnroot/xmlrpc-c/super_stable xmlrpc-c
cd xmlrpc-c
./configure
make
make install
5.3.5 Mac OS X 10.4 / 10.5
The OpenNebula frontend can be installed on Mac OS X. Here are the dependencies to build it on 10.5 (Leopard).
Requisites:
xcode (you can install it from your Mac OS X DVD)
macports http://www.macports.org/
Getopt
This package is needed because the getopt that ships with Mac OS X is BSD-style.
$ sudo port install getopt
xmlrpc
$ sudo port install xmlrpc-c
scons
You can install scons using macports like this:
$ sudo port install scons
Unfortunately it will also compile python and lots of other packages. Another way of getting it is downloading the
standalone package from http://www.scons.org/download.php. Look for the scons-local packages and download the gzip tar
file. In this example I am using version 1.2.0 of the package.
$ mkdir -p ~/tmp/scons
$ cd ~/tmp/scons
$ tar xvf ~/Downloads/scons-local-1.2.0.tar
$ alias scons='python ~/tmp/scons/scons.py'
5.3.6 Gentoo
When installing libxmlrpc you have to specify that it will be compiled with thread support:
# USE="threads" emerge xmlrpc-c
5.3.7 Arch
The build dependencies for Arch are listed in this PKGBUILD.
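Assuming you have downloaded that PKGBUILD into an empty directory, the usual Arch workflow builds the package and pulls in its dependencies (a sketch):
$ makepkg -si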