
Future Generation Computer Systems 29 (2013) 46–60


Towards an optimized abstracted topology design in cloud environment


Rosy Aoun a,∗, Chinwe E. Abosi b, Elias A. Doumith c, Reza Nejabati b, Maurice Gagnaire c,
Dimitra Simeonidou b

a Department of Computer Science, Notre Dame University-Louaize, Zouk Mikael, Lebanon

b School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK

c Telecom ParisTech - LTCI - UMR 5141 CNRS, 46, rue Barrault, F-75634 Paris Cedex 13, France

Article info

Article history:
Received 25 May 2011
Received in revised form
25 March 2012
Accepted 29 March 2012
Available online 23 April 2012
Keywords:
Cloud computing
Topology abstraction
Resource virtualization
Service provisioning
Scheduled traffic
Distributed storage
Integer linear programming
Heuristic

Abstract
The rapid development and diversification of Cloud services occurs in a very competitive environment.
The number of actors providing Infrastructure as a Service (IaaS) remains limited, while the number
of PaaS (Platform as a Service) and SaaS (Software as a Service) providers is rapidly increasing. In
this context, the ubiquity and the variety of Cloud services impose a form of collaboration between
all these actors. For this reason, Cloud Service Providers (CSPs) rely on the availability of computing,
storage, and network resources generally provided by various administrative entities. This multi-tenant
environment raises multiple challenges such as confidentiality and scalability issues. To address these
challenges, resource (network, computing, and storage) abstraction is introduced. In this paper, we focus
on network resource abstraction algorithms used by a Network Service Provider (NSP) for sharing its
network topology without exposing details of its physical resources. In this context, we propose two
network resource abstraction techniques. First, we formulate the network topology abstraction problem
as a Mixed-Integer Linear Program (MILP). Solving this formulation provides an optimal abstracted
topology to the CSP in terms of availability of the underlying resources. Second, we propose an innovative
scalable algorithm called SILK-ALT inspired by the SImple LinK (SILK) algorithm previously proposed by
Abosi et al. We compare the MILP formulation, the SILK-ALT algorithm, and the SILK algorithm in terms
of the rejection ratio of users' requests at both the Cloud provider and the network provider levels. The
numerical results obtained using our proposed algorithms show that resource abstraction in general and
network topology abstraction in particular can effectively hide details of the underlying infrastructure.
Moreover, these algorithms represent a scalable and sufficiently accurate way of advertising the resources
in a multi-tenant environment.
© 2012 Elsevier B.V. All rights reserved.

1. Introduction
Cloud computing is a service paradigm that has emerged as a
result of distributed Information Technology (IT) resources across
the Internet. It provides access to heterogeneous IT resources, which
can either be physical or virtual, as services over the Internet [1].
Examples of provided resources include storage resources such
as those provided by Amazon S3 [2], computational resources
such as Amazon EC2 [3] and applications such as the Google App
Engine [4].

This work has been supported by the Bone Network of Excellence.

∗ Corresponding author. Tel.: +961 76334464.


E-mail addresses: raoun@ndu.edu.lb, rosyaoun@gmail.com (R. Aoun),
ceabos@essex.ac.uk (C.E. Abosi), elias.doumith@telecom-paristech.fr
(E.A. Doumith), rnejab@essex.ac.uk (R. Nejabati),
maurice.gagnaire@telecom-paristech.fr (M. Gagnaire), dsimeo@essex.ac.uk
(D. Simeonidou).
0167-739X/$ – see front matter © 2012 Elsevier B.V. All rights reserved.
doi:10.1016/j.future.2012.03.024

In the last few years, the number of providers offering IT
Cloud services has increased very rapidly [5]. Furthermore, these
Cloud Service Providers (CSPs) are supposed to provide the end-users a scalable distributed computing environment capable of
achieving Quality of Service (QoS) targets in terms of availability
and response time. In order to accommodate the rising demand for
Cloud computing resources and the heterogeneous requirements
of emerging applications, CSPs may have to connect to, share,
or rent additional resources from other service providers [6]. For
a given CSP, all other providers that offer access to additional
resources and services are referred to as Third-Party Service
Providers (TPSPs). For example, IBM [7] and Google are beginning
to join their research in order to achieve a higher profit from
Clouds [8]. In this case, IBM is considered as a TPSP for Google.
Through the aggregation and/or renting of resources, CSPs may
have access to an almost unlimited pool of computing, storage and
network facilities. The infrastructure owned separately by each
of these providers may have to be transparently interconnected
using high capacity and low latency connections. Wavelength
Division Multiplexing (WDM) transmission systems and the
recent advancement in multi-granular switching technology [9,10]
provide the bandwidth capacity and flexibility needed to achieve
these connections. Web-based Cloud services are offered at a large
scale, for instance nationwide or at the scale of a continent. By
their very nature, optical networks are the most cost-efficient way
to provide high bit rate and low latency interconnection links
between the remote tenants cooperating towards the same Cloud
service.
In this context, generic resource assignment rules have to be
specified, for instance by means of Service Level Agreement (SLA)
standardization. To meet required SLAs, a TPSP may have to share
resource-state information. Meanwhile, as they are competitors
on the same large scale market, TPSPs are in general reluctant
to expose detailed information about their resources due to
confidentiality and trust concerns. In addition, disseminating the
full view of the underlying infrastructure of a single or several NSPs
would generate a heavy volume of signaling traffic. To address
these challenges, resource abstraction is proposed.
Resource abstraction facilitates Cloud computing by dealing
with resource heterogeneity, scalability, and confidentiality issues
that impact, in general, the sharing and renting of distributed
resources. Resource abstraction allows TPSPs to hide the details of
their resources by summarizing physical resource information by
means of reduced approximations. Resource abstraction provides
a way for TPSPs to represent, advertise, and rent out their
available resources in a uniform and scalable manner. The resulting
abstract representation is chosen such that enough information is
provided to client CSPs, enabling them to provide services over the
abstracted infrastructure at a targeted QoS.
In a Cloud computing context, resource abstraction refers to
network and IT resources abstraction. In this article, we focus on
network abstraction. The Network Provider (NP) that acts as a
TPSP computes an abstracted topology of its network domain. This
abstraction transforms the physical topology into a reduced graph
that can be offered as an Infrastructure as a Service (IaaS) to other
CSPs. Thus, in the remainder of this article, we refer to the NP as a
Network Service Provider (NSP) that serves as TPSP to the benefit
of the CSP.
In this paper, we propose an original exact Mixed-Integer Linear
Programming (MILP) formulation to enable us to determine the
best abstracted network topology to be offered as a service to
the CSP. Indeed, as explained below, an abstracted topology
may be acceptable for the CSP in terms of physical connectivity
but unsuited due to bandwidth expectations and restrictions. We
compare, for similar scenarios, the abstracted topology obtained
by means of the proposed MILP formulation with those obtained
by means of two approximate algorithms. The first of these two
algorithms is proposed in [11], while the second one is an improved
version of this same algorithm and is proposed in this paper.
Three different types of requests generated by the end-users are
considered: computing requests, storage requests, and point-to-point data transfer requests. We assume that IT resources that are
interconnected via the optical network are abstracted using the
single-aggregate scheme [11]. Finally, we propose a provisioning
algorithm based on the Simulated Annealing (SA) meta-heuristic
to satisfy users' requests over the adopted abstract infrastructure.
A request may be accepted in terms of requested connectivity and
bandwidth on the abstract topology but be subject to rejection
by the NSP at its instant of activation, if the bandwidth capacity
reserved to this request by the CSP over an abstracted link appears
to be unavailable at the physical path serving this abstracted link.
We call such a service disruption a crank-back. In order
to compare the performance of the three abstraction approaches,
we evaluate the number of accepted requests and their crank-back
ratio for different traffic loads. We define the crank-back ratio as


the ratio of the number of rejected requests by the NSP (or the
crank-back number) to the number of accepted requests by the
CSP.
The remainder of this paper is structured as follows. In
Section 2, we provide further background on the Cloud computing
service delivery paradigm and discuss some of the work done in
the literature on resource abstraction. In Section 3, we detail the
interaction between the CSP and the NSP (the NSP acting as a
TPSP). We also specify the outcomes expected from the proposed
abstraction algorithms. Section 4 introduces the formal model
of the Cloud environment, including the infrastructure model
and the traffic model. Section 5 introduces the two approximate
abstraction algorithms, namely SILK and SILK-ALT, in addition to
the exact MILP formulation. For completeness, Section 6 presents
the IT abstraction model considered in this paper. In Section 7, we
explain the two resource provisioning algorithms used by the CSP
and the NSP, respectively. The simulations conducted in order to
evaluate these algorithms consider a 6-node bottleneck topology
and are presented in Section 8. Finally, our conclusions are drawn
in Section 9.
2. Background
2.1. Cloud computing
Cloud computing is a rapidly evolving Internet service delivery
paradigm. It allows providers to offer Software as a Service (SaaS),
databases and Virtual Machines (VM) for building and running
custom applications known as Platform as a Service (PaaS), and
network resources known as Infrastructure as a Service, (IaaS) [12].
All these resources are shared among multiple end-users.
SaaS provides access to remote computer applications via the
Internet, rather than installing and running the application on
users' own computers. SaaS offerings currently include online project management tools from Clarizen [13], as well as customer relationship management and human resource applications available from
Salesforce [14]. A number of Cloud office applications are available
as desktop tools, including word processing, database building,
spreadsheet creation, and presentations, as is the case with
Google Docs [4].
PaaS delivers a computing platform and solution stack as a
service, often consuming Cloud infrastructure and maintaining
Cloud applications. It facilitates the deployment of applications
without the cost and complexity of buying and managing
the underlying hardware. At the PaaS level, not only is an
execution environment provided, but also a set of infrastructure
services (given by IaaS providers). A platform should be able
to provide an environment comprising the End-to-End (E2E)
life cycle of developing, testing, deploying, and hosting Web
applications. Amazon was the first vendor to provide PaaS services
by launching Amazon Web Services (AWS) [15]. The AWS
services rely on Amazon infrastructure services. Another major
PaaS vendor is Google that has launched a service called App
Engine where developers can run Web applications on Google's
infrastructure [4]. Microsoft also developed a platform called
Azure as an online service offering a flexible, familiar environment
for developers to create Cloud applications and services [16].
In addition to SaaS and PaaS, Cloud computing also includes
the development of IaaS where the underlying infrastructure,
such as computer processing capacity, network bandwidth, and
equipment, is offered to end-users as well as to other CSPs
as a service. Rather than buying their own servers, data-center
spaces, or network equipment, end-users access these resources
as a fully outsourced service. IaaS can be achieved thanks to
the abstraction and virtualization of resources, where a logical
infrastructure, which may be a slice of the physical infrastructure,
is made available to other providers for use. In this case, the
provider keeps control of its infrastructure but gives well defined
interfaces for other providers or users to access it. The GEYSERS
project [17] proposes a novel infrastructure management by
providing network and IT services. Commercially, Amazon offers
a Web service called EC2 where users can purchase computing
power online on the basis of the number of required processors,
storage capacity, and data transfer facilities for each instance
of their application [3]. Similarly, Amazon S3 [2] provides storage
services to end-users.
2.2. Network topology abstraction
In the literature, the terms topology abstraction and topology
aggregation have been used to refer to the process of summarizing
domain information. In our work, we refer to it as Topology
Abstraction (TA). TA has been widely studied in the literature
[18–22]. The first objective of these studies is the dissemination
of topological information to customers seeking a dynamic
bandwidth allocation service. This is achieved by creating an
abstracted topology model that represents the physical topology
offering the required connectivity. This abstract topology must
remain compact for scalability and confidentiality concerns. An
alternative method to deal with confidentiality is through an
Authorization, Authentication and Accounting (AAA) framework,
which is implemented within a multi-domain broker [23]. Many
strategies exist for implementing network resource abstraction.
Each of these strategies consists of two steps: structural abstraction
and parametric abstraction. Structural abstraction converts the
physical structure of the topology into an abstract graph.
Parametric abstraction computes the QoS parameters of each
abstract link that best represents the physical source–destination
path-pairs of the physical topology. Several methods of abstraction
have been proposed [24], such as the simple node approach, the
star approach, and the full-mesh approach [20,25].

Simple node abstraction is the simplest of all abstraction
schemes. In this scheme, the structural topology is compacted
to a simple node with a number of abstract links equal to the
number of border nodes of the physical topology. This type of
abstraction is the least accurate type of abstraction with the
minimum information about the topology made visible [20].
Star abstraction is an extension of the simple node approach.
It defines a pseudo center node in the abstracted topology and
an abstract link connecting the center node to each border node
[26].
Full-mesh is the most accurate type of abstraction. In full-mesh abstraction, the physical topology is reduced to a full-mesh graph that connects all the border nodes of the physical
topology. It is adequate for efficient routing and resource
allocation. The downside is that the number of advertised links
increases by the square of the number of border nodes [20,27].
Spanning Tree converts the full-mesh topology to a spanning
tree representation [28]. However, if there is more than one
link parameter, for example, bandwidth and delay, then two
spanning trees are required: one for bandwidth and the other
for delay [29]. Thus, k spanning trees must be created and
advertised if a link is represented with k parameters.
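To make the scale of these schemes concrete, the advertised-link counts implied by the descriptions above can be sketched as follows. This is a hypothetical helper of our own, not code from the paper; the function and scheme names are illustrative:

```python
# Hypothetical sketch: number of abstract links advertised by each
# structural abstraction scheme, given n_border border nodes. The
# formulas follow the descriptions above (simple node, star,
# full-mesh, spanning tree with k link parameters).

def advertised_links(n_border: int, scheme: str, k_params: int = 1) -> int:
    """Return the number of abstract links a scheme advertises."""
    if scheme == "simple-node":
        return n_border                    # one link per border node
    if scheme == "star":
        return n_border                    # pseudo center node to each border node
    if scheme == "full-mesh":
        return n_border * (n_border - 1)   # ordered border-node pairs
    if scheme == "spanning-tree":
        return k_params * (n_border - 1)   # one (n_border - 1)-edge tree per parameter
    raise ValueError(f"unknown scheme: {scheme}")

for s in ("simple-node", "star", "full-mesh", "spanning-tree"):
    print(s, advertised_links(6, s, k_params=2))
```

The full-mesh count growing as the square of the number of border nodes is exactly the scalability downside noted above.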
Other types of abstractions that have been investigated in
literature include hybrid-aggregate [30], source-oriented [31],
tree-based [32] and partial [29]. A complete analysis of TA
techniques can be found in [18].
So far, the application of TA in literature has focused on the
exchange of information for hierarchical inter-domain routing
[20,21,29,33–35], for Managed VPN Services (MVPNS) [19], and to
facilitate the delivery of E2E future Internet services composed of
network and IT resources [11,36].
In hierarchical routing, large heterogeneous internetworks are
divided into routing domains, also known as peer groups in ATM
networks [29,33], autonomous systems in IP networks [34,35]
and domains in WDM networks [20,21]. Each routing domain
is a collection of neighboring network nodes sharing the same
routing information database for route computation within
the domain (intra-domain). Between domains (inter-domain),
abstracted topological information about neighboring domains are
shared in order to (i) reduce the overall complexity in computing
network paths across domains [29,33], (ii) hide internal state from
outside users for security and confidentiality reasons [24], and
(iii) exchange reachability information needed to create E2E
paths [34]. In inter-domain routing, the abstract topology does
not include any information about the inner topology of each
domain. However, each abstract topology should be accurate
enough to construct a graph of inter-domain connectivity that
results in efficient E2E routing. These three requirements of
complexity reduction, confidentiality, and reachability in inter-domain routing can be seen as the first concrete justification of
network abstraction.
The authors in [19,24] propose topology abstraction as a
means to provide an MVPNS [19]. An MVPNS is an on-demand,
dynamic Virtual Private Network (VPN) service provided by a
service provider over an IP/MPLS core network [19,24]. A VPN
is a private secure network that interconnects multiple locations
belonging to the same organization for data transfer [19]. The
MVPNS concept was developed to address the need of emerging
bandwidth intensive applications such as high definition video and
mass on-line interactive gaming that require a significant amount of
bandwidth for short periods of time [19].
Future Internet is defined as a network that interconnects distributed IT resources. It addresses a wide range of users and
supports applications which require networked-IT and networked-media services. In this context, the authors in [36] propose the use
of network abstraction as a mechanism for network and IT resource
providers to confidently share structurally abstracted versions of
their infrastructure to facilitate the collaborative use of these resources. The work in this paper has been inspired by the work
in [36,11]. In their work, the network, computing, and storage entities are seen as heterogeneous domains. Thus, the abstract domain (network, storage, and computing power) information from
multiple providers is communicated to a service plane element
that houses third party infrastructure information and considers
high-end Internet application requests, such as Grid and Ultra High
Definition media requests. At the service plane level, the abstract
resource information is used to compose E2E services which may
span multiple heterogeneous domains. As part of their work, they
propose the SILK algorithm for network abstraction.
In this work, we address network abstraction in the Cloud
computing environment. This paper proposes a novel algorithm as
an alternative to the SILK algorithm that aims to improve on the
performance of the SILK algorithm. In addition, unlike the work
in [36,11], an optimal solution is proposed and formulated using
an MILP model.
In a Cloud computing environment, an NSP manages a routing
domain onto which multiple computing and storage service
providers are connected, each identified as a different neighboring
domain. However, the abstract information in the case of a Cloud
computing network is not limited to the connectivity between the
various neighboring domains but should also provide an evaluation
of the offered bandwidth between the computing and storage
clusters.


2.3. IT resource abstraction


Cloud application requests are composed of IT resources
(storage and computing power). Information about the availability
and capacity of IT resources is imperative for the request
admission procedure. For these reasons, aggregating or abstracting
IT resource information is essential not just for confidentiality and
trust concerns, but also for scalability and homogeneity purposes.
The abstraction of these resources must possess sufficient levels of
accuracy.
In this matter, the work done in the literature deals mainly with
aggregation of network information and its impact on the routing
process. In comparison, the advancement in IT resource abstraction
specification is still moderate. For large computing domains, this
concept was studied in the context of Grid environment [37]. In
this same context, the authors in [11] propose three IT resource
abstraction algorithms, namely the Detailed, the Single-aggregate,
and the Multiple-aggregate schemes. These three algorithms
reflect three levels of details, as follows:

The Detailed scheme consists of a detailed description of the
resources (clusters, number of processors, number of storage
devices, etc.) that exist in each computing/storage node. In this
case, no aggregation is required.
The Single-aggregate scheme provides an aggregated representation of all the computing or storage clusters connected per
resource node. Under this scheme, the abstracted capacity of a
resource node is computed by summing up the total capacities
of the clusters attached to the considered node. The capacity of
the computing (resp. storage) cluster associated with a node is
itself the total capacity of its processors (resp. storage
devices).
The Multiple-aggregate scheme is seen as an intermediate
level of abstraction between the Detailed and the Single-aggregate schemes. It consists of an aggregate representation
of the node per cluster entity. In other terms, details on the
number of clusters per computing node and their respective
capacities are provided to the CSP.
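The three levels of detail can be sketched as follows. The node/cluster data structure, example capacities, and function names are illustrative assumptions of ours, not the authors' implementation:

```python
# Hypothetical sketch of the three IT abstraction levels described
# above. A resource node is modeled as a list of clusters, each
# cluster as a list of per-device capacities (illustrative units).

clusters = {  # illustrative data, not from the paper
    "node1": [[4, 8, 4], [16, 16]],  # two clusters with per-device capacities
}

def detailed(node):
    """Detailed scheme: full per-device description, no aggregation."""
    return clusters[node]

def multiple_aggregate(node):
    """Multiple-aggregate: one total capacity per cluster of the node."""
    return [sum(c) for c in clusters[node]]

def single_aggregate(node):
    """Single-aggregate: one total capacity for the whole node."""
    return sum(sum(c) for c in clusters[node])

print(detailed("node1"))            # [[4, 8, 4], [16, 16]]
print(multiple_aggregate("node1"))  # [16, 32]
print(single_aggregate("node1"))    # 48
```

Each scheme trades accuracy for compactness: the single-aggregate value hides how the capacity is partitioned across clusters, which is exactly what makes it the most scalable of the three.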
3. Providers' interaction model
3.1. General providers' interaction
As previously discussed, CSPs may depend on multiple TPSPs
to create global-reach services. Without loss of generality, in this
paper, we consider that the CSP is interfaced with three service
providers, namely a single Network Service Provider (NSP), a single
Storage Service Provider (SSP), and a single Computing Service
Provider (CoSP). In the remainder of this paper, we refer to the SSP
and the CoSP as the IT Service Providers (ITSPs). Fig. 1 depicts the
sequence diagram of the interaction between ITSPs and the CSP.
End-users generate a set of Scheduled Requests (SRs) having
stringent time constraints. The CSP collects user requests periodically during successive time periods of duration T (see Fig. 1).
In the following, we refer to this time period as the scheduling period. These requests are supposed to be executed during the next
scheduling period. At the beginning of each period T, the CSP
asks the TPSPs for a view of their available resources. In response,
the NSP provides only an abstracted topology of its own network.
Similarly, ITSPs provide an abstracted view of their available resources (see Fig. 1). The CSP then orchestrates the different
basic services provided by these TPSPs in order to satisfy the users'
requests received during the previous scheduling period.
At the beginning of each scheduling period and based on the
information given by TPSPs and user requirements, the CSP runs a
resource allocation algorithm to determine, for each request, the
required IT resources and the routes to be used for Input/Output (I/O) data transfer (see Fig. 1).

Fig. 1. Global interaction between providers.

We assume that, in practice, the duration of the provisioning steps in Fig. 1 is negligible compared
to the value of the scheduling period T. Typically, we consider
that the scheduling period is 24 h, whereas the provisioning
period lasts at most a few minutes.
The provisioning of user requests is repeated periodically every
T hours. The CSP aims to satisfy the highest number of end-users. In this paper, we assume that the CSP uses the Simulated
Annealing (SA) meta-heuristic for provisioning the requests. Since
the abstracted capacities collected from TPSPs are not guaranteed,
some of the requests may be selected/pre-accepted in advance
for execution by the CSP but rejected afterwards (at their instant
of activation) by TPSPs due to lack of resources. At the instant
of activation of a pre-accepted request (see Fig. 1), the CSP
communicates the remote execution node and the chosen routes
to be reserved by the TPSPs. At this step, ITSPs check the availability
of the required resources. Simultaneously, the NSP checks that
enough bandwidth exists on the physical topology for routing the
request. If both reservations succeed, the request is definitively
accepted and executed. In other words, if all TPSPs send a positive
acknowledgment, the CSP then sends to the NSP the data for
routing and the resources are effectively consumed. Otherwise,
the request is rejected. It is worth noting that in parallel to the
provisioning and execution of a set of requests, the CSP collects
new requests that are supposed to be executed during the next
scheduling period. In other words, step 1 of the next scheduling
period happens in parallel with steps 2, 3, 4 of the scheduling
period at hand. The aim of step 5 is to emphasize the fact that
for each new scheduling period, the steps are repeated from step 2
with a new set of requests and a new scheduling period.

Fig. 2. Detailed interaction between CSP and NSP.

Fig. 3. Example of routing a request on the physical topology by the NSP.

The execution of the collected requests during this next period continues at
step 2 but with a new timing window. At this stage, after the
scheduling period T has expired, the CSP asks the TPSPs to provide
an updated view of their resource state information (see Fig. 1).
This update is repeated periodically every T hours.
We denote by R_a the number of accepted requests at the CSP
level and by R′_a the number of requests that are effectively accepted
by the TPSPs. We define the crank-back number ∆ as the number
of accepted requests at the CSP level that are rejected afterwards
at the TPSPs level. This number is evaluated as follows:

∆ = R_a − R′_a. (1)

Moreover, we define the crank-back ratio δ as the ratio of the
crank-back number to the number of accepted requests at the CSP
level:

δ = ∆ / R_a = (R_a − R′_a) / R_a. (2)
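As a tiny illustration of the crank-back number (Eq. (1)) and crank-back ratio (Eq. (2)) defined above, the sketch below uses `r_a_csp` for the number of requests accepted at the CSP level and `r_a_tpsp` for the number effectively accepted by the TPSPs; both variable names are ours:

```python
# Sketch of Eqs. (1) and (2): crank-back number and crank-back ratio.

def crank_back_number(r_a_csp: int, r_a_tpsp: int) -> int:
    """Eq. (1): CSP-accepted requests later rejected by the TPSPs."""
    return r_a_csp - r_a_tpsp

def crank_back_ratio(r_a_csp: int, r_a_tpsp: int) -> float:
    """Eq. (2): crank-back number over the CSP-accepted request count."""
    return crank_back_number(r_a_csp, r_a_tpsp) / r_a_csp

# E.g. 200 requests pre-accepted by the CSP, 180 confirmed by the TPSPs:
print(crank_back_number(200, 180))  # 20
print(crank_back_ratio(200, 180))   # 0.1
```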

3.2. Focus on CSP–NSP interaction


Fig. 2 presents a detailed view of the interaction between the
CSP and the NSP. At each update, the NSP runs an algorithm
that transforms its physical network graph G(V, E) into a sub-graph G′(V′, E′) (see Fig. 2). We recall that the abstraction
algorithm transforms a physical path between two border nodes of

the network into one abstracted (also called virtual or logical) link.
The NSP has full knowledge of the mapping between the abstracted
links and their inherent physical paths. The NSP provides the CSP
with this abstracted graph. The CSP, at its end, does not know the
mapping of the two topologies, since one of the aims of topology
abstraction is to hide, for confidentiality reasons, the details of the
infrastructure. The CSP, according to the information given by the
NSP, selects (pre-accepts) a set of requests and chooses for them a
set of abstracted links. At each request start time, the CSP sends
a signal to the NSP in order to reserve the required bandwidth
over the chosen abstracted links (see Fig. 2). The NSP uses
the mapping obtained by its abstraction algorithm and verifies that
enough bandwidth exists on the physical paths that correspond to
the selected abstracted links. If the previous paths have enough
resources available, the NSP sends a positive acknowledgment to
the CSP and the routing process begins (see Fig. 2). Otherwise,
the NSP tries to find an alternative path using a set of pre-computed
K-shortest paths between the source node of the request and the
chosen destination node given by the CSP (see Fig. 2). If an
available path is found, the NSP sends a positive acknowledgment
to the CSP and the data transfer can be started. If no alternative
path is available during the active time of the request, the NSP
sends a negative acknowledgment to the CSP and the request is
rejected. It should be highlighted that the effective physical route
of the request is invisible to the CSP.
For clarity reasons, we provide a small example of routing
a request between a source–destination pair chosen by the CSP
according to its knowledge of the abstracted resources. Let us
suppose we have a 6-node network topology with 3 border nodes
(A, B, C) and 16 unidirectional links, as depicted in Fig. 3 (each
undirected link in the figure corresponds to a pair of contra-directional optical fiber links). For the sake of simplicity, we
consider all network links have 10 Gbps capacity. Suppose we have
a request that needs to be routed from node A to node C and needs
to reserve 1 Gbps bandwidth. At the CSP level, the provisioning
algorithm has chosen abstracted links (A, B) and (B, C) for data
transfer. The CSP communicates this information to the NSP that
verifies the mapping table in order to check the paths inherent to
the former links. In our example, abstracted link (A, B) uses physical
links (A, D) and (D, B); and abstracted link (B, C) uses physical
links (B, E) and (E, C). As we can see from the figure, the NSP does
not find enough bandwidth on the physical paths inherent to each
abstracted link since links (D, B) and (E, C) do not have enough
bandwidth. Thus, the NSP tries to find an alternative path between
source A and destination C. The NSP chooses a path from the set of
pre-computed K-shortest paths starting with the shortest one. In
our example, we can see that only two paths are available: path k1 ,
using links (A, F) and (F, C); and path k2 , using links (A, D), (D, F), and
(F, C). The shortest path is calculated according to the number of
hops used between the source and the destination. Consequently,
the NSP will choose alternative shortest path k1 for routing the
request. In summary, the request that was supposed to use links
(A, D), (D, B), (B, E), and (E, C), will effectively use links (A, F) and
(F, C).
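The admission logic of this example can be sketched as follows. This is a simplified stand-in for the NSP's procedure: the residual capacities, mapping table, and pre-computed candidate paths mirror the Fig. 3 example, but all data structures and function names are our own assumptions:

```python
# Hypothetical sketch of the NSP's admission check described above:
# first verify the physical path mapped to each chosen abstracted
# link; otherwise fall back to pre-computed K-shortest paths.

residual = {  # residual bandwidth (Gbps) per unidirectional physical link
    ("A", "D"): 10, ("D", "B"): 0.5, ("B", "E"): 10, ("E", "C"): 0.5,
    ("A", "F"): 10, ("F", "C"): 10, ("D", "F"): 10,
}
mapping = {  # abstracted link -> underlying physical path
    ("A", "B"): [("A", "D"), ("D", "B")],
    ("B", "C"): [("B", "E"), ("E", "C")],
}
k_shortest = {  # pre-computed alternatives, fewest hops first
    ("A", "C"): [[("A", "F"), ("F", "C")],
                 [("A", "D"), ("D", "F"), ("F", "C")]],
}

def fits(path, demand):
    """True if every physical link on the path has enough residual bandwidth."""
    return all(residual[link] >= demand for link in path)

def nsp_admit(abstract_links, src, dst, demand):
    """Return the physical route used, or None (negative acknowledgment)."""
    mapped = [link for al in abstract_links for link in mapping[al]]
    if fits(mapped, demand):
        return mapped
    for candidate in k_shortest[(src, dst)]:  # try alternative paths
        if fits(candidate, demand):
            return candidate
    return None

route = nsp_admit([("A", "B"), ("B", "C")], "A", "C", demand=1)
print(route)  # [('A', 'F'), ('F', 'C')]
```

As in the worked example, the mapped path fails on links (D, B) and (E, C), so the NSP falls back to the shortest available alternative, A–F–C; the CSP never sees which physical route is actually used.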
4. Cloud environment
We present a Cloud environment that includes Cloud traffic
requiring real-time access to Cloud computing infrastructure.
The Cloud computing infrastructure is composed of the optical
network, data storage devices, processing devices, and servers that
provide on-demand resources for end-users.
4.1. Infrastructure model
We consider a Cloud infrastructure domain represented by a network graph G(V, E) in which V = {vn} is the set of network nodes and E = {e_(u,v) | (u, v) ∈ V²} is the set of links, with |V| = N and |E| = L. A link e_(u,v) is characterized by a capacity B_(u,v) in bits per second. In a first step, the graph G(V, E) is transformed by the proposed abstraction into a sub-graph G′(V′, E′) containing the border (or edge) nodes V′ = {vb}, V′ ⊆ V, with |V′| = N′. These border nodes are interconnected via a fully-meshed set of virtual (abstract) links E′ = {e′_(s,d) | (s, d) ∈ V′²} with |E′| = L′ = N′ · (N′ − 1), where e′_(s,d) connects two border nodes vs and vd within the given domain. This first step represents the structural abstraction of the physical topology. In a second step, the parametric abstraction is conducted by computing the estimated bandwidth for each of these virtual links. It is an aggregation of the bandwidth that may exist on several paths between the border nodes of this virtual link, according to the topology abstraction schemes. A logical link e′_(s,d) in the abstracted topology is then characterized by a capacity B_(s,d) in bits per second. We neglect the impact of the transit delay of I/O data exchanged between end-users and the remote nodes. This assumption is justified by the fact that the transmission delays of I/O data flows are much larger than the round-trip time of the network.
As for IT resource abstraction, the single aggregation scheme is used [11]. We classify border nodes of the physical topology (i.e., nodes of the abstracted topology) into four non-disjoint subsets: a subset C ⊆ V′ of computing nodes, a subset S ⊆ V′ of storage nodes, a subset of content delivery/server nodes, and the set of access nodes at which requests are generated.

- A computing node or cluster vn ∈ C is a heterogeneous system made up of processors of different architectures. The cluster components are normally connected through a high-speed Local Area Network (LAN). In our model, each computing node is characterized by a processing capacity Pn per time unit that can be shared by multiple requests.
- A storage node vn ∈ S is a set of hard disks localized in the same bay. Each storage node is characterized by a storage capacity Sn that can be shared by multiple requests.
- A content delivery/server node is a repository that provides access to various types of data such as documents, videos, figures, etc. The data is already stored on these nodes and only needs to be retrieved and routed through the network. An example of such a node is an origin server¹ for Amazon CloudFront [38].
- The final subset corresponds to access nodes that are capable of generating end-user requests. We note that all border nodes can be used as access points for user requests.

1 An origin server is the location of the definitive version of the data.


4.2. Traffic model


A Scheduled Request (SR) [39] has both its setup and teardown
times known in advance. The routing of SRs in an optical network
may be viewed as an intermediate between static network
planning and dynamic traffic engineering. This type of traffic has
been extensively studied in the literature, especially for network dimensioning [40,41] and resource provisioning in Grids [42–47].
In this paper, we have limited our investigation to three types of
requests:
1. Computing request: This type of request considers computing
and bandwidth services. The destination node of the request is
not known in advance and is yet to be defined. The end-user
sends the input data needed for computation. This data is routed
on the abstracted topology via the CSP and on the physical
topology via the NSP, from the source node of the request to
the chosen remote computing node. The output data resulting
from the computing process is sent back to the end-user. The
execution is considered to be done in a streaming process, and the reservations of I/O network resources and computing resources are simultaneous.
2. Storage request: This type of request uses storage and bandwidth services. Similar to the previous type, the destination
node of a storage request is not known in advance and is yet
to be defined. No output data are generated in this case. As opposed to the previous type, the data storage duration is different
from the input data transmission delay.
3. P2P data transfer: This type of request is a simple point-to-point data transfer and uses only bandwidth services. In this case, the single destination node is known in advance. This type of request could refer to the retrieval of stored data by an end-user.
We consider a set R of R generated requests, where each request r ∈ R is represented by the tuple (α_r, ω_r, τ_r, τ_r^s, b_r^i, b_r^o, p_r, s_r, a_r, D_r), where:

- For a computing request, we assume data transfer is carried out simultaneously while processing the request itself (streaming). We introduce parameters α_r and ω_r, which respectively represent the activation and teardown times of r. They limit the period of time during which computing resources, as well as network resources used for I/O data transfer, are reserved. Let τ_r = ω_r − α_r be this period. Thus, τ_r also corresponds to the execution time of the computing request.
- For a storage request, data has to be stored on one or several remote nodes for a period τ_r^s. For all the other types of requests, we have τ_r^s = 0.
- For a computing request, b_r^i and b_r^o represent the bandwidth required for input and output data transfer towards and from the processing node, respectively. For all the other types of requests, since data transfer is carried out in only one direction, we have b_r^o = 0. In these cases, b_r^i represents the bandwidth required for data transfer between the initial location of the data and its final destination(s).
- p_r represents the processing power in MIPS required by a computing request. For all other types of requests, we have p_r = 0.
- s_r represents the storage capacity in TB required by a storage request. For all other types of requests, we have s_r = 0. Disk space is not an issue to be addressed for P2P requests since we assume that the block of data sent by these requests is exploited on-the-fly at the destination during its transmission.
- a_r represents the access node of r where the input data is located. For a computing request, this same node also corresponds to the destination of the computation output result.
- D_r = {d_r} is the set of all destination nodes of r. For computing and storage requests, this set is empty since the destination remote nodes are set by the CSP, whereas for P2P requests, this set contains a single element.
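The request tuple above maps naturally onto a record type. The sketch below uses descriptive field names of our own choosing in place of the paper's symbols; the sample values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScheduledRequest:
    """One scheduled request, mirroring the tuple of Section 4.2."""
    activation: float        # setup time of the request
    teardown: float          # teardown time; resources are reserved in between
    storage_duration: float  # non-zero only for storage requests
    bw_in: float             # bandwidth for input data transfer
    bw_out: float            # non-zero only for computing requests (output data)
    processing: float        # MIPS required; non-zero only for computing requests
    storage: float           # TB required; non-zero only for storage requests
    access_node: str         # node where the input data is located
    destinations: List[str] = field(default_factory=list)  # empty unless P2P

    @property
    def duration(self) -> float:
        """Active period of the request (its execution time for computing)."""
        return self.teardown - self.activation

# A P2P transfer: single known destination, bandwidth service only.
p2p = ScheduledRequest(0.0, 10.0, 0.0, 1.0, 0.0, 0.0, 0.0, "A", ["C"])
assert p2p.duration == 10.0 and p2p.destinations == ["C"]
```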

5. Network topology abstraction


The abstract state information represents the virtual view of the
underlying network topology, which is used by the CSP to compute
E2E path connections. The algorithms presented in this article
summarize resource state information into abstracted forms. We
use the undirected full-mesh topology representation due to its high accuracy, which is the main goal of topology abstraction algorithms.

Notations:

- (u, v) denotes a pair of any two nodes of the physical topology, (vu, vv) ∈ V².
- (s, d) denotes a source–destination pair of two border nodes of the physical topology, and thus a source–destination pair of any two nodes of the abstracted topology, (vs, vd) ∈ V′².
- For the two algorithms SILK and SILK-ALT, we denote by p^k_(s,d) the kth path (k ∈ [1 … K]) from a set P_(s,d) of K-Shortest Paths (KSP) pre-computed offline between a source–destination pair (vs, vd).
- For the MILP formulation, we denote by p^k_(s,d) the kth path (k ∈ [1 … Pmax]) from a set P_(s,d) defined by the MILP formulation between a source–destination pair (vs, vd). These paths may or may not correspond to the shortest paths between this source–destination pair. It is to be noted that the path with the highest abstracted bandwidth (for a given source–destination pair (vs, vd)) is the one stored in the mapping table of the NSP as the inherent physical path of the abstracted link (vs, vd).

5.1. SImple LinK (SILK)

In this section, we recall the SImple LinK (SILK) mechanism proposed in [36]. The idea behind the SILK algorithm is to assign to the logical link of each source–destination pair on the abstract graph G′(V′, E′) the best bandwidth capacity among all paths between that source–destination pair on the physical topology G(V, E). We first start by pre-computing offline the set of KSP, P_(s,d) = {p¹_(s,d), p²_(s,d), …, p^K_(s,d)}, where p^k_(s,d) = {e^(1,(s,d),k)_(u,v), e^(2,(s,d),k)_(u,v), …, e^(L^k_(s,d),(s,d),k)_(u,v)} and L^k_(s,d) = |p^k_(s,d)| is the number of links in the path. The steps of SILK are as follows for each source–destination pair (vs, vd) of border nodes on the physical topology:

1. Calculate the path bandwidth capacity B^k_(s,d) of all KSP by computing the minimum link bandwidth capacity B_(u,v) of all links in the path (Algorithm 5.1: Line 1.1.1.1.1).
2. Select the path with the highest bandwidth capacity B_(s,d) (Algorithm 5.1: Line 1.1.1.2).

On termination, the selected path bandwidth capacity B_(s,d) is used to represent the bandwidth of the logical link e′_(s,d) in the abstract topology.

Algorithm 5.1 SILK algorithm
Input:  Physical topology G(V, E),
        Set of border nodes V′ ⊆ V,
        Set of all KSP between all border node pairs
        (* i.e. the set of KSP between (vs, vd): P_(s,d) = {p^k_(s,d)} where 1 ≤ k ≤ K *)
Output: Abstracted topology G′(V′, E′)
Procedure AbstractTopology (G)
Begin
1           for s = 1 to N′ do
1.1           for d = 1 to N′ do
1.1.1           if s ≠ d then
1.1.1.1           for k = 1 to K do
1.1.1.1.1           B^k_(s,d) ← min(B_(u,v)) | e_(u,v) ∈ p^k_(s,d)
                  endfor
1.1.1.2           B_(s,d) ← max(B^k_(s,d)) | p^k_(s,d) ∈ P_(s,d)
                endif
              endfor
            endfor
2           return G′(V′, E′)
End.
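Algorithm 5.1 transcribes directly into Python. The sketch below assumes the K-shortest paths are supplied as lists of links, as the algorithm does; the sample capacities and paths are hypothetical.

```python
def silk(border_nodes, ksp, capacity):
    """SILK sketch: for every border pair (s, d), keep the best bottleneck
    bandwidth over its K pre-computed shortest paths.

    ksp[(s, d)]      -> list of paths, each path a list of links (u, v)
    capacity[(u, v)] -> physical link bandwidth B_(u,v)
    Returns the abstracted bandwidth B_(s,d) of each logical link.
    """
    abstracted = {}
    for s in border_nodes:
        for d in border_nodes:
            if s == d:
                continue
            # Line 1.1.1.1.1: bottleneck bandwidth of each path;
            # Line 1.1.1.2: keep the maximum over the K paths.
            abstracted[(s, d)] = max(
                min(capacity[link] for link in path) for path in ksp[(s, d)])
    return abstracted

# Hypothetical 4-node example: two paths per direction between A and B.
caps = {("A", "D"): 10, ("D", "B"): 2, ("D", "A"): 10, ("B", "D"): 2,
        ("A", "F"): 5, ("F", "B"): 5, ("F", "A"): 5, ("B", "F"): 5}
paths = {("A", "B"): [[("A", "D"), ("D", "B")], [("A", "F"), ("F", "B")]],
         ("B", "A"): [[("B", "D"), ("D", "A")], [("B", "F"), ("F", "A")]]}
abstracted = silk(["A", "B"], paths, caps)  # {('A', 'B'): 5, ('B', 'A'): 5}
```

The shortest path (through D) has a 2 Gbps bottleneck, so SILK advertises the wider 5 Gbps path instead.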

5.2. SImple LinK ALTernative (SILK-ALT)

SILK-ALT is introduced as an improvement to the SILK algorithm. In this algorithm, a weight is introduced on all links in the physical topology. The weight is a function of the number of times each source–destination pair uses each link e_(u,v) of the physical topology.

5.2.1. Calculation of link weights

The first part of the algorithm (Algorithm 5.2: Step 1) calculates the weight of each link in the topology. It takes as input the physical topology and the KSP for each source–destination pair. Each time a link e_(u,v) is traversed as part of a path for a source–destination pair (Algorithm 5.2: Line 1.1.1.1.1), its weight is increased by a factor w^((s,d),k)_(u,v) that depends on which kth path it belongs to. This factor is computed as (K − (k − 1))/K (Algorithm 5.2: Line 1.1.1.1.1.1). Thus, the kth shortest path has a higher weight than the (k + 1)th path, and so on. To ensure that the total bandwidth of each logical link does not exceed the bandwidth of each physical link, we take the minimum of all link weights and calculate the final weight of each link (Algorithm 5.2: Line 3.1).

5.2.2. Calculation of logical link bandwidths

The abstracted link bandwidth B_(s,d) of the abstract topology is calculated according to the following steps:

1. For all links in the physical topology, we calculate the weighted bandwidth B′_(u,v) (Algorithm 5.2: Line 4.1) by multiplying the physical link bandwidths by the link weights calculated in Section 5.2.1.
2. For all border node pairs in the physical topology, find the shortest path p̃_(s,d) (Algorithm 5.2: Line 5.1.1.1), where p̃_(s,d) = {e_(u1,v1), e_(u2,v2), …, e_(uL,vL)} and L = |p̃_(s,d)| is the number of links in the path.
3. For all border node pairs in the physical topology, calculate the bandwidth capacity of the shortest path, B_(s,d), by computing the minimum weighted link bandwidth capacity B′_(u,v) in the path (Algorithm 5.2: Line 5.1.1.2).

At the end, the calculated path bandwidth capacity B_(s,d) is used to represent the bandwidth of logical link e′_(s,d) in the abstract topology.

Algorithm 5.2 SILK-ALT algorithm
Input:  Physical topology G(V, E),
        Set of border nodes V′ ⊆ V,
        Set of all KSP between all border node pairs
        (* i.e. the set of KSP between (vs, vd): P_(s,d) = {p^k_(s,d)} where 1 ≤ k ≤ K *)
Output: Abstracted topology G′(V′, E′)
Procedure AbstractTopology (G)
Begin
Step 1. Calculation of link weights of G(V, E)
1           for s = 1 to N′ do
1.1           for d = 1 to N′ do
1.1.1           if s ≠ d then
1.1.1.1           for k = 1 to K do
1.1.1.1.1           for all e_(u,v) ∈ p^k_(s,d) do
1.1.1.1.1.1           w^((s,d),k)_(u,v) ← (K − (k − 1))/K
1.1.1.1.1.2           w_(u,v) ← w_(u,v) + w^((s,d),k)_(u,v)
                    endfor
                  endfor
                endif
              endfor
            endfor
2           w_min ← min{w_(u,v) | e_(u,v) ∈ E}
3           for all e_(u,v) ∈ E do
3.1           w_(u,v) ← w_min / w_(u,v)
            endfor
Step 2. Calculation of logical link bandwidths of G′(V′, E′)
4           for all e_(u,v) ∈ E do
4.1           B′_(u,v) ← B_(u,v) × w_(u,v)
            endfor
5           for s = 1 to N′ do
5.1           for d = 1 to N′ do
5.1.1           if s ≠ d then
5.1.1.1           compute shortest path p̃_(s,d) from vs to vd
5.1.1.2           B_(s,d) ← min(B′_(u,v)) | e_(u,v) ∈ p̃_(s,d)
                endif
              endfor
            endfor
6           return abstracted topology G′(V′, E′)
End.

5.3. MILP formulation

In this section, we present an exact MILP formulation for abstracting network resources. The main idea formulated is that several abstracted links using the same physical link have to share the bandwidth of that link. In other words, several abstracted links sharing the same physical link cannot have a total bandwidth that exceeds the bandwidth of that link. In this way, we aim to guarantee that if a request is routed over an abstracted link, it will not be rejected later by the NSP due to lack of resources. However, we also consider that a single path between each node pair may leave some unused bandwidth in the network. For this reason, we allow several physical paths for the same source–destination pair of border nodes. We recall that the path with the highest allocated bandwidth is the one stored in the mapping table of the NSP. Unlike the SILK and SILK-ALT algorithms, we do not select one of these several paths, but instead we aggregate the capacities of these paths together to be delivered to the CSP. Thus, we consider a set P_(s,d) of physical paths between a source–destination pair, where |P_(s,d)| = Pmax. The abstracted path considered is the one offering the highest available bandwidth. However, the abstracted bandwidth of this path is the total amount of bandwidth of all the paths in P_(s,d). In this way, when the NSP is routing a request and the abstracted route chosen by the CSP for routing this request is congested, the NSP will try to find another path with available bandwidth to route the request. Ideally, this alternative path will coincide with one of the paths in P_(s,d) considered by the MILP formulation. The constraints considered are explained along with the equations of the MILP formulation.

Parameters:

- An infrastructure graph G(V, E) previously defined in Section 4.1:
  V: the set of nodes in the physical topology.
  V′: a sub-set of V corresponding to border nodes.
  E: the set of physical links e_(u,v) | (vu, vv) ∈ V² in the physical topology.
  B_(u,v): the bandwidth capacity of link e_(u,v) in the physical topology.
- The proposed algorithm aims at maximizing the abstracted bandwidth between any two border nodes. However, one might want to spread this abstracted bandwidth evenly between border nodes. For this purpose, we introduce an additional non-negative real parameter, α, that we use in the objective function. This parameter helps to find a trade-off between maximizing the total amount of bandwidth in the abstract topology and guaranteeing a minimum amount of bandwidth between each pair of border nodes.
- The parameter Δ ≫ 1 is used for linearizing some constraints.

Variables:

- λ: a real positive variable representing the minimum amount of data flow transferred between all source–destination pairs.
- Z^(s,d)_k: a non-negative real variable representing the amount of bandwidth to be reserved on the kth path from source vs to destination vd.
- X^(s,d)_(u,v,k): a binary variable which is set to 1 if the traffic between (vs, vd) uses link e_(u,v) on the kth path. Otherwise, X^(s,d)_(u,v,k) = 0.
- Y^(s,d)_(u,v,k): a non-negative real variable representing the amount of bandwidth to be reserved between (vs, vd) over link e_(u,v) on the kth path.

Objective:

  max  Σ_(vs ∈ V′) Σ_(vd ∈ V′) Σ_(k=1..Pmax) Z^(s,d)_k + α · λ        (3)

Eq. (3) aims at maximizing the total abstracted bandwidth. The second part of the equation forces the formulation to maximize the lower bound λ of the abstracted bandwidth in the network. The parameter α allows us to balance between these two parts of the equation. For very high values of α, the traffic flow is equally distributed between all source–destination pairs, at the price of a decrease in the total amount of traffic routed between border nodes.
Constraints:

∀(s, d) ∈ V′², ∀vj ∈ V, ∀k ∈ [1 … Pmax]:

  Σ_(vu) X^(s,d)_(u,j,k) − Σ_(vv) X^(s,d)_(j,v,k) = 1 if j = d; −1 if j = s; 0 otherwise.        (4)

For each vj ∈ V, the flow conservation constraints in Eq. (4) guarantee that the data flow between two border nodes is assigned a unitary value. Moreover, we can choose up to Pmax different paths. A traffic flow cannot use a link in both directions. This statement is formulated in Eq. (5).

  X^(s,d)_(u,v,k) + X^(s,d)_(v,u,k) ≤ 1   ∀e_(u,v) ∈ E, ∀k ∈ [1 … Pmax]        (5)

  Σ_(vs ∈ V′) Σ_(vd ∈ V′) Σ_(k=1..Pmax) Z^(s,d)_k · X^(s,d)_(u,v,k) ≤ B_(u,v)   ∀e_(u,v) ∈ E        (6)

Eq. (6) states that the total traffic routed over the same link for all border node pairs cannot exceed the capacity of that link. Eq. (6) is a non-linear constraint since it is the product of two variables.

For this reason, we introduce Y^(s,d)_(u,v,k) = Z^(s,d)_k · X^(s,d)_(u,v,k), which we define by the following three linear constraints:

∀(s, d) ∈ V′², ∀e_(u,v) ∈ E, ∀k ∈ [1 … Pmax]:

  Y^(s,d)_(u,v,k) ≤ Z^(s,d)_k
  Y^(s,d)_(u,v,k) ≤ Δ · X^(s,d)_(u,v,k)
  Y^(s,d)_(u,v,k) ≥ Z^(s,d)_k + Δ · (X^(s,d)_(u,v,k) − 1)        (7)

Using the three linear constraints of Eq. (7), Eq. (6) can then be replaced by the following linear constraint:

  Σ_(vs ∈ V′) Σ_(vd ∈ V′) Σ_(k=1..Pmax) Y^(s,d)_(u,v,k) ≤ B_(u,v)   ∀e_(u,v) ∈ E        (8)
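The linearization in Eq. (7) can be checked numerically: together with Y ≥ 0, the three constraints pin Y to the product Z · X whenever the binary X is fixed. The sketch below assumes Δ is any constant at least as large as the largest possible Z (e.g., the maximum link capacity); the numeric values are hypothetical.

```python
def y_bounds(z, x, delta):
    """Feasible interval for Y under the three linear constraints of Eq. (7),
    together with the non-negativity of Y."""
    upper = min(z, delta * x)               # Y <= Z  and  Y <= delta * X
    lower = max(0.0, z + delta * (x - 1))   # Y >= 0  and  Y >= Z + delta * (X - 1)
    return lower, upper

# With delta no smaller than any Z, the interval collapses to the single
# point Z * X, so the linear system encodes the product exactly.
delta = 10.0  # hypothetical bound, e.g. the maximum link capacity
for z in (0.0, 1.0, 7.5):
    for x in (0, 1):
        lo, hi = y_bounds(z, x, delta)
        assert lo == hi == z * x
```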

In Eq. (9), we define the lower bound λ of the traffic routed in the network between all source–destination pairs:

  Σ_(k=1..Pmax) Z^(s,d)_k ≥ λ   ∀(s, d) ∈ V′²        (9)

Output: Abstracted bandwidth.

The bandwidth of each abstracted link e′_(s,d) is the total bandwidth of all considered paths between the border node pair (vs, vd). It is formulated by the following equation:

  B_(s,d) = Σ_(k=1..Pmax) Z^(s,d)_k        (10)

Moreover, the abstracted path is the one with the highest abstracted bandwidth.
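The two-step SILK-ALT computation of Section 5.2 can also be sketched in code. Note one assumption: the normalization step (dividing the minimum accumulated weight by each link's weight) follows our reading of Algorithm 5.2, Line 3.1; the input capacities and paths are hypothetical.

```python
def silk_alt(ksp, capacity):
    """SILK-ALT sketch: weight links by how often the K-shortest paths
    traverse them, then take the bottleneck of the weighted bandwidths
    on each pair's shortest path (k = 1)."""
    K = max(len(paths) for paths in ksp.values())
    weights = {link: 0.0 for link in capacity}
    # Step 1: accumulate the factor (K - (k - 1)) / K per traversal.
    for (s, d), paths in ksp.items():
        for k, path in enumerate(paths, start=1):
            for link in path:
                weights[link] += (K - (k - 1)) / K
    # Normalization (our reading of Line 3.1): w <- w_min / w.
    w_min = min(w for w in weights.values() if w > 0)
    for link, w in weights.items():
        weights[link] = w_min / w if w > 0 else 1.0
    # Step 2: weighted bandwidths, then bottleneck on the shortest path.
    weighted_bw = {l: capacity[l] * weights[l] for l in capacity}
    return {(s, d): min(weighted_bw[l] for l in paths[0])
            for (s, d), paths in ksp.items()}

caps = {("A", "D"): 10.0, ("D", "B"): 10.0, ("A", "F"): 10.0, ("F", "B"): 10.0}
paths = {("A", "B"): [[("A", "D"), ("D", "B")], [("A", "F"), ("F", "B")]]}
abstract_bw = silk_alt(paths, caps)  # {('A', 'B'): 5.0}
```

The first (shortest) path accumulates a full weight of 1.0 per link and the second only 0.5, so the advertised bandwidth of the shortest path is scaled down to half its raw capacity.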
5.4. Complexity of the proposed network topology abstraction algorithms

The abstraction problem consists in choosing, for each couple of edge nodes, one or several physical paths connecting them, along with an appropriate bandwidth to be reserved over each selected path, subject to capacity constraints on the network links. If no restriction is imposed on the number of paths that can be used between any pair of edge nodes, it is then feasible to spread the abstracted bandwidth over a large number of different paths, where some of them may carry tiny flows. The latter problem is commonly known in the literature as the traditional maximum multi-commodity flow problem and turns out to be solvable in polynomial time if fractional flows are allowed [48]. However, in our investigated problem, the abstracted bandwidth can only be split into a bounded number of paths (k). This problem is referred to, in the literature, as the k-splittable flow problem [49]. When k = 1, the problem reduces to the classical and difficult non-bifurcated routing problem, while when k is infinite, we obtain the relatively easier multi-commodity flow problem. Thus, the k-splittable flow problem can be viewed as the general case of the multi-commodity flow problem and the routing problem. It is shown in [50] that the k-splittable flow problem is NP-hard. Even the simpler single-commodity case of this problem is NP-hard, and approximate solutions are hard to find.

5.4.1. SILK and SILK-ALT algorithms

An approximate time complexity of the K-shortest path algorithm is O(N² · log|N| + K · N²) [51], where K is the number of shortest paths computed. However, both the SILK and SILK-ALT algorithms assume that the set of K-shortest paths is provided. We also recall that L is the number of links. Thus, the approximate time complexity of the SILK algorithm is O(N′ · (N′ − 1) · K), and that of the SILK-ALT algorithm is O(N′ · (N′ − 1) · K · L + L + N′ · (N′ − 1)).

5.4.2. MILP formulation

We now compute the complexity of the MILP formulation. Let Nvar and Ncons denote the number of variables and the number of constraints of the MILP formulation, respectively. We have:

  Nvar = Pmax · N′² + 2 · N′² · L · Pmax + 1
       = N′² · (Pmax + 2 · L · Pmax) + 1        (11)

  Ncons = N · Pmax + L · Pmax + L + 3 · N′² · L · Pmax + N′²
        = N′² · (1 + 3 · L · Pmax) + N · Pmax + L · Pmax + L        (12)

Consequently, our MILP formulation has a complexity of O(N′² · (Pmax + 2 · L · Pmax) + 1, N′² · (1 + 3 · L · Pmax) + N · Pmax + L · Pmax + L). We notice that this complexity varies quadratically with the number of abstracted nodes N′.

It is worth mentioning that the computation time of the MILP increases explosively as the number of variables and/or constraints increases, while the computation time of the SILK and SILK-ALT algorithms is of a few seconds even for very large networks.

6. IT resource abstraction

We consider a number of IT resource sites (C ∪ S = {vn} ⊆ V′), where each resource site may have computing as well as storage capacities. Each computing/storage node vn has N_n^clust clusters. Let v_n^j denote the jth cluster at computing/storage node vn. Each computing cluster v_n^j has a number of processors N_j^proc. Let c_j^k denote the kth processor at the jth cluster, having a computing power of P_n^(j,k) in MIPS. Similarly, each storage cluster v_n^j has a number of storage devices denoted by N_j^stor. Let s_j^k be the kth storage device at the jth cluster, having a storage capacity of S_n^(j,k) in GB. We consider the single-aggregate IT resource abstraction model. Under this scheme, the dynamic metrics of each resource node are aggregated by summing up the total capacity of the clusters. Thus, for each node vn, we respectively have:

  P_n = Σ_(j=1..N_n^clust) Σ_(k=1..N_j^proc) P_n^(j,k)   ∀vn ∈ C        (13)

  S_n = Σ_(j=1..N_n^clust) Σ_(k=1..N_j^stor) S_n^(j,k)   ∀vn ∈ S        (14)

In this case, each node is seen as a single resource having a large capacity.
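The single-aggregate scheme of Eqs. (13)–(14) amounts to a nested sum over clusters and their devices. A minimal sketch with hypothetical cluster data:

```python
def aggregate_node(clusters):
    """Single-aggregate abstraction (Eqs. (13)-(14)): collapse a node's
    clusters into one total capacity by summing over each cluster's devices.

    clusters -> list of clusters, each a list of per-device capacities.
    """
    return sum(sum(devices) for devices in clusters)

# Hypothetical computing node: two clusters, per-processor power in MIPS.
P_n = aggregate_node([[2000, 2000, 3000], [1500, 1500]])
# Hypothetical storage node: two clusters, per-device capacity in GB.
S_n = aggregate_node([[500, 500], [1000]])
print(P_n, S_n)  # 10000 2000
```

The CSP then sees each node as a single large resource, as stated above.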
7. Service provisioning algorithm
Service provisioning under the proposed model is done in two steps: at the CSP level and at the NSP level. The CSP has a view of the network infrastructure and is thus expected to provide a more efficient selection of resources. The CSP receives an end-user request that does not include details of the destination node at which to process or store the end-user's data. The CSP uses the information it has, which includes the abstract network topology, to decide on the best IT site to select. It then sends a request to the NSP with the end-user's request, to which it adds the optimal destination and a suggested abstract path it has chosen (Algorithm 7.2). At the NSP level, the NSP maps the abstracted path to a physical path and then attempts to route the request on the path that the CSP has chosen using Algorithm 7.1. The NSP first verifies that there are enough resources on the physical route corresponding to the abstracted route. If so, it allocates resources for the request on this path. If not, the NSP tries to find an alternative path with sufficient bandwidth, which it then uses. In case of lack of resources, the request is rejected.

The algorithms of this process are described in detail below.
7.1. At the CSP level
In this section, we consider the abstracted topology obtained by
any of the three algorithms introduced in Section 5 along with the
abstracted IT resources defined in Section 6. As stated earlier, the
CSP is in charge of service provisioning. The CSP gathers requests that are known in advance and is supposed to make the following decisions:
1. Accept each request or not.
2. Choose for each accepted computing request, a computing node
and the distinct routes used for I/O data transfer.
3. Choose for each accepted storage request, one or several storage
nodes and the routes to be used for data transfer between the
initial location of the data and these nodes.
4. Choose for each accepted P2P request, the route to be used for
data transfer between the initial location of the data and the
destination node defined by the user.
The scheduling period of this algorithm corresponds to the
time period T delimited by two updates of the resource state
information given by TPSPs. Since the characteristics of SRs are
known in advance, the problem of allocating resources to jobs can
be solved by means of global optimization tools. The proposed
optimization technique chosen for service provisioning by the
CSP is based on the Simulated Annealing (SA) algorithm. The SA
algorithm is a generic probabilistic meta-algorithm for the global
optimization problem. It provides a good approximation of the
global maximum of a given multi-dimensional function in a large search space while avoiding local maxima [52]. Introduced by Metropolis et al. [53], it is used to solve very large combinatorial optimization problems. The SA algorithm starts by constructing an initial solution having an initial cost. A perturbation process is applied to the current solution in order to obtain a new solution. New solutions that give better results than the current solution are accepted automatically. In order to avoid local optima, solutions providing unfavorable results may also be accepted during the search, according to a probability function given by a Boltzmann distribution, which is directly proportional to a control parameter called the temperature T. However, the probability with which these more expensive solutions are chosen decreases with the iteration index.
In our provisioning algorithm, we maximize the number of
accepted requests by the CSP. As an input to our algorithm, we
consider the abstracted topology previously defined, in addition to
a pre-computed set of possible K SP between all sourcedestination
pairs of the abstracted topology. The cost of a possible solution of
our SA algorithm is defined as the number of accepted requests.
In our heuristic, we aim to maximize the latter number. For a
given solution obtained by means of the heuristic, we compute the
associated objective function C as follows:

  C = Σ_(r_i ∈ R) x_i        (15)

where x_i is a binary variable representing the acceptance or rejection of a request r_i. Thus, x_i = 1 if r_i is accepted, and 0 otherwise.
Two key parameters condition the speed of convergence of the
SA algorithm, namely the initial solution and the nature of the
perturbation process:

Fig. 4. Perturbation process of the SA algorithm.

- Since three types of requests are considered (computing, storage, and P2P requests), the initial solution is constructed by scheduling each computing (resp. storage) request on its access node, if sufficient computing (resp. storage) resources exist. As for P2P requests, they are routed over the shortest available path between their source and their destination.
- The flowchart of the perturbation used in our heuristic at each iteration is described in Fig. 4. At the beginning of the algorithm, we randomly choose a set of N_init = R/2 requests from the set R. As the number of iterations within the SA algorithm increases, the temperature decreases. The number of chosen requests during an iteration is decreased accordingly and is given by N = N_init · T/T_init. Some of these N requests may have resources reserved for them in the previous iteration (x_i = 1), whereas the others were rejected in the previous iteration due to lack of resources (x_i = 0). Among the accepted requests at hand, we choose to directly reject 20% of them. Simultaneously, we apply a perturbation on the 80% of requests left, whether they were accepted or rejected in the previous iteration. The individual perturbation of each request differs according to the type of the request. For a computing request, a new random computing node is chosen; for a storage request, one or several new storage nodes are chosen; and for a P2P request, we choose a random path among the set of KSP. The routing of computing and storage request data is done using a random path of the set of KSP. We note that a computing request can be executed on only one computing node, whereas a storage request can be split and stored on multiple nodes. In this last case, the bandwidth required to transfer the data between its current location and the final destinations is also split among multiple routes between the source and the multiple chosen storage nodes. The SA first tries to store the request on one site. If it fails, it splits the storage requirement of the request in two, and so on. After the storage site(s) is (are) chosen, the SA tries to find an available route between the source and each of the chosen sites.
We note that at the beginning of the first scheduling period, we consider that no backbone traffic exists and that the bandwidth provided to the CSP is drawn from a dedicated infrastructure. However, some requests may span multiple scheduling periods. This is why, for subsequent scheduling periods, resources reserved in previous scheduling periods are kept unchanged until the teardown times of the active requests.
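The provisioning loop at the CSP can be summarized as a standard simulated-annealing skeleton. The perturbation and cost below are toy placeholders (the paper's own perturbation is the richer process of Fig. 4); the acceptance rule follows the Boltzmann form described above, and the geometric cooling schedule is our assumption.

```python
import math
import random

def simulated_annealing(requests, perturb, cost, t_init=100.0, t_min=0.1,
                        cooling=0.95, seed=0):
    """Generic SA loop maximizing `cost` (here: number of accepted requests).
    `perturb` proposes a neighbouring allocation; worse solutions are accepted
    with the Boltzmann probability exp(delta / T), as in Section 7.1."""
    rng = random.Random(seed)
    current = {r: False for r in requests}   # initial all-rejected solution
    t = t_init
    while t > t_min:
        candidate = perturb(current, t, rng)
        delta = cost(candidate) - cost(current)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = candidate              # accept improving or unlucky moves
        t *= cooling                         # geometric cooling schedule
    return current

# Toy problem: each request is accepted independently; cost = accepted count.
def toy_perturb(sol, t, rng):
    new = dict(sol)
    r = rng.choice(list(new))
    new[r] = not new[r]                      # flip one request's acceptance
    return new

best = simulated_annealing(range(10), toy_perturb, lambda s: sum(s.values()))
```

At high temperature the loop wanders; as T falls it behaves greedily, so the toy run converges towards accepting most requests.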
7.2. At the NSP level
Upon the reception of a request (at its activation time) from the CSP, the NSP first verifies that there is enough bandwidth on the physical route inherent to the abstracted route chosen by the CSP. We recall that the mapping between these two routes is already defined by the abstraction algorithm during each update. This mapping is represented by a matrix M = {M_(s,d)}, where M_(s,d) is the set of physical links e_(u,v) used by abstracted link e′_(s,d). In case of lack of bandwidth on this physical path, the NSP then tries to find, among the set of pre-computed KSP, an alternative path that has sufficient bandwidth. If this fails, the request is rejected.

It is worth noting that, for each selected request, the CSP provides the NSP with the chosen abstracted route and the bandwidth to be reserved on that route during the request's active period. The CSP does not specify the type of the request since the NSP is responsible only for the routing process. In turn, the NSP routes the request from its source(s) to its destination(s). We notice that a request can have multiple source–destination pairs, bandwidth requirements, and chosen abstracted links. This is due to the fact that executing a computing request implies two simultaneous data transfers in opposite directions: the first one routes the data from the user's access node to the chosen computing node, while the second one routes the data generated during computation from the chosen computing node back to the user's access node. Moreover, a storage request may have multiple destinations with the same source node and the same bandwidth requirements. Consequently, such a request implies multiple simultaneous data transfers between its source node and the multiple storage nodes. Finally, a P2P request has only one source and one destination and thus implies a single data transfer.

For this purpose, we define for each request the set T = {(a_i, d_i, b_i, L_i)} representing a group of data transfers that are supposed to be done simultaneously during the request's active period [α, ω]. More specifically, the tuple (a_i, d_i, b_i, L_i) represents a data transfer of bandwidth b_i between source a_i and destination d_i using the set of abstracted links L_i. In Algorithm 7.2, we summarize the E2E data transfer procedure between a known source–destination pair. The routing of a request is then done using Algorithm 7.1. The NSP returns an acknowledgment to the CSP indicating whether or not the resources are available for routing the request.
Algorithm 7.1 NSP: network provisioning of a request
Input:  (α, ω, T = {(a_i, d_i, b_i, L_i)})
Output: ReturnNetAvail: positive (1) or negative (0) acknowledgment
Begin
1           ReturnNetAvail ← 1
2           for each (a_i, d_i, b_i, L_i) ∈ T do
2.1           r ← Route(α, ω, a_i, d_i, b_i, L_i)
2.2           if r = 0 then
2.2.1           ReturnNetAvail ← 0
              endif
            endfor
3           return ReturnNetAvail
End.

Algorithm 7.2 End-to-end routing
Input: {α, β, a, d, b, L} (* data transfer of bandwidth b between a and d during [α, β] on the abstracted link set L *),
       Physical topology G(V, E),
       M = {M(s,d)} (* mapping matrix between abstracted and physical links *),
       Set of all K SP between all border node pairs
Output: r (* a binary value where r = 1 if an available route exists *)
Procedure Route(α, β, a, d, b, L)
Begin
Step 1. Route using the chosen abstracted links
1     r ← 1
2     for all e(s,d) ∈ L do
2.1       get M(s,d) from M
2.2       if for all links e(u,v) ∈ M(s,d), Bt(u,v) ≥ b during [α, β] then
2.2.1         Bt(u,v) ← Bt(u,v) − b
          else
2.2.2         r ← 0
2.2.3         restore the links' bandwidth
2.2.4         go to Step 2
          endif
      endfor
Step 2. Route using an alternative physical path
3     if r = 0 then
3.1       for k = 1 to K (* for all pre-computed paths between source a and destination d *) do
3.1.1         if for all links e(u,v) of path pk(a,d), Bt(u,v) ≥ b during [α, β] then
3.1.1.1           r ← 1
3.1.1.2           Bt(u,v) ← Bt(u,v) − b
3.1.1.3           return (r)
              endif
          endfor
      endif
4     return (r)
End Procedure
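For concreteness, the two algorithms above admit a compact executable reading. The sketch below is ours: it discretizes the active period [α, β] into integer slots and represents Bt(u,v) as a per-slot residual-bandwidth table, which is only one possible interpretation of the paper's notation:

```python
def route(alpha, beta, a, d, b, L, Bt, M, KSP):
    """One possible reading of Algorithm 7.2 (data layout assumed):
    Bt maps a physical link to its residual bandwidth per time slot,
    M maps an abstracted link to the physical links implementing it,
    and KSP holds the K pre-computed shortest paths per node pair."""
    slots = range(alpha, beta)

    def fits(links):
        # enough residual bandwidth on every link, in every slot?
        return all(Bt[e][t] >= b for e in links for t in slots)

    def reserve(links):
        for e in links:
            for t in slots:
                Bt[e][t] -= b

    # Step 1: physical links behind the chosen abstracted links.
    # Checking all links before reserving makes the pseudocode's
    # explicit "restore the links' bandwidth" step unnecessary.
    phys = [e for l in L for e in M[l]]
    if fits(phys):
        reserve(phys)
        return 1
    # Step 2: fall back to an alternative pre-computed path.
    for path in KSP[(a, d)]:
        if fits(path):
            reserve(path)
            return 1
    return 0

def provision(alpha, beta, T, Bt, M, KSP):
    """Algorithm 7.1: a positive acknowledgment is returned only if
    every simultaneous transfer of the request can be routed."""
    ok = 1
    for (a, d, b, L) in T:
        if route(alpha, beta, a, d, b, L, Bt, M, KSP) == 0:
            ok = 0
    return ok
```

As in the pseudocode of Algorithm 7.1, a transfer that fails does not roll back the reservations already made for earlier transfers of the same request.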

8. Numerical results
8.1. Simulation setup
We consider, for the investigation of the three proposed
algorithms, a 6-node bottleneck topology. Four of the six nodes
are considered border nodes. End-user requests enter the network
through border nodes. The abstracted topology is made out of
the four border nodes connected via a full-mesh network, as
presented in Fig. 5. We apply the three proposed algorithms
on the physical topology. Requests are generated according to a
Poisson process with an average arrival rate λ. The holding time
for each request is exponentially distributed with mean 1/μ. Each
node generates its own requests throughout the simulation time
of 60 h. The type of service requested is chosen randomly. Source
and destination pairs are selected randomly among the edge nodes,
with the exception that nodes with computational (resp. storage)
resources do not generate computational (resp. storage) requests.
The distributions of the request parameters used are as follows: the
bandwidth requirement for computational and storage requests
follow a uniform distribution within [6, 26] Gbps. To elaborate, the
required bandwidth is assumed to be constant during the duration
of the request. Bandwidth requirement for data transfer requests
follows a discrete probability density function (pdf) defining the probability weighting for the two possible bandwidth outcomes {2, 10} Gbps, with probabilities 0.625 and 0.375, respectively.

Fig. 5. Physical and abstracted network topology.
Storage capacity requirement is uniformly distributed within
[1, 5] GB and stored for a duration which is uniformly distributed
between [4, 6] h. Computational requests require a processing
capacity uniformly distributed between [10, 50] MIPS across a
number of processors which is itself uniformly distributed within
[1, 3]. We assume that the NSP provides the CSP
with an abstracted view of the network every 10 h. Thus, we rerun
the abstraction algorithm every 10 h using the state of resources
at that time. The resources reserved in the previous period are
kept unchanged. An accepted request using a specified path is
not rerouted. We generate four different sets of requests having an
average of 200, 300, 400, and 600 requests per scheduling period,
respectively. The holding rate μ and the arrival rate λ are
selected such that the pdf of the load for storage, computational,
and P2P requests is {0.2, 0.2, 0.6}, respectively. We note that for
the MILP formulation, we set Pmax = 2 since, for the chosen network
topology, aggregating the bandwidth of two physical paths per
source-destination pair is enough. For SILK and SILK-ALT, we
set the number of shortest paths pre-computed offline between
a source-destination pair to K = 4, since we use a weighted
function to calculate the abstracted links' bandwidth, where longer
paths have lower weights than shorter paths (see Algorithm 5.2).
The complexity of the MILP formulation for the chosen network
topology and traffic characteristics is O(2337, 1414).
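The weighted bandwidth calculation mentioned for SILK and SILK-ALT can be sketched as follows; the inverse-hop-count weight below is purely an illustrative assumption, the actual function being the one defined by Algorithm 5.2 in the paper:

```python
def abstracted_bandwidth(paths):
    """Weighted bandwidth estimate for one abstracted link from its K
    pre-computed shortest paths. Each path is given as the list of its
    links' residual bandwidths. The normalized inverse-hop-count weight
    used here is an illustrative assumption, not the paper's function."""
    weights = [1.0 / len(p) for p in paths]   # longer path -> lower weight
    total = sum(weights)
    bottlenecks = [min(p) for p in paths]     # a path carries its bottleneck
    return sum((w / total) * bn for w, bn in zip(weights, bottlenecks))

# Two paths between a border-node pair: a 2-hop path with 10 Gbps
# links and a 3-hop path with 5 Gbps links (illustrative numbers).
estimate = abstracted_bandwidth([[10, 10], [5, 5, 5]])
```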
We investigate, at both the CSP and the NSP levels, the number
of accepted requests. In addition, we compute the crank-back ratio
obtained for the three proposed algorithms. All the results are
compared with the ideal case where the CSP has a full knowledge
of the network. In this case, no abstraction is done and the CSP has
a view of the six network nodes and the links between them. We
refer to this case as the Physical case. We also note that, for the
SA provisioning algorithm used at the CSP level, we allow the data
of a storage request to be stored on a maximum of two storage nodes
in our simulations.
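Under the assumptions above, the workload can be reproduced with a small generator. The sketch below is ours (parameter and field names are assumptions, and λ and 1/μ are left as free parameters, since the paper fixes them per traffic matrix):

```python
import random

def generate_request(rate, mean_holding, now):
    """Draw one request following the distributions of Section 8.1.
    `rate` (lambda) and `mean_holding` (1/mu) are free parameters."""
    arrival = now + random.expovariate(rate)          # Poisson arrivals
    holding = random.expovariate(1.0 / mean_holding)  # exp. holding time
    # load split over request types: storage 0.2, computing 0.2, P2P 0.6
    rtype = random.choices(["storage", "computing", "p2p"],
                           weights=[0.2, 0.2, 0.6])[0]
    if rtype == "p2p":
        # discrete pdf over {2, 10} Gbps with weights 0.625 / 0.375
        bw = random.choices([2, 10], weights=[0.625, 0.375])[0]
        extra = {}
    else:
        bw = random.uniform(6, 26)  # Gbps, constant over the duration
        if rtype == "storage":
            extra = {"volume_GB": random.uniform(1, 5),
                     "store_h": random.uniform(4, 6)}
        else:
            extra = {"mips": random.uniform(10, 50),
                     "processors": random.randint(1, 3)}
    return {"type": rtype, "arrival": arrival,
            "holding": holding, "bandwidth": bw, **extra}
```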
8.2. Simulation results
In Fig. 6, we plot, for the different loads and for each time
interval (10 h), the number of requests accepted by the CSP
represented by the full bars, their respective number of rejected
requests at the NSP level represented by the black colored bars,
and the total number of requests present during that time period
represented by the dashed lines. Many remarks can be drawn from
this figure:

We clearly see that no requests are rejected at the NSP level in
the Physical case, which is a trivial result since the CSP has
a full knowledge of the network. The route chosen by the CSP for
a request matches the one taken on the physical topology.
In addition, the Physical case accepts and routes the highest
number of requests.
We notice that the number of rejected requests at the NSP level
is higher in the case of SILK than in the case of SILK-ALT,
independently of the traffic load. Moreover, both SILK and
SILK-ALT undergo a higher number of rejected requests at the
NSP level than the MILP.
The SILK algorithm presents the highest number of accepted
requests at the CSP level, since SILK overestimates the available
bandwidth given to the CSP. This is the reason why, at the NSP
level, a high number of these pre-accepted requests is rejected.
The SILK-ALT and the MILP, in return, pre-accept a lower number
of requests but have fewer rejected requests at the NSP level. We
notice that, in this context, the MILP has the best performance
among the three algorithms.
We can also see that the MILP formulation provides a higher
number of actually routed requests than SILK-ALT. Both the
former and the latter route a higher number of requests than SILK.
All the previous remarks show that SILK-ALT clearly
improves the performance of SILK. In addition, the MILP
formulation outperforms the previous two algorithms and presents
the closest results to the Physical case in terms of accepted
requests and in terms of the number of cranked-back requests. In order
to highlight these two conclusions, we plot for the three proposed
algorithms in Fig. 7 the total number of accepted requests during
the global time period for the different traffic matrices. In addition,
Fig. 8 plots the evolution of the crank-back ratio of the three
proposed algorithms with respect to the request load. Many remarks
can be drawn from these two figures:

We can see from Fig. 7 that the MILP performs similarly to the
Physical case at low and medium loads. Moreover, we can see
from Fig. 8 that at these loads the MILP formulation presents no
crank-back. At higher loads, the MILP starts rejecting requests
and presents a positive (very low) value for the crank-back.
One may wonder why the MILP presents a crank-back in the
first place, since it is supposed to provide the optimal value
for the CSP that reflects the actual available bandwidth for the
accepted requests. This crank-back is due to the CSP's unawareness
of the physical paths associated with the abstracted
links. Since the MILP formulation uses several physical paths for
the bandwidth calculation of a virtual link, an amount of
bandwidth used to route a request at the CSP level may be
split among several physical paths at the NSP level. This
is the only case for the MILP where the NSP is unable to find
available resources (there are enough resources but not on the
same path) to route a request accepted by the CSP. For higher
loads, the probability of facing such a situation increases and
the crank-back consequently increases.
We notice that SILK presents a positive crank-back even at low
load. The value of the crank-back increases dramatically with
the load. For instance, at the Very High load with a total of
3600 requests (6 scheduling periods × 600 requests per
scheduling period), we have a crank-back ratio of 0.22, which
means that 22% of the requests pre-accepted by the CSP are
rejected at the NSP level.
The SILK-ALT algorithm presents a relatively small crank-back
ratio with respect to SILK. We notice that its crank-back
increases at a much slower rate than that of SILK. This
result highlights the significant improvement of SILK-ALT with
respect to the SILK algorithm.
For the Very High load, the MILP formulation and the SILK-ALT
reduce the crank-back ratio by 95.5% and 77.8% with respect to
SILK, respectively.
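The splitting effect behind the MILP's residual crank-back can be made concrete with a two-path toy example (the numbers are illustrative, not taken from the simulations): with Pmax = 2, an abstracted link may advertise the aggregated bandwidth of two physical paths, yet the NSP must place each transfer on a single path.

```python
# Two physical paths back one abstracted link (Pmax = 2).
# Their aggregated bandwidth is advertised to the CSP, but the NSP
# must place each transfer on a single path. Illustrative numbers.
paths = [5, 5]                 # residual bandwidth of each physical path
advertised = sum(paths)        # 10 Gbps seen by the CSP

request = 8                    # accepted by the CSP: 8 <= 10
routable = any(p >= request for p in paths)  # but no single path fits
print(advertised, routable)    # 10 False -> crank-back at the NSP
```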


Fig. 6. Number of accepted requests and their respective crank-backs for the four network models, under different traffic loads.

Fig. 7. Total number of accepted requests for the simulation time, for the four
network models.

9. Conclusion
In today's Cloud environment, Cloud Service Providers (CSPs)
do not own physical infrastructure but need to deliver global-reach
services. They can use, share, or rent resources from other service
providers. For scalability and confidentiality concerns, these other
service providers only share an abstracted view of their resources.
In this paper, we focus on topology abstraction techniques as a
means through which CSPs can rent physical infrastructure from
multiple providers to deliver global services.

Fig. 8. Crank-back ratio evolution of the three abstraction algorithms with respect to the traffic load.
We have compared in this article three different abstraction
algorithms that may be used by an NSP. Two of these algorithms
are based on approximate techniques, namely the SILK and the
SILK-ALT algorithms. The third technique consists of an exact
approach based on an MILP formulation of the problem. As
for IT resources, we have considered the single aggregate IT
resource abstraction scheme. We have considered a regular update
in the resource state information provided by third-party service
providers. As a reference scenario, we have considered a 6-node
bottleneck topology. We have compared the performance of the
three network abstraction algorithms, in terms of crank-back ratio
and number of executed requests at the CSP and the NSP levels,
with respect to the ideal case where the CSP has a complete
knowledge of all resources.
Our simulation results underline the fact that the MILP
formulation outperforms the SILK and SILK-ALT algorithms in all
aspects. This exact formulation provides results equivalent to
the aspects. This exact formulation provides results equivalent to
the aforementioned ideal case. In addition, the SILK-ALT shows
significant improvement in comparison with the SILK algorithm.
Under high load, the crank-back ratio increases linearly under the
SILK algorithm. As for the SILK-ALT, it presents results comparable
to those obtained by the MILP formulation. Since the MILP
formulation is not scalable to real size networks, we can consider
that our numerical results validate the SILK-ALT approach that can
be used in the context of more realistic network configurations.



Rosy Aoun is an assistant professor in the Computer
Science Department at Notre Dame University-Louaize,
Lebanon. She received her B.E. and M.S. Degrees in 2006
from the Lebanese University in Roumieh, Lebanon and
from the École Nationale d'Ingénieurs de Brest (ENIB) in Brest,
France, respectively. In 2006, she joined Telecom ParisTech
(ENST) in Paris, France, where she received her Ph.D. from
the Computer Science and Networks Department. Her
research interests include Cloud environment, resource
virtualization, resource abstraction, Mobile Clouds, Green
Clouds, and market-oriented Clouds.
Chinwe E. Abosi received her Ph.D. from the Electronic
Systems Engineering Department of the University of
Essex, United Kingdom in 2011. She received her M.S.
Degree in Information Networking from Carnegie Mellon
University, Pittsburgh, Pennsylvania, in association with
the Athens Information Technology Centre, Athens, Greece
in 2006 and her B.Eng. Degree in Electrical and Electronics
Engineering from the University of Botswana in 2005. Her
research interests include future Internet frameworks and
architectures, cloud computing, resource virtualization,
resource orchestration, semantic information modeling, and optical networks.


Elias A. Doumith is Assistant Professor in the Department
of Networks and Computer Science at TELECOM ParisTech,
France. He received his M.Sc. Degree (2003) and Ph.D.
Degree (2007) from the cole Nationale Suprieure des
Tlcommunications, France. Between 2007 and 2009,
he worked as Junior Research Engineer at the Institute
of Communication Networks and Computer Engineering
at the University of Stuttgart, Germany. His domain of
interest covers network planning and traffic engineering
for networks, ranging from access networks to core
networks including embedded networks. His current
research works deal with cloud computing, radio over fiber, monitoring in optical
networks, and scalable optical network design.
Reza Nejabati joined the University of Essex in 2002 and he is
currently a member of the Photonic Network Group at the University
of Essex. During the past 10 years he has worked on ultra-high-speed
optical networks, service-oriented and application-aware networks,
network service virtualization, control and management of optical
networks, high-performance network architecture and technologies for
e-science. Reza Nejabati holds a Ph.D. in optical networks
and an M.Sc. with distinction in telecommunication and
information systems from the University of Essex, Colchester,
United Kingdom.

Maurice Gagnaire is Professor at the Computer Science
and Networks Department of Telecom ParisTech, Paris,
France where he leads the Network, Mobility and Services
research group. His field of expertise covers optical
core networks design and traffic engineering, hybrid
optical-wireless access systems, resource virtualization
and pricing strategies in Cloud environment. His research
activities are carried out in the context of European
projects (BONE network of excellence, DICONET project) and
national research projects.
He is the author or co-author of several books in
English on broadband access systems (Artech House, 2003; Kluwer, 2000), IP over
WDM (Addison-Wesley, 2002), and on optical traffic grooming (Springer, 2007). He
has also authored several books in French. He has been co-guest editor of several
special issues of the Computer Networks journal (Elsevier, 2000), the Proceedings of
the IEEE (IEEE Press, September 2004), and the Annals of Telecoms (Springer, 2010).
He has chaired the IEEE High Performance Switching and Routing conference (2009)
and the IEEE Globecom Symposium on Advanced Technologies and Protocols for
Transparent Optical Networks (2006). He is in the steering committee or TPC of
several IEEE-IFIP conferences. He has been appointed as an expert by the Flemish
Government of Belgium (1998) and the National Science Foundation of the USA
(2001, 2004). He is a member of the Optical Network Technical Committee of the
IEEE and of the IFIP WG6.10 working group on photonic networking. He graduated
from INT Évry, France. He received the DEA Degree (University Paris 6), the Ph.D.
Degree from ENST and the Habilitation (University of Versailles) in 1999.

Dimitra Simeonidou is currently a Professor at the
University of Essex. She has over 15 years' experience in the
field of optical transmission and optical networks. In 1987
and 1989 she received a B.Sc. and M.Sc. from the Physics
Department of the Aristotle University of Thessaloniki,
Greece and in 1994 a Ph.D. Degree from the University of
Essex.
From 1992 to 1994 she was employed as a Senior
Research Officer at the University of Essex in association
with the MWTN RACE project. In 1994 she joined
Alcatel Submarine Networks as a Principal Engineer and
contributed to the introduction of WDM technologies in submerged photonic
networks. She participated in standardization committees and was an advising
member of the Alcatel Submarine networks patent committee.
Professor Simeonidou is the author of over 350 papers and she holds 11
patents relating to photonic technologies and networks. Her main research interests
include optical wavelength and packet switched networks, network control and
management, Grid networking and High Definition Networked-Media.
