
EPSO: A Heuristic Approach Based on PSO with Effective Parameters for Independent Tasks Scheduling in Cloud Computing

Arash Ghorbannia Delavar, Batool Dashti
Abstract — Finding an optimal solution to the task scheduling problem in distributed computing systems such as clouds and grids, with the aim of reducing the overall completion time of requests and achieving appropriate load balancing, is an important issue. In this work we provide conditions that increase the efficiency of the scheduling algorithm by considering effective parameters that the compared algorithms, under normal conditions, fail to exploit properly. We propose a heuristic approach based on PSO with effective parameters (EPSO) for independent task scheduling in cloud computing environments. We create a proper load distribution across the computing nodes by integrating effective parameters, such as the request time, the acknowledge time, and the delay between them, into the fitness function, and we provide conditions under which the overall completion time of requests (makespan) and the convergence time to the optimal response are reduced. Simulation results and comparison with other algorithms indicate that EPSO performs better in solving the task scheduling problem. Using EPSO we have increased efficiency and dependability, which helps provide quick responses to requests in cloud computing environments.

Index Terms — Cloud computing, independent task scheduling, particle swarm optimization, load balancing.



1 INTRODUCTION
Cloud computing is the next step in the evolution of the internet. Cloud computing provides everything (computing power, computational infrastructure, applications, business processes, etc.) that can be delivered as a service to the user anywhere and anytime. These services currently fall into three categories: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). The cloud also includes hardware, networks, services, interfaces, application delivery, infrastructure, and storage resources on the internet (as completely separate components or as a complete platform), depending on the application request. It has many participants, such as end users who really know nothing about the underlying technology of the cloud, business managers, and cloud service suppliers [1].
The task scheduling problem is one of the most challenging issues in parallel and distributed computing environments such as the cloud, and it has attracted growing attention over the years. The problem involves mapping a set of computing tasks, with their data priorities, onto parallel computing systems. It divides into two main groups: meta-task scheduling (a set of independent tasks from different users that must be assigned to a set of computational nodes) and workflow scheduling. The goal of task scheduling in cloud computing is the allocation of tasks to the processors or computational nodes made accessible by the cloud service suppliers (such as Amazon, Cisco, Eucalyptus), so that the priority requirements of these tasks are fulfilled as well as possible. Many approaches have been presented for solving task allocation in computational environments. They can be divided into four main groups: graph theory, linear programming, state space search, and heuristic and meta-heuristic approaches.
The rest of the paper is organized as follows: Section 2 presents related works. Section 3 describes the problem definition. Section 4 introduces the PSO algorithm, and Section 5 presents our proposed method (EPSO). Section 6 gives the simulation results and compares EPSO's performance with other algorithms. Finally, conclusions are drawn in Section 7.
2 RELATED WORKS
Task scheduling in computational environments is an NP-complete problem. Most past works sought an optimal solution for scheduling tasks on resources by minimizing the total completion time of requests (makespan) or by exploiting higher processing capability.
In recent years, many heuristic and meta-heuristic approaches that can be implemented easily have been presented for effective task scheduling in heterogeneous distributed computing environments. Some of these heuristics are Min-Min, Max-Min, and Sufferage [2], and LJFR and SJFR [3]; meta-heuristics with higher-quality solutions include Genetic Algorithms (GA), Tabu Search (TS), and Simulated Annealing (SA) [4], as well as swarm intelligence [5], which comprises the Particle Swarm Optimization (PSO) [6] and Ant Colony Optimization (ACO) [7] algorithms.
- Arash Ghorbannia Delavar, Department of Computer, Payame Noor University, PO BOX 19395-3697, Tehran, IRAN.
- Batool Dashti, Department of Computer, Payame Noor University, PO BOX 19395-3697, Tehran, IRAN.

JOURNAL OF COMPUTING, VOLUME 4, ISSUE 5, MAY 2012, ISSN 2151-9617
https://sites.google.com/site/journalofcomputing
WWW.JOURNALOFCOMPUTING.ORG 31

Salman et al. [8] presented a task assignment algorithm based on the principles of particle swarm optimization (PSO). Zhang et al. [9] presented a PSO-based algorithm for task scheduling whose performance is much better than the genetic algorithm's. Mathiyalagan et al. [10] presented a new PSO algorithm with a better convergence rate and performance by improving the iteration parameter of [9]. Izakian et al. [11] proposed a version of the PSO approach for meta-task scheduling in heterogeneous computing systems; their scheduler's goal is to minimize makespan. Omidi and Rahmani [12] investigated independent task scheduling on multiple processors with a new PSO-based algorithm. Buyya et al. [13] presented an approach with task grouping that significantly reduces the simulation time and the task submission time. Pandey et al. [14] presented a PSO-based scheduling heuristic for data-intensive applications that takes into account both the computation cost and the data transmission cost.
Recently, Page et al. [15] presented a multi-heuristic evolutionary task allocation algorithm to map tasks to processors in a heterogeneous distributed system; they combined a genetic algorithm with eight common heuristics to minimize the overall execution time. Kang and He [16] introduced a new PSO algorithm for meta-task allocation in heterogeneous computing systems (HCS).
In this paper, we consider effective parameters for solving the task scheduling problem, and we provide quick responses to requests, based on statistics and computing science, by prioritizing and grouping the requests. Finally, we evaluate the proposed approach by comparison with the algorithms presented in [15] and [16].
3 PROBLEM DEFINITION
As requests enter the cloud system, we select T = {T_1, T_2, ..., T_n} as a set of n independent tasks from these requests, and consider R = {R_1, R_2, ..., R_m} as a set of m computing nodes (resources) within the cloud environment.
The goal of the scheduling problem is to provide a solution that allocates the task set T to the resource set R so that all tasks are performed by the nodes in less time. It should be noted that a task, at any moment, can only be assigned to one resource; moreover, no task can migrate between resources. Different objectives can be considered for the allocation of tasks to distributed computational nodes in a cloud; the most important are minimizing the computing cost and minimizing the overall completion time of requests.
We define the parameters used in the equations in Table 1:

TABLE 1
PARAMETERS USED IN THE EQUATIONS

Symbol     | Quantity
T_i^len    | Length of task T_i
R_j^MIP    | Processing capacity of resource R_j
R_j^w      | Previous workload of resource R_j before task T_i is allocated to it
T_i^f      | File size of task T_i
T_i^o      | Output file size of task T_i
R_j^bw     | Bandwidth of resource R_j
t_ij^r     | Time required to send the request of task T_i from the datacenter to computational resource R_j
t_ij^a     | Time required to send the acknowledgement from computational resource R_j to task T_i in the datacenter
D_ij       | Delay between the time a request is received at R_j and the time an acknowledgement is sent to T_i

Suppose that T_ij^c is the time required for processing task T_i on computing resource R_j; we would like to decrease this time. The T_ij^c value is calculated from the following equation:

T_{ij}^{c} = \frac{T_{i}^{len}}{R_{j}^{MIP}} + R_{j}^{w} \qquad (1)

We define T_ij^trans as the transfer cost of allocating task T_i to resource R_j as follows:

T_{ij}^{trans} = \frac{T_{i}^{f} + T_{i}^{o}}{R_{j}^{bw}} \cdot S_{j} \qquad (2)

where S_j, defined as follows, is composed of the parameters that affect the transfer cost:

S_{j} = \frac{1}{t_{ij}^{r} + t_{ij}^{a} + D_{ij}} \qquad (3)

Now, if A is a scheduling schema that allocates a specific set of tasks to a specific set of computing machines, we let Cost_total(N_j) be the total computation cost of the requests assigned to computing machine N_j in schema A. It is calculated from the following equation:

Cost_{total}(N_{j}) = Cost_{exe}(N_{j}) + Cost_{com}(N_{j}) \qquad (4)

Let Cost_exe(N_j) be the total execution cost of all tasks allocated to computing node N_j (5), and Cost_com(N_j) be the total communication cost of the tasks allocated to computing node N_j (6). These are calculated using the following equations:

Cost_{exe}(N_{j}) = \sum_{T_{i} \in N_{j}} T_{ij}^{c}, \quad 1 \le i \le n \qquad (5)

Cost_{com}(N_{j}) = \sum_{T_{i} \in N_{j}} T_{ij}^{trans}, \quad 1 \le i \le n \qquad (6)

Hence, if the goal of the task assignment problem is to find an optimal solution with minimum completion cost, we define the overall completion time of requests under a specific scheduling schema as follows:








Makespan = \max_{1 \le j \le m} Cost_{total}(N_{j}) \qquad (7)
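To make the cost model of (1)-(7) concrete, it can be sketched in Python as follows. The class and function names are our own illustration, not code from the paper, and R_j^w is treated here as a time-equivalent prior load.

```python
from dataclasses import dataclass

@dataclass
class Task:
    length: float     # T_len, millions of instructions (MI)
    file_size: float  # T_f, MB
    out_size: float   # T_o, MB

@dataclass
class Resource:
    mips: float       # R_MIP, processing capacity
    workload: float   # R_w, prior workload (time units, assumed)
    bw: float         # R_bw, bandwidth
    t_req: float      # t_r, request time
    t_ack: float      # t_a, acknowledge time
    delay: float      # D, delay between request and acknowledge

def total_cost(task: Task, res: Resource) -> float:
    t_c = task.length / res.mips + res.workload              # (1)
    s = 1.0 / (res.t_req + res.t_ack + res.delay)            # (3)
    t_trans = (task.file_size + task.out_size) / res.bw * s  # (2)
    return t_c + t_trans                                     # per-task share of (4)

def makespan(schedule, tasks, resources) -> float:
    """schedule[i] = index of the resource task i is assigned to; (7)."""
    totals = [0.0] * len(resources)
    for i, j in enumerate(schedule):
        totals[j] += total_cost(tasks[i], resources[j])
    return max(totals)
```

A schedule is then just a list of resource indices, one per task, and the makespan is the largest per-node total.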
4 PARTICLE SWARM OPTIMIZATION
PSO is a nature-inspired evolutionary computing algorithm introduced by Kennedy and Eberhart [6]. It is inspired by the social behavior of animals such as birds and fish. It is similar to many evolutionary algorithms, such as the genetic algorithm, but there is no direct recombination of the population; instead, PSO relies on the social behavior of particles, and unlike the genetic algorithm it has no evolutionary operators (mutation and crossover).
The PSO approach is used to solve many optimization problems. It is composed of a number of particles randomly initialized in a search space. Each particle in the population represents a candidate solution and, in every generation, adjusts its own path based on its best position so far (local best) and the position of the best particle of the entire population (global best). The stochastic nature of the particles speeds convergence toward a global optimum as a reasonable solution [14].
Every particle i has its own position and velocity, modeled respectively by a position vector X_k^i = {X_{k,1}^i, X_{k,2}^i, ..., X_{k,n}^i} and a velocity vector V_k^i = {V_{k,1}^i, V_{k,2}^i, ..., V_{k,n}^i} at iteration k. Particles move iteratively within the n-dimensional search space of the problem to reach an optimal solution. In each iteration, the fitness function is evaluated for every solution, so the quality of the solution provided by each particle can be assessed.
The vector P_{Lbest}^i = {P_{Lbest,1}^i, P_{Lbest,2}^i, ..., P_{Lbest,n}^i} represents the best position met by particle i so far (the local best position), and the vector P_{Gbest} = {P_{Gbest,1}, P_{Gbest,2}, ..., P_{Gbest,n}} represents the position of the best particle in the whole population (the global best position). Each particle updates its position and velocity while moving through the n-dimensional search space, based on the vectors P_{Lbest}^i and P_{Gbest} and on (8) and (9); hence the search is directed toward the best position in the solution space.



V_{k+1}^{i} = W \cdot V_{k}^{i} + C_{1} r_{1} (P_{Lbest}^{i} - X_{k}^{i}) + C_{2} r_{2} (P_{Gbest} - X_{k}^{i}) \qquad (8)

X_{k+1}^{i} = X_{k}^{i} + V_{k+1}^{i} \qquad (9)

Here X_k^i is the position of particle i and V_k^i its velocity at iteration k. C_1 and C_2 are positive constant parameters called the cognitive coefficients. r_1 and r_2 are two uniformly distributed random real numbers in the range [0, 1]. W is an inertia weight specified by the user; together with C_1 and C_2, it controls the influence of the previous particle velocities on their current values. V_k^i is kept in the range [-V_max, V_max]. The algorithm runs until the termination condition is met, which can be reaching a given number of iterations or any other user-defined stopping criterion.
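Under the update rules (8) and (9), one iteration for a single particle can be sketched as follows. This is a generic continuous-PSO step, not code from the paper; the parameter defaults are illustrative.

```python
import random

def pso_step(x, v, p_lbest, p_gbest, w=0.9, c1=2.0, c2=2.0, v_max=4.0):
    """One velocity and position update per (8) and (9)."""
    r1, r2 = random.random(), random.random()  # r1, r2 ~ U[0, 1]
    new_x, new_v = [], []
    for xi, vi, pl, pg in zip(x, v, p_lbest, p_gbest):
        vj = w * vi + c1 * r1 * (pl - xi) + c2 * r2 * (pg - xi)  # (8)
        vj = max(-v_max, min(v_max, vj))  # clamp to [-V_max, V_max]
        new_v.append(vj)
        new_x.append(xi + vj)             # (9)
    return new_x, new_v
```

Each dimension of the particle is pulled toward its own best position and the swarm's best position, with random weights refreshed per step.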
5 PROPOSED ALGORITHM
In this section, we present a heuristic scheduling approach (EPSO) that allocates independent tasks to resources in a cloud computing environment. In addition to reducing makespan, our approach increases the quality of the response to requests by sorting the tasks, and attempts to balance the load distribution across the resources. This matters greatly to suppliers of cloud system resources, because it lets them use the maximum processing power of the shared resources at any time, and it plays an important role in reducing operating costs.
The proposed approach consists of two steps. In the first step, the tasks entering the system and the resources are sorted and grouped according to parameters that are important in request processing. In the second step, the initial population for the PSO algorithm is created based on this sorting and grouping of requests and resources, and the PSO technique is used to obtain an optimal scheduling solution. Fig. 1 depicts a particle in our method (EPSO).

T10 T17 T22 T4 T16 T9 T8
R5  R6  R9  R1  R5  R4  R4
Fig. 1. A sample scheduling solution as a particle in EPSO

Each particle in EPSO encodes a scheduling solution: the position of the particle determines the resource that is assigned to each task.
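The paper does not spell out how a continuous particle position is decoded into resource indices, so the sketch below uses one common convention (truncate and wrap modulo the number of resources); this is an assumption, not EPSO's exact rule.

```python
def decode_particle(position, n_resources):
    """Map each continuous position coordinate to a resource index."""
    return [int(abs(x)) % n_resources for x in position]
```

For the particle of Fig. 1, the decoded list would give the resource index for each of the seven tasks in order.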
5.1 First Step: Sorting and Grouping
In this step, because of the need for quick responses to requests in a cloud computing system, the tasks entering the system are prioritized and sorted in ascending order of their workloads. Under this sorting, requests with lower workloads have higher priority and are scheduled and executed earlier; requests with higher priority are served first, and the results are then presented to the users. Then, to make better use of the processing capabilities of the computing resources provided by the cloud service suppliers, the resources are sorted in ascending order of their processing capacities (the MIPS characteristic, million instructions per second).
By analyzing statistical samples of the tasks entered into the system, which have different workloads, it can be seen that within a specified range the tasks are very close to each other in terms of workload. These close values indicate a higher density of incoming requests in a particular area. After sorting the tasks and resources, the interval in which the incoming tasks' workloads are closest to each other, the Confidence Interval (CI), is identified. This interval is calculated from parameters such as the average and standard deviation of the sample space of task workloads, using the following formula:

CI = \left( \bar{X} - t_{n-1,\alpha/2} \frac{S}{\sqrt{n}},\; \bar{X} + t_{n-1,\alpha/2} \frac{S}{\sqrt{n}} \right) \qquad (10)
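A sketch of the confidence-interval computation of (10); for simplicity it uses the normal approximation to the t critical value, which is close for the large samples assumed here, and the function name is ours.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def confidence_interval(workloads, confidence=0.9995):
    """Two-sided CI on the mean of the task workloads, per (10)."""
    n = len(workloads)
    x_bar = mean(workloads)
    s = stdev(workloads)
    # normal approximation to the t critical value t_{n-1, alpha/2}
    crit = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    half = crit * s / sqrt(n)
    return x_bar - half, x_bar + half
```

The returned pair (lo, hi) is then used to split the sorted tasks into the three groups described below.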


where X̄ is the average of the sample space data, S is the standard deviation, n is the sample space size, and t_{n-1,α/2} is the critical value of the Student's t distribution with n-1 degrees of freedom, found in the t table [18], here with 99.95% confidence. After identifying this interval, the requests are divided into three groups. The first group contains the requests whose workloads are less than the workloads in the confidence interval (CI); this group mostly holds tasks with small workloads that must be answered sooner. The second group contains the tasks whose workloads fall inside the interval (CI); these tasks differ very little in workload, so the density of incoming requests in this range is high. Finally, the third group contains the tasks whose workloads are greater than the amounts in the identified interval (CI); this group mostly holds tasks with high workloads, which need to be allocated to resources with high processing capability. To make the best use of the processing capacities of the cloud system resources, and based on the task classification, the processing capability required to execute all tasks, and the number of tasks in each group and their workloads, the computational resources are divided into three groups as follows. Suppose that (TG)_s denotes group s of the tasks:



(TG)_{s} \subseteq T, \quad s = 1, 2, 3; \qquad w_{s} = \frac{\sum_{T_{i} \in (TG)_{s}} T_{i}^{len}}{\sum_{j=1}^{n} T_{j}^{len}} \qquad (11)



Here w_s indicates the weight of each group, based on the workloads of the tasks in that group relative to the total workload entering the system. Now assume that (RG)_s = {R_u, R_{u+1}, ..., R_{u+l}} denotes group s of the resources, constructed to correspond to group (TG)_s of the tasks, and consider the following relation:


E_{s} = w_{s} \cdot \sum_{i=1}^{m} R_{i}^{MIP}, \quad s = 1, 2, 3 \qquad (12)

For each group of resources, it is necessary to establish
the following condition:


\sum_{j=u}^{u+l} R_{j}^{MIP} \ge E_{s}, \quad R_{j} \in (RG)_{s}, \quad s = 1, 2, 3 \qquad (13)


Here l is the number of available resources in group (RG)_s, and E_s is the weight of (RG)_s based on the processing capacities of the resources in the group. Note that all tasks in each group are allocated to appropriate resources according to their workloads, their classification, and the processing capability of the resources.
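The grouping of (11)-(13) can be sketched as follows, with task workloads and resource capacities given as plain ascending lists; the function names and the treatment of the last group (which absorbs all remaining resources) are our assumptions.

```python
def group_tasks(workloads_sorted, ci):
    """Split ascending task workloads into three groups around CI = (lo, hi)."""
    lo, hi = ci
    g1 = [t for t in workloads_sorted if t < lo]
    g2 = [t for t in workloads_sorted if lo <= t <= hi]
    g3 = [t for t in workloads_sorted if t > hi]
    return g1, g2, g3

def group_resources(mips_sorted, task_groups):
    """Assign ascending-MIPS resources to each task group until the
    group's capacity reaches E_s = w_s * total MIPS, per (11)-(13)."""
    total_len = sum(sum(g) for g in task_groups)
    total_mips = sum(mips_sorted)
    groups, j = [], 0
    for g in task_groups[:-1]:
        w_s = sum(g) / total_len        # (11)
        e_s = w_s * total_mips          # (12)
        grp, acc = [], 0.0
        while j < len(mips_sorted) and acc < e_s:  # enforce (13)
            grp.append(mips_sorted[j])
            acc += mips_sorted[j]
            j += 1
        groups.append(grp)
    groups.append(mips_sorted[j:])      # last group takes the rest
    return groups
```

Each resource group thus accumulates just enough MIPS to meet its group's weight of the total workload.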

TABLE 2
NOTATIONS

Symbol      | Description
N           | Number of tasks
M           | Number of resources
P           | Size of the PSO population
X[i]        | Position vector of the i-th particle in the PSO population
R[i]        | Resource vector for the i-th particle (R[i][j] gives the resource index assigned to the j-th task in the i-th particle)
V[i]        | Velocity vector of the i-th particle
P_Gbest[]   | Global best position during the run of the algorithm
P_Lbest[i]  | Local best position visited by the i-th particle

5.2 Second Step: Create the Initial Population and Schedule Based on PSO
After sorting and grouping the requests and resources, a population of P particles is created as the initial population in the n-dimensional search space of the PSO algorithm. Each particle in the population is a scheduling solution for allocating the task set T = {T_1, T_2, ..., T_n} to the computational resource set R = {R_1, R_2, ..., R_m}.
Following the PSO algorithm, the vector X_k^i = [x_{k1}^i, x_{k2}^i, ..., x_{kn}^i] represents the position of particle i in the population at iteration k, and its values are used to specify the vector R_k^i = [r_{k1}^i, r_{k2}^i, ..., r_{kn}^i], where r_{kj}^i indicates the resource assigned to request T_j in the solution proposed by particle i. Note that the following conditions must always hold in the presented algorithm:
1. Any request, at any moment, can only be allocated to one resource.
2. A resource can perform more than one request (i.e., it can be assigned to more than one task) at a time.
3. As described in the first step, each task in a group can only be allocated to a resource in the corresponding resource group.
The task execution order does not change across iterations of the algorithm. The vector R_k^i is determined from the particle's position based on the task and resource grouping, and the appropriate resources are then selected. The proposed algorithm continues by computing the fitness function for each particle (solution) in the population and analyzing the fitness of the proposed solutions. Moreover, the values of the vectors X_k^i and V_k^i are updated according to (8) and (9), and the values of the vector R_k^i are obtained from the X_k^i values while respecting the restrictions. The algorithm is repeated until the termination conditions are reached: a predefined maximum


number of iterations, or no appreciable improvement in the fitness function values. Finally, the solution presented by the global best P_Gbest is selected as the solution of the independent tasks scheduling problem in the cloud computing environment.

EPSO()
Begin
1. Sorting and Grouping
  1.1. The scheduler receives the tasks list -> T[]
  1.2. The scheduler sorts the tasks in ascending order of workload -> Tsort[]
  1.3. The scheduler receives the resources list -> R[]
  1.4. The scheduler sorts the resources in ascending order of processing capacity (MIPS) -> Rsort[]
  1.5. Create a statistical sample space from the workloads of the tasks in Tsort[] -> SS[]
  1.6. Calculate the Confidence Interval (CI) for SS[] according to (10)
  1.7. Create task groups according to CI -> TGk[] for k = 1, 2, 3
  1.8. Create resource groups according to (11), (12) and (13) -> RGk[] for k = 1, 2, 3
2. Create an initial population and schedule based on PSO
  2.1. For each TGk[] (k = 1 to 3) in the i-th particle:
    2.1.1. For each Tj in TGk[i]:
      2.1.1.1. Initialize V[i][j] randomly
      2.1.1.2. Initialize R[i][j] randomly from RGk[]
      2.1.1.3. Initialize X[i][j] with a copy of R[i][j]
  2.2. Repeat until a stopping criterion is satisfied:
    2.2.1. Find PGbest such that fitness[PGbest] > fitness[i] for all i in P
    2.2.2. For each particle: PLbest[i] = X[i] if fitness[i] > fitness[PLbest[i]] (for all i in P)
    2.2.3. For each particle, update V[i] and X[i] according to (8) and (9)
    2.2.4. For each particle, compute R[i] from the obtained X[i]
    2.2.5. Evaluate fitness[i] (for all i in P)
End

Fig. 2. The EPSO algorithm
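Putting the pieces together, a condensed, runnable sketch of the main loop of Fig. 2 (step 2 only, with makespan as the fitness to minimize; the grouping step and the full cost model of Section 3 are omitted, and all names and the decoding rule are illustrative):

```python
import random

def epso(task_lengths, resource_mips, pop_size=20, iters=100,
         w=0.9, c1=2.0, c2=2.0, v_max=4.0, seed=1):
    """Minimal PSO scheduling loop in the spirit of Fig. 2, step 2."""
    rng = random.Random(seed)
    n, m = len(task_lengths), len(resource_mips)

    def decode(x):  # continuous position -> resource indices
        return [int(abs(xi)) % m for xi in x]

    def makespan(x):
        load = [0.0] * m
        for t, j in zip(task_lengths, decode(x)):
            load[j] += t / resource_mips[j]
        return max(load)

    xs = [[rng.uniform(0, m) for _ in range(n)] for _ in range(pop_size)]
    vs = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(pop_size)]
    lbest = [x[:] for x in xs]
    gbest = min(xs, key=makespan)[:]

    for _ in range(iters):
        for i in range(pop_size):
            r1, r2 = rng.random(), rng.random()
            for d in range(n):  # updates per (8) and (9)
                v = (w * vs[i][d] + c1 * r1 * (lbest[i][d] - xs[i][d])
                     + c2 * r2 * (gbest[d] - xs[i][d]))
                vs[i][d] = max(-v_max, min(v_max, v))
                xs[i][d] += vs[i][d]
            if makespan(xs[i]) < makespan(lbest[i]):
                lbest[i] = xs[i][:]
            if makespan(xs[i]) < makespan(gbest):
                gbest = xs[i][:]
    return decode(gbest), makespan(gbest)
```

The returned pair is the best schedule found (one resource index per task) and its makespan.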
5.3 Fitness Function
We evaluate the quality of the solutions presented by the particles in the PSO algorithm with a fitness function. The solution with the better fitness (depending on whether the goal is maximizing or minimizing a specific quantity) is selected as the optimal solution. In EPSO, we select the appropriate resource for each task using the equations of Section 3. Therefore, we define the fitness value of a particle as the makespan of its schedule, according to (7):

fitness(X_{k}^{i}) = Makespan(X_{k}^{i})
6 SIMULATION AND EXPERIMENTAL RESULTS
To evaluate the performance of EPSO, we compare our simulation results with those of the algorithm (PN) presented in [15], which combines a GA with eight heuristics, and with those of the PSO-based algorithm (DPSO) introduced in [16]. To obtain an optimal solution to the scheduling problem and to increase the convergence rate of the response, we use the following parameter values in our experiments: population size = 100, C_1 = C_2 = 2.0, and the inertia weight is dynamically calculated as follows:

W = w_{Start} + \varphi \, (w_{End} - w_{Start}) \qquad (14)

Here w_Start and w_End determine the initial and final values of the inertia weight range, initialized with 0.2 and 1.4, respectively. \varphi is calculated from the following equation, in which itr is the current iteration value and total_itr is the maximum number of iterations (the number of generations); \sigma is a constant initialized with 4.0:

\varphi = \frac{1}{1 + \sigma \left( \frac{itr}{total\_itr} \right)} \qquad (15)
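The dynamic inertia weight of (14) and (15) can be sketched as follows; the exact functional form of \varphi is partly garbled in our copy of the text, so the decay in the iteration ratio below is an assumption consistent with the stated constants.

```python
def inertia_weight(itr, total_itr, w_start=0.2, w_end=1.4, sigma=4.0):
    # phi per (15), as reconstructed here (assumption): decreases
    # from 1 toward 1 / (1 + sigma) as the iteration count grows
    phi = 1.0 / (1.0 + sigma * (itr / total_itr))
    return w_start + phi * (w_end - w_start)   # (14)
```

Under this reading, W starts at w_end and decays toward w_start as the run progresses.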


The parameters used in the simulation experiments are shown in Table 3:

TABLE 3
EXECUTION PARAMETERS FOR THE SIMULATION EXPERIMENTS

Parameter                  | Range
Task length                | 20000 ~ 40000 MI (millions of instructions)
Task file size and output size | 400 ~ 800 MB
Task memory size           | 1 ~ 100 MB
Node processing capacity   | 100 ~ 500 MIPS (million instructions per second)
Node memory size           | 256 ~ 512 MB
Node storage size          | 500 ~ 10000 MB
Bandwidth                  | 100 ~ 500 Mbps
Request time               | 0.6 ~ 0.8 s
Acknowledge time           | 0.7 ~ 0.9 s
Delay                      | 0.1 ~ 0.3 s

Table 4 presents a comparison of our proposed approach's performance with the other algorithms. All three tested algorithms are randomized approaches, implemented on the same problem instances. For each case, we ran each algorithm 10 times and report the average makespan (Avg), the best makespan (Best), and the percentage deviation (PD) of each algorithm from ours. As you can


see, the EPSO algorithm gives a better response than the compared algorithms. Averaged over all cases, the improvement of EPSO in makespan over DPSO and PN is 32.35% and 18.75%, respectively.

TABLE 4
COMPARISON OF COMPLETION TIME BETWEEN THE DIFFERENT ALGORITHMS FOR SEVERAL NUMBERS OF CLOUDLETS

Cloudlets | EPSO Avg | EPSO Best | PN Avg  | PN Best | PN PD(%) | DPSO Avg | DPSO Best | DPSO PD(%)
50        | 374.34   | 358.39    | 401.16  | 381.11  | 7.16     | 515.66   | 449.74    | 37.70
100       | 712.45   | 651.07    | 791.11  | 723.41  | 11.04    | 823.50   | 731.46    | 15.58
150       | 1019.96  | 956.67    | 1199.95 | 1062.96 | 17.64    | 1285.46  | 1175.19   | 26.09
200       | 1340.01  | 1249.90   | 1488.90 | 1370.47 | 11.11    | 1868.88  | 1645.04   | 39.46
300       | 2452.08  | 2304.13   | 3269.44 | 2992.37 | 33.33    | 3390.22  | 3138.50   | 38.25
400       | 2665.98  | 2409.28   | 3137.17 | 2938.14 | 17.67    | 3584.56  | 3136.40   | 34.45
500       | 3301.69  | 2961.59   | 4402.26 | 4101.13 | 33.30    | 4455.91  | 4114.30   | 34.95
The graph obtained from the results of Table 4 is shown in Fig. 3, with the average makespan on the y axis and the number of requests entered into the cloud system on the x axis for the different algorithms. The chart shows the better performance of the EPSO algorithm in reducing the makespan.
Fig. 3. Simulation results from the different algorithms for several numbers of cloudlets
Table 5 displays the results of the EPSO simulation with 100 incoming requests and 20 computational nodes, in comparison with the PN and DPSO algorithms, over several iteration counts (or generations, for the GA); Fig. 4 shows the graph obtained from these results. As you can see, as the number of iterations increases, the makespan decreases. The results in Table 5 and Fig. 4 show that EPSO greatly increases the rate of convergence to the optimal response: EPSO with 300 iterations is already very close to the optimal solution, while the other evolutionary algorithms compared with EPSO need more iterations to reach it. On average, the convergence rate of EPSO to the final, optimal response is better than that of the PN algorithm by about 12.75%, and better than that of the DPSO algorithm by about 22.87%.
TABLE 5
COMPARISON OF COMPLETION TIME BETWEEN THE DIFFERENT ALGORITHMS OVER SEVERAL NUMBERS OF ITERATIONS

Iterations | EPSO   | PN     | PD(%) | DPSO   | PD(%)
100        | 668.02 | 714.34 | 6.93  | 765.05 | 11.19
200        | 612.19 | 734.26 | 19.93 | 762.32 | 24.5
300        | 588.31 | 718.59 | 22.14 | 769.5  | 30.7
400        | 614.59 | 703.41 | 14.45 | 805.7  | 31.09
600        | 608.60 | 646.81 | 6.27  | 730.49 | 20.02
800        | 602.61 | 616.82 | 2.35  | 723.06 | 19.98
1000       | 606.07 | 710.37 | 17.2  | 743.24 | 22.63

Fig. 4. Simulation results from the different algorithms over several iterations
Table 6 shows the distribution of the workloads of 30 requests (915766 MI in total) entered into the system across 10 computing nodes, based on their processing capacities. The graph obtained from the results of this experiment is plotted in Fig. 5.
Fig. 5 shows that EPSO distributes the incoming workloads across the computing nodes according to their processing capacities better than the PN and DPSO algorithms do. This reduces the makespan and increases the efficient use of the processing power of the resources, which in turn reduces the load processing costs for the cloud service suppliers.

TABLE 6
COMPARISON OF WORKLOAD DISTRIBUTION ACROSS PROCESSING NODES ACCORDING TO THEIR PROCESSING CAPACITIES IN THE DIFFERENT ALGORITHMS

Processing Capacity (MIPS) | EPSO (MI) | PN (MI) | DPSO (MI)
104 | 24084  | 20487  | 39443
150 | 53903  | 31293  | 69558
172 | 48335  | 50027  | 57438
174 | 72592  | 33430  | 36618
219 | 75030  | 53160  | 134004
277 | 105250 | 68090  | 93558
358 | 114936 | 131365 | 28359
465 | 137327 | 72734  | 152380
491 | 137608 | 282049 | 154853
494 | 146701 | 173131 | 149555
Fig. 5. Simulation results for workload distribution across processing nodes according to their processing capacities
7 CONCLUSION
In this paper, we proposed an effective heuristic approach (EPSO) based on PSO for the task scheduling problem in cloud computing. Our proposed method yields better results than the other compared algorithms.
From the simulation results, we can see that EPSO improves the makespan, the execution speed of the scheduling algorithm, and the convergence rate in comparison with similar proposed algorithms. This is achieved by classifying the tasks and computational resources and by requiring that the independent tasks in each group be allocated to the corresponding resource group.
EPSO is also very efficient at balancing workloads across computational nodes, which leads to efficient use of resources. This is especially important for the efficient use of the resources provided by cloud suppliers. The effective distribution of the workloads also prevents some resources from being overloaded while others are underloaded, which increases scalability, dependability, flexibility, efficiency, and availability, and reduces cost. In future work, we intend to use the benefits of other evolutionary algorithms, such as the genetic algorithm and ant colony optimization, for the task scheduling problem in cloud computing environments.
REFERENCES
[1] J. Hurwitz, R. Bloor, M. Kaufman, and F. Halper, Cloud Computing for Dummies, Wiley Publishing, Canada, pp. 7-46, 2009.
[2] T.D. Braun, H.J. Siegel, N. Beck, L.L. Boloni, M. Maheswaran,
A.I. Reuther, J.P. Robertson, M.D. Theys, B. Yao, D. Hensgen,
and R.F Freund, A Comparison of Eleven Static Heuristics for
Mapping a Class of Independent Tasks onto Heterogeneous
Distributed Computing Systems, Journal of Parallel and Distri-
buted Computing, vol . 61, Issue. 6, pp. 810-837, Jun 2001.
[3] A. Abraham, R. Buyya, and B. Nath, Nature's heuristics for
scheduling jobs on computational grids, IEEE International
Conference on Advanced Computing and Communications, India,
pp. 45-52, 2000.
[4] T.D. Braun, H.J. Siegel, N. Beck, D.A. Hensgan, R.F. Freund, A
Comparison of Eleven Static Heuristics for Mapping a Class of
Independent Tasks on Heterogeneous Distributed Systems,
Journal of Parallel and Distributed Computing, pp. 810-837, 2001.
[5] J. Kennedy and R.C Eberhart, Swarm Intelligence. Morgan
Kaufman, 2001.
[6] J. Kennedy and R. Eberhart, Particle Swarm Optimization,
IEEE International Conference on Neural Networks, vol. 04, pp.
1942-1948, 1995.
[7] L. Liu, Y. Yang, L. Lian and S. Wanbin, Using Ant Colony
Optimization for Job Scheduling in Computational Grid, IEEE
Conference on Services Computing (APSCC06), 2006.
[8] A. Salman, I. Ahmad, S. Al-Madani, Particle Swarm Optimiza-
tion for Task Assignment Problem, Microprocessors and Micro-
systems, ELSEVIER, 26(8), pp. 363-371, Nov 2002.
[9] L. Zhang, Y. Chen, R. Sun, S. Jing and B. Yang, A Task Sche-
duling Algorithm Based on PSO for Grid Computing, Interna-
tional Journal of Computational Intelligence Research, vol. 04, no. 1,
pp. 37-43, 2008.
[10] P. Mathiyalagan, R. Dhepthie, S.N. Sivanandam, Grid Sche-
duling Using Enhanced PSO algorithm, International Journal of
Computer Science and Engineering, vol. 02, no. 02, pp. 140-145,
2010.
[11] H. Izakian, A. Abraham, V. Snasel, Scheduling Meta-tasks
in Distributed Heterogeneous Computing Systems: A Meta-
Heuristic Particle Swarm Optimization Approach, IEEE Inter-
national Conference on Computer Science and Information Technolo-
gy, 2009.
[12] A. Omidi, A.M. Rahmani, Multiprocessor Independent Task
Scheduling Using a Novel Heuristic PSO Algorithm, IEEE In-
ternational Conference on Hybrid Intelligent Systems, 2009.
[13] R. Buyya, S. Date, Y. Miizuno-Matsumoto, S. Venogopal and D.
Abramason, Neuroscience Instrumentation and Distributed
Analysis of Brain Activity Data: A Case for eScience on Global
Grids, Journal of Concurrency and Computation: Practice and Ex-
perience, vol. 17, no. 15, pp. 1783-1798, 2004.
[14] S. Pandey, L. Wu, S. Guru, R. Buyya, A Particle Swarm Opti-
mization-Based Heuristic for Scheduling Workflow Applica-
tions in Cloud Computing Environments, IEEE International


Conference on Advanced Information Networking and Applications
(AINA), pp. 400-407, 2010.
[15] A.J. Page, T.M. Keane, T.J. Naughton, Multi-heuristic Dynamic
Task Allocation Using Genetic Algorithms in a Heterogeneous
Distributed System, Journal of Parallel and Distributed Compu-
ting 70, ELSEVIER, pp. 758-766, 2010.
[16] Q. Kang, H. He, A Novel Discrete Particle Swarm Optimiza-
tion Algorithm for Meta-task Assignment in Heterogeneous
Computing Systems, Journal of Microprocessors and Microsys-
tems 35, ELSEVIER, pp. 10-17, 2011.
[17] A. Ghorbannia Delavar, Y. Aryan, A Synthetic Heuristic Algo-
rithm for Independent Task Scheduling in Cloud Systems, In-
ternational Journal of Computer Science Issues (IJCSI), vol. 08, is-
sue. 06, no. 02, Nov 2011.
[18] R.V. Hogg, A.T. Craig, Introduction to Mathematical Statistics,
New York, Macmillan, 1978.

Arash Ghorbannia Delavar received the M.Sc.
and Ph.D. degrees in computer engineering
from Sciences and Research University, Tehran,
IRAN, in 2002 and 2007. He obtained the top
student award in Ph.D. course. He is currently
an assistant professor in the Department of
Computer Science, Payam Noor University,
Tehran, IRAN. He is also the Director of Virtual
University and Multimedia Training Department
of Payam Noor University in IRAN. Dr. Arash Ghorbannia Delavar is
currently editor of many computer science journals in IRAN. His
research interests are in the areas of computer networks, micropro-
cessors, data mining, Information Technology, and E-Learning.

Batool Dashti received her B.Sc. in computer engineering from Payam Noor University, Mashhad, IRAN, in 2007, and is an M.Sc. student in computer engineering at Payam Noor University, Tehran. Her research interests are in distributed and computational systems, grid computing, and cloud computing.





