
Adaptive Memetic Particle Swarm Optimization

Alfredo Milani, Valentino Santucci


Department of Mathematics and Computer Science, University of Perugia

Abstract This paper introduces the notion of memetic evolution of local
search operators in the framework of Particle Swarm Optimization (PSO)
schemes. The operators dynamically adapt to the local characteristics of the
search space using a co-evolving PSO approach where each operator is a
particle with a trajectory in the operator space. An original probabilistic
technique for managing components with discrete domains, which appear in
operator descriptions, has also been introduced. The adaptive memetic PSO
approach has been evaluated on numerical optimization benchmark problems,
and its convergence results are competitive with respect to standard PSO
and to memetic PSO algorithms in the literature.
1 Introduction and Related Work
Memetic Algorithms (MAs) have recently received a great deal of attention as
effective meta-heuristics to improve general Evolutionary Algorithm (EA) schemes
applied to optimization problems [1].
Population-based evolutionary approaches such as Genetic Algorithms (GAs)
or Particle Swarm Optimization (PSO) are able to quickly detect the main regions
of attraction, but their limited local search abilities represent a major drawback [2]
from the point of view of solution accuracy and convergence behavior,
especially when applied to multimodal problems. Memetic approaches aim at
improving the convergence and the accuracy of EAs, building on the idea of combining
population-based optimization algorithms with local search procedures [1,3,4].
A typical memetic scheme has a main component, which consists of a core EA
that iteratively produces a population of individuals/particles, i.e. candidate solutions,
located in the most promising regions of the search space, and a local search
component, which is used to improve the accuracy of the solutions in the detected
regions. The scheme has been successfully applied to improve different meta-heuristic
procedures such as Simulated Annealing [4], GAs [5], and PSO [6].
A distinction should be made between first generation MAs, where a single static
local search component is used, and approaches [7,8] where many different local
search components, i.e. memes, are available and are chosen by the algorithm
according to certain EA strategies. In more recent proposals [9,10] the memes
themselves undergo an evolutionary process. In this latter approach the memes are
not statically designed for a problem, but they evolve and adapt to the characteristics
of the search space.
Although many GA-based memetic algorithms and other EA approaches
which employ meme evolution have been proposed in the literature [11,12,13],
the memetic PSO algorithms proposed so far fail to employ meme evolution,
probably because of the difficulty of designing descriptors of local search operators,
which are usually statically tailored to the problem or the class of problems at hand
[14,15,16] and exhibit, at most, only some form of temporary adaptation which is
not preserved or transmitted through the generations/iterations as a meta-Lamarckian
meme [7].
In this paper we investigate the possibility of designing, in the framework of
memetic meta-heuristics, PSO algorithms for numerical optimization which allow
local search operators to evolve and be transmitted, in the form of memes which can
better adapt to the problem landscape and ultimately provide better performance.
After recalling, in the next section, some general issues concerning Memetic
Algorithms and Particle Swarm Optimization, the Adaptive Memetic PSO is
introduced in Section 3 by describing the proposed co-evolving particles-memes
scheme and a probabilistic technique for PSO evolution of discrete components.
Experimental results and a comparison of the proposed algorithm with classical
PSO and a state-of-the-art static memetic PSO are presented and discussed in
Section 4. A discussion of the advantages of the proposed approach and a description
of lines of future research concludes the paper.
2 Memetic Algorithms and Particle Swarm Optimization
The two main components of the proposed algorithm are a memetic meta-heuristic
combined with a PSO approach.
In this section an overview of the different approaches in the MA framework
is given, and the main features of PSO are briefly recalled.
2.1 Memetic Algorithms
MAs, first proposed by Moscato [4], represent a recent and growing area of
research in evolutionary computation.
The MA meta-heuristics essentially combine an EA with some local search
technique that can be seen as a learning procedure which makes individuals
capable of performing local refinements. In fact, MAs take inspiration from
Dawkins' notion of meme [1], which represents a unit of cultural evolution that can
exhibit refinement.
MAs are usually distinguished into three main classes. First generation MAs
[1] were characterized by a single fixed meme, i.e. a given local search operator is
applied to candidate solutions of the main EA according to different selection and
application criteria (see for example [14]). The criteria used in the selection of
candidates and the frequency of the local search application are parameters of this
kind of MA. Second generation MAs, also called meta-Lamarckian MAs [7],
reflect the principle of memetic transmission and selection. This kind of MA is
typically characterized by a pool of given local search operators (memes) that
compete, based on their past merits, to be selected for future local refinements.
The choice of the best memes can be based on different criteria, such as the absolute
fitness of the candidate solutions associated with the meme or the fitness
improvement due to past applications of the memes. The most recent generation
of MA meta-heuristics are the co-evolving MAs [9], which introduce the idea of
meme evolution. In this case the pool of memes is not known a priori, but the
memes are represented and evolved by means of EA methods similar to those
applied to candidate solutions. The AMPSO scheme proposed in this work falls into
this latter class of MAs.
2.2 Particle Swarm Optimization
PSO, introduced by Kennedy and Eberhart [6], takes inspiration from particle
models of objects and from the simulation of the collective behavior of flocks of birds.
A PSO swarm is composed of a set of particles P = {p_1, …, p_n} interconnected
in a graph that defines a neighborhood relation among them, i.e. for each particle
p_i the set of its neighbors N_i ⊆ P is defined. The position vector x_{i,t} of a particle p_i
at time t represents a point in an m-dimensional space, i.e. a candidate solution of
the given optimization problem represented by the objective/fitness function
f : Ω → ℝ with Ω ⊆ ℝ^m (the region of feasible solutions) to be minimized (or
maximized). As the iterations proceed, the particles explore the search space,
adjusting their positions according to their own experience as well as the experience of
their neighboring particles. In this way PSO focuses the search of the swarm toward the
most promising areas by combining the so-called cognitive and social strategies.
At any step t, every particle p_i is associated with different m-dimensional
vectors (where m is the dimensionality of the search space):
- x_{i,t}, i.e. the particle position,
- v_{i,t}, i.e. the particle velocity,
- b_{i,t}, i.e. the particle's personal best position ever visited until time t,
- l_{i,t}, i.e. the best position ever found among its neighbors until time t.
In the most standard PSO implementation a complete network is adopted as the
neighborhood graph. In this case the l_{i,t} values are replaced by a unique global best
position l_t that is the same for all the particles.
PSO initially distributes the particles in random positions within the feasible
region Ω. Moreover, velocities are randomly initialized to small values in order to
prevent particles from leaving the feasible search space in the early iterations.
During the iteration loop, velocities and positions are iteratively updated according
to rules (1) and (2) until a stop criterion is met.

$$x_{i,t+1} = x_{i,t} + v_{i,t+1} \qquad (1)$$

$$v_{i,t+1} = \omega\, v_{i,t} + c_1 \phi_{1,t} (b_{i,t} - x_{i,t}) + c_2 \phi_{2,t} (l_{i,t} - x_{i,t}) \qquad (2)$$
Position update rule (1) simply moves the particle in the search space according
to the velocity vector. The weights in velocity update rule (2) are, respectively,
the inertia ω, the acceleration factors c_1, c_2, and the random numbers φ_{1,t},
φ_{2,t}, which are uniformly distributed in [0,1]. The three terms of the velocity update
rule (2) characterize the behavior of the particles: the first term, called the inertia
or momentum, acts as a memory of the previous direction and prevents a particle
from drastically changing direction. The second term, called the cognitive component,
models the tendency of the particle to return to the best position found so far.
Finally, the third term, i.e. the social component, quantifies the velocity contribution
relative to the neighboring particles.
The (personal and social) best positions are updated as follows:

$$b_{i,t+1} = \begin{cases} x_{i,t+1} & \text{if } f(x_{i,t+1}) < f(b_{i,t}) \\ b_{i,t} & \text{otherwise} \end{cases} \qquad (3)$$

$$l_{i,t+1} = \operatorname*{argmin}_{b_{j,t+1}\,:\,p_j \in N_i} f(b_{j,t+1}) \qquad (4)$$

In the most common case of a topology formed by a fully connected graph,
rule (4) becomes:

$$l_{t+1} = \operatorname*{argmin}_{b_{i,t+1}\,:\,1 \le i \le n} f(b_{i,t+1}) \qquad (5)$$
Finally, note that sometimes the particles can go outside the feasible search
space Ω. In our work, in order to solve this problem, we restart the out-of-bounds
components of a particle at a position randomly chosen between the old position
and the exceeded bound [17].
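To make the update rules concrete, the following minimal Python sketch performs one synchronous global-best PSO iteration implementing rules (1), (2), (3), and (5). It is not part of the original paper; the array layout and function names are our assumptions.

```python
import numpy as np

def pso_step(x, v, b, f_b, f, omega=0.729, c1=1.49445, c2=1.49445):
    """One global-best PSO iteration: rules (1)-(3) and (5).

    x, v, b : (n, m) arrays of positions, velocities, personal bests
    f_b     : (n,) array of personal-best fitness values
    f       : objective function mapping an (m,) vector to a scalar
    """
    n, m = x.shape
    l = b[np.argmin(f_b)]                    # global best l_t, rule (5)
    phi1 = np.random.rand(n, m)              # random numbers in [0, 1]
    phi2 = np.random.rand(n, m)
    v = omega * v + c1 * phi1 * (b - x) + c2 * phi2 * (l - x)   # rule (2)
    x = x + v                                # rule (1)
    f_x = np.apply_along_axis(f, 1, x)
    improved = f_x < f_b                     # rule (3): update personal bests
    b[improved], f_b[improved] = x[improved], f_x[improved]
    return x, v, b, f_b
```

For instance, with the sphere objective f = lambda p: np.sum(p ** 2), repeatedly calling pso_step on randomly initialized arrays reproduces the basic swarm dynamics described above.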
3 Adaptive Memetic PSO
Adaptive Memetic PSO (AMPSO), introduced in this section, is a hybrid algorithm
that combines PSO with the evolution of local search memetic operators which
adapt to the objective function landscape.
AMPSO evolution can be seen as the combination of two co-evolving popula-
tions: the particles and the memes. The PSO particles represent the candidate solu-
tions and evolve using the usual PSO laws. The memes represent the local search
operators applicable to the particles. In AMPSO memes are also evolved by a PSO
scheme modified by an innovative technique designed to deal with the discrete
domains present in the meme representation.
Another difference with respect to the standard PSO scheme is the introduction of
a diversity control mechanism that prevents the premature convergence of the
particle population to a local minimum.
In the following, the general AMPSO scheme (Section 3.1), the particle evolution
behavior with the diversity control mechanism (Section 3.2), the memes
representation (Section 3.3), the memes evolution (Section 3.4), and the novel
technique for managing discrete components in the memes PSO (Section 3.5) are
presented and described.
3.1 AMPSO Scheme
In the AMPSO general scheme, a population of n particles, i.e. P = {p_1, …, p_n},
navigates the search space following PSO dynamics while trying to find the minimum
of a given fitness function f. At each iteration a local search is possibly applied
to a subset of the particles in order to improve the best value of f. This phase
is realized by a meme population, i.e. M = {m_1, …, m_n}, of local search operators.
Each m_i is associated with particle p_i and is evolved by a PSO-like approach. The
iterations of the particles PSO stop when the particles converge to the optimum or
other termination conditions are met.
Particles in P are encoded using the usual m-dimensional vectors of the PSO
scheme, where m is the problem dimensionality. At each iteration t the particle
position vector x_{p,t}, the velocity vector v_{p,t}, the personal best vector b_{p,t}, and the
global best particle l_{p,t} are maintained.
Memes are encoded in a similar way, by hybrid vectors x_{m,t}, v_{m,t}, b_{m,t}, l_{m,t}, i.e.
vectors which combine both discrete and continuous components as described in
Section 3.3.
The general scheme of AMPSO is described by the pseudo-code reported in
Figure 1a.

Fig. 1 Pseudo-code of (a) AMPSO and (b) PSO_MEME.
The populations of particles P and memes M are randomly initialized in their
respective feasible regions. In the main loop each particle is evolved following rules
(1) and (2) for position and velocity update, and rules (3) and (5) for personal and
global best update. Note that the parameters in (2) have been set according to [18]
and [19], i.e. ω = 0.729, c_1 = c_2 = 1.49445.
LOCAL_SEARCH(·,·) activates the memetic part of AMPSO. The local
search operators defined by the memes are applied probabilistically: m_i is applied
to the personal best position of particle p_i with a given application probability,
every given number of iterations of the particles PSO algorithm; the local search
application is denoted by m_i(p_i) or equivalently by x_{m,i}(b_{p,i}). Moreover, at every
iteration the local search is applied to the global best particle at position l_t; the
global best particle and its meme are also denoted by p_g and m_g. Before being
applied, the memes are evolved by the PSO_MEME evolution scheme described in
Section 3.4. If the meme application leads to a fitness improvement, i.e. if
f(x_{m,i}(b_{p,i})) < f(b_{p,i}), then the new candidate solution obtained replaces both the
personal best and the particle's current position, i.e. b_{p,i} ← x_{m,i}(b_{p,i}) and
x_{p,i} ← b_{p,i}. The global best is also updated accordingly.
It is worth noticing that this phase of AMPSO depends on only three parameters:
the local search application frequency and probability, and the swarm size n.
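As a rough sketch of the scheme of Figure 1a (whose pseudo-code is not legible in this copy), the loop below interleaves the pso_step sketch of Section 2.2 with probabilistic meme application. Here evolve_meme (standing for PSO_MEME, Section 3.4) and apply_meme (the local search of Section 3.3) are illustrative placeholders, as are the parameter names ls_prob and ls_freq for the application probability and frequency; diversity_control is sketched in Section 3.2.

```python
def ampso_loop(f, x, v, memes, ls_prob, ls_freq, tau, low, high, max_iter=1000):
    """Illustrative AMPSO main loop; evolve_meme and apply_meme are stubs
    standing for PSO_MEME (Sect. 3.4) and the local search of Sect. 3.3."""
    b = x.copy()
    f_b = np.apply_along_axis(f, 1, x)
    for t in range(max_iter):
        x, v, b, f_b = pso_step(x, v, b, f_b, f)       # rules (1)-(3), (5)
        x = diversity_control(x, f_b, tau, low, high)  # Section 3.2
        g = int(np.argmin(f_b))
        for i in range(len(x)):
            # Memes act on personal bests with probability ls_prob every
            # ls_freq iterations, and on the global best at every iteration.
            if i == g or (t % ls_freq == 0 and np.random.rand() < ls_prob):
                memes[i] = evolve_meme(memes[i])
                y = apply_meme(memes[i], b[i], f)
                if f(y) < f_b[i]:                      # Lamarckian write-back
                    b[i], x[i], f_b[i] = y, y, f(y)
    return b[np.argmin(f_b)]
```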
3.2 Diversity Control
A diversity control mechanism is implemented in order to avoid premature
convergence to a local minimum. When premature convergence is detected,
DIVERSITY_CONTROL() determines a set of particles D whose positions will be
reinitialized. The method consists of two main components: a diversity measure σ
and a diversity restore mechanism invoked when the population diversity becomes
too low according to the measure σ.
The diversity measure employed is the following:

$$\sigma(P) = \operatorname{std}\big(f(x_{p,i})\big) \qquad (6)$$

where std is the standard deviation computed over the fitness values of the
particles composing population P. Although genotypic distances between particles
seem in principle more appropriate than fitness values, it has been experimentally
observed that the latter indicator provides a good diversity measure with the
additional property that its computation is more efficient.
The diversity measure σ(P_0) of the initial random population P_0 is used to
compute the threshold value τ = 0.2 σ(P_0) that will be used in the following
iterations for comparison purposes. This threshold value is computed on the particle
population at time 0 since it is generated in a purely random way.
At each iteration DIVERSITY_CONTROL() compares the current population
diversity σ(P_t) with the threshold τ and, if σ(P_t) < τ, the diversity restore procedure
is invoked. This procedure randomly restarts the positions of the worst 50% of the
population according to the current fitness values. Note that only the particle current
positions x_{p,t} are restarted, while the particle personal bests b_{p,t} and velocities
v_{p,t} remain unchanged.
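A minimal sketch of the diversity restore step follows, assuming the threshold τ = 0.2 σ(P_0) and the feasible box bounds are computed at initialization; the function and parameter names are ours.

```python
import numpy as np

def diversity_control(x, f_x, tau, low, high):
    """Diversity restore of Section 3.2 (a sketch, minimization assumed).

    If the fitness standard deviation sigma(P) of eq. (6) drops below
    tau = 0.2 * sigma(P_0), the positions of the worst 50% of the particles
    are re-drawn uniformly in the feasible box; personal bests and
    velocities are left unchanged, as prescribed in the text.
    """
    if np.std(f_x) < tau:
        worst = np.argsort(f_x)[len(f_x) // 2:]   # worst half by fitness
        x[worst] = np.random.uniform(low, high, size=x[worst].shape)
    return x
```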
3.3 Memes Representation
The local search operators adopted in AMPSO are a generalization of the Random
Walk (RW) technique [20].
RW is an iterative and stochastic local optimization method. Let x^(t) be a
candidate (local) minimum at the t-th iteration; then the new value x^(t+1), at the
(t+1)-th iteration, is computed according to:

$$x^{(t+1)} = \begin{cases} x^{(t)} + w z^{(t)} & \text{if } f(x^{(t)} + w z^{(t)}) < f(x^{(t)}) \\ x^{(t)} & \text{otherwise} \end{cases} \qquad (7)$$

where w is a given scalar step-length and z^(t) is a unit-length random vector. The
step-length w is initialized to a given value w_init and is halved whenever the fitness
is not improved. The process is repeated until a given number of iterations q is
reached.
In our generalization, besides the previously described parameters w_init and
q, two other parameters are introduced: the ramification factor b and the number
of chosen successors k (with k ≤ b). Briefly, the idea is to expand the RW in a
(b,k)-bounded breadth instead of the pure depth-first (DF) search of the original
RW, as sketched in the code below. Initially k copies of the starting point of the
local search are generated. b new points are generated from these k points as in
DF-RW. From these new intermediate points the best k are chosen, and the process
continues until the deepening parameter q is reached.
In this way we can represent a meme with four parameters: a real w_init and three
integers b, k, q. For each one a range of admissible values is required. These ranges
have been established experimentally as w_init ∈ [0.5,4], b ∈ [1,8], k ∈ [1,b],
q ∈ [4,16].
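The following Python sketch shows one plausible reading of this (b,k)-bounded random walk; in particular, applying the step-halving rule of plain RW to the generalized variant is our assumption.

```python
import numpy as np

def random_unit_vector(m):
    """Uniformly distributed random direction in R^m."""
    z = np.random.normal(size=m)
    return z / np.linalg.norm(z)

def meme_local_search(f, x0, w_init, b, k, q):
    """Sketch of the generalized random walk meme (w_init, b, k, q).

    Keep a beam of the k best points, spawn b candidates per round by
    random steps of length w, halve w on rounds without improvement
    (as in plain RW), and stop after q rounds.
    """
    m = len(x0)
    beam = [x0.copy() for _ in range(k)]     # k copies of the start point
    best, f_best, w = x0.copy(), f(x0), w_init
    for _ in range(q):                       # deepening parameter q
        candidates = [beam[i % k] + w * random_unit_vector(m)
                      for i in range(b)]     # b new points from the beam
        candidates.sort(key=f)
        beam = candidates[:k]                # keep the best k successors
        if f(beam[0]) < f_best:
            best, f_best = beam[0].copy(), f(beam[0])
        else:
            w /= 2.0                         # halve the step on failure
    return best
```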
3.4 Memes Evolution
A meme can be represented by a 4-tuple m = (w_init, b, k, q) in the hybrid meme
space. Memes PSO evolution proceeds asynchronously with respect to the particles
evolution; this is because the memes are evolved only before they are applied,
according to the application probability and frequency described above.
Memes, like particles, are characterized by a current position x_{m,t}, i.e. the
representation of the meme in the meme space, a velocity v_{m,t}, a personal best position
b_{m,t}, and a global best meme l_{m,t}. Two alternative functions have been considered as
meme fitness: (1) the absolute fitness f of the particle p_i associated with meme m_i
and (2) the fitness improvement Δf realized by the application of meme m_i to
particle p_i. Experiments have shown that the two approaches do not differ
significantly.
Memes are evolved by a classical PSO scheme modified to manage meme
vectors with hybrid continuous/discrete components. The PSO_MEME scheme
distinguishes the evolution of continuous components (according to updating rules (1),
(2), (3), and (5)) from that of discrete components, where a probabilistic technique,
presented in the next subsection, is applied. The PSO_MEME pseudo-code is
reported in Figure 1b.
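A compact sketch of how PSO_MEME (Figure 1b) might separate the two kinds of components follows, reusing the discrete helpers sketched in Section 3.5 below; the tuple layout, the fixed domains, and all names are our assumptions (in particular, the constraint k ≤ b would need an extra clamp, omitted here).

```python
# Meme encoded as (w_init, b, k, q): index 0 is continuous, 1-3 discrete.
W_RANGE = (0.5, 4.0)
DOMAINS = [np.arange(1, 9), np.arange(1, 9), np.arange(4, 17)]  # b, k, q

def pso_meme_step(pos, vel, pbest, gbest, delta=4,
                  omega=0.729, c1=1.49445, c2=1.49445):
    """Hybrid meme update: rules (1)-(2) for w_init, Rules 8-9 for b, k, q."""
    # Continuous component: standard velocity/position update, kept in range.
    vel[0] = (omega * vel[0]
              + c1 * np.random.rand() * (pbest[0] - pos[0])
              + c2 * np.random.rand() * (gbest[0] - pos[0]))
    pos[0] = float(np.clip(pos[0] + vel[0], *W_RANGE))
    # Discrete components: the probability distribution plays the role of
    # the velocity (probabilistic technique of Section 3.5).
    for j, dom in enumerate(DOMAINS, start=1):
        centers = [int(pos[j]) - dom[0], int(pbest[j]) - dom[0],
                   int(gbest[j]) - dom[0]]          # indices in the domain
        probs = discrete_distribution(len(dom), centers,
                                      [1 + omega, 1 + c1, 1 + c2], delta)
        pos[j] = int(discrete_update(dom, probs))
    return pos, vel
```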
3.5 PSO for Discrete Domains
The meme discrete components are managed using a probabilistic technique
that simulates the usual PSO behavior for continuous domains, exploiting the total
order of the discrete domain.
For each meme m and for each discrete component domain D_j = {d_{j,1}, …, d_{j,r}},
an appropriate probability distribution P_{D_j,m} over the values of D_j is built (Rule 8);
then a randomized tournament is made among the values in D_j, where P_{D_j,m}(d_{j,i})
represents the probability of d_{j,i} being selected. The value selected in the random
tournament becomes the new position of the particle component x_j (Rule 9). Hence
the probability distribution plays the role of the velocity vector and, together with
the randomized tournament, realizes most of the properties of classical continuous
PSO.
(Rule 8) Probability distribution on a discrete component.
Let d_x, d_b, d_l ∈ D_j be the values of the given component j for the current meme
position, the meme personal best position, and the global best meme position. Then
the distribution is defined as:

$P_{D_j,m}(d_x) = (1+\omega) / (r\, N_{D_j})$
$P_{D_j,m}(d_b) = (1+c_1) / (r\, N_{D_j})$
$P_{D_j,m}(d_l) = (1+c_2) / (r\, N_{D_j})$

where r = |D_j| and N_{D_j} is the normalization factor explained in the following.
The probability for the other values d ∈ D_j with d ∉ {d_x, d_b, d_l} is obtained by
smoothing the probabilities in the contour of the centers d_x, d_b, and d_l. Initially, a
quantity 1/r is assigned to each value d_i; then the values at the centers and at the
points to their left and to their right are incrementally amplified. Let
α ∈ {1+ω, 1+c_1, 1+c_2} be the amplification factor for a center d_k and let β = α/(δ+1),
where 2δ is the number of values centered in d_k whose probability will be amplified
with smoothness β. All the values d_{k−s} (d_{k+s}) at the left (right) of index k are
amplified with parameter α_s = β|δ+1−s|. This process is iterated for every center
d_x, d_b, and d_l; finally, these quantities are normalized using an appropriate N_{D_j},
which sums up all the amplified and non-amplified values, in order to obtain the
probability distribution P_{D_j,m}. The parameter δ determines the interval of feature
values where the amplification is applied. It is easy to see that values near the
global best, the personal best, and the current meme value tend to be preferred,
while the initial distribution 1/r ensures that each domain value has a non-zero
probability of being selected.
(Rule 9) Discrete position update.
The new position of a discrete component x_j of the particle is computed by a random
tournament which uses the probability distribution above.
A random number u uniformly distributed in [0,1] is drawn; then x_j = d_{j,k}
where d_{j,k} ∈ D_j and k is such that

$$\sum_{i=0}^{k-1} P_{D_j,m}(d_{j,i}) \le u \le \sum_{i=0}^{k} P_{D_j,m}(d_{j,i})$$

where by definition the empty sum $\sum_{i=0}^{-1} P_{D_j,m}(d_{j,i}) = 0$.
The technique for discrete components based on probability distribution values
can be interpreted as considering all the values equiprobable except d_x, d_b, and d_l,
where the amplification factors ω, c_1, and c_2 give a greater probability to d_x, d_b,
and d_l. The smoothness factor β also amplifies the probabilities of their neighbors,
i.e. values close to the centers.
In other words, the amplification and smoothness factors, and their use in meme
position update, implement for the discrete components the probabilistic
counterparts of the typical behavior of PSO velocity, i.e. the tendency to remain in the
current position (inertial factor ω), and the tendency to move toward the personal
best (cognitive factor c_1) or the global best position (social factor c_2).
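A sketch of Rules 8 and 9 in Python follows; the linearly decaying multiplicative gain and the index-based interface are one plausible reading of the amplification step, and all names are ours.

```python
import numpy as np

def discrete_distribution(r, centers, alphas, delta):
    """Rule 8 (sketch): distribution over a totally ordered domain of r values.

    centers : indices of d_x, d_b, d_l in the domain
    alphas  : amplification factors (1 + omega, 1 + c1, 1 + c2)
    delta   : amplification half-width (set to 4 in the experiments)
    """
    w = np.full(r, 1.0 / r)                  # uniform base keeps P(d) > 0
    for c, alpha in zip(centers, alphas):
        for s in range(delta + 1):           # center and delta neighbors
            gain = 1.0 + (alpha - 1.0) * (delta + 1 - s) / (delta + 1)
            for idx in {c - s, c + s}:       # symmetric, no double-count at s=0
                if 0 <= idx < r:
                    w[idx] *= gain           # decaying amplification
    return w / w.sum()                       # normalization factor N_Dj

def discrete_update(domain, probs):
    """Rule 9: the randomized tournament is inverse-CDF (categorical) sampling."""
    return np.random.choice(domain, p=probs)
```

For example, for the ramification factor b with domain {1, …, 8}, a new value is drawn as discrete_update(np.arange(1, 9), discrete_distribution(8, centers, alphas, 4)).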
4 Experiments
The performance of Adaptive Memetic PSO has been evaluated on a set of
standard benchmark functions, reported below in equations (10)-(14), which differ
from each other in modality, symmetry around the optimum, and regularity.

$$f_1(x) = \sum_{i=1}^{D} x_i^2 \qquad (10)$$

$$f_2(x) = \sum_{i=1}^{D} \frac{x_i^2}{4000} - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1 \qquad (11)$$

$$f_3(x) = 0.5 + \frac{\sin^2\left(\sqrt{x_1^2 + x_2^2}\right) - 0.5}{\left(1 + 0.001\,(x_1^2 + x_2^2)\right)^2} \qquad (12)$$

$$f_4(x) = -20 \exp\left(-0.02 \sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i)\right) + 20 + e \qquad (13)$$

$$f_5(x) = \sum_{i=1}^{4} \begin{cases} 0.15\, d_i \left(z_i - 0.05\,\operatorname{sign}(z_i)\right)^2 & \text{if } |x_i - z_i| < 0.05 \\ d_i\, x_i^2 & \text{otherwise} \end{cases} \qquad (14)$$
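For reference, the first four benchmarks translate directly into Python (a sketch; f_5 is omitted because the auxiliary quantities z_i and d_i of eq. (14) are not fully legible in this copy):

```python
import numpy as np

def f1(x):                                   # Sphere, eq. (10)
    return np.sum(x ** 2)

def f2(x):                                   # Griewank, eq. (11)
    i = np.arange(1, len(x) + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def f3(x):                                   # Schaffer's f6, eq. (12), D = 2
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (np.sin(np.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2

def f4(x):                                   # Ackley variant, eq. (13)
    d = len(x)
    return (-20 * np.exp(-0.02 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)
```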
Termination conditions have been defined for convergence and for the maximum
computational resources. An execution is regarded as convergent if f(x) − f(x*) < ε.
On the other hand, an execution has been considered terminated unsuccessfully if the
number of function evaluations (NFE) exceeds the allowed cap of 100,000. The
feasible domain, the dimensionality, and the value of ε used for each benchmark
are reported in Figure 2.

Fig. 2 Parameters for the Benchmark Functions.
Three different swarm sizes (denoted as SS) have been considered, namely 15,
30, and 60, following the setup of [14]. The other PSO parameters were set to the
standard values suggested in [18], i.e. ω = 0.7, c_1 = c_2 = 1.49.
The domain ranges defining the meme space, where the AMPSO memes
evolve, were: b, k ∈ [1,8], q ∈ [1,16], w ∈ [0.5,4], while an amplification width δ = 4
has been used for the evolution of the meme discrete features.
For each swarm size configuration a series of 50 executions has been carried out
in order to average out statistical randomness. For each execution series we report
the convergence probability P_c (i.e. the number of convergent executions over the
total number of executions), the average NFE over all convergent executions C_avg,
and the NFE of the best and the worst convergent executions (C_best and C_worst
respectively). Moreover, the quality measure Q_m = C_avg / P_c introduced in
[21] and suggested in [22] is also reported in the tables.
Functions f_1, f_2, and f_4 have been investigated with dimensionality 30. Instead,
functions f_3 and f_5 (that do not admit a generalization of the dimensionality) have
been investigated with their proper numbers of dimensions, that is 2 and 4
respectively.

Fig. 3 Experimental Results of (a) AMPSO, (b) CPSO, (c) SMPSO.
All the previously described indexes, P_c, C_avg, C_best, C_worst, and Q_m, are
reported in Figures 3a, 3b, and 3c, respectively for AMPSO, Classical PSO (CPSO) and
Static Memetic PSO (SMPSO), i.e. a PSO with local RW search and no meme
evolution [14], which is the only comparable memetic algorithm in the PSO
framework.
Figure 3 clearly shows that the adaptive memetic AMPSO approach greatly
improves on the convergence probability of the classical CPSO and of SMPSO. It must
be noted in particular that AMPSO converges almost everywhere, with a remarkable
worst-case convergence probability of 95%. On the other hand, SMPSO and CPSO
show low convergence performance: SMPSO drops as low as 57% and 50%, while
CPSO performs much worse, not converging at all in two cases of function f_4 with
small swarm sizes. The convergence speed of AMPSO is comparable to SMPSO;
however, as expected, in the simpler cases, i.e. f_1 and f_5, the convergence speed of
AMPSO is worse than that of CPSO because of the NFE overhead introduced by
the local search.

Fig. 4 Convergence graphs on functions (a) f_1, (b) f_2, (c) f_3, (d) f_4, (e) f_5, and (f) meme
convergence.
Figures 4a-4e plot the convergence behavior of AMPSO with respect to swarm
size, where the x axis reports the number of fitness evaluations (NFE) and the y axis
the distance from the optimal solution f(x) − f(x*). AMPSO appears quite monotonic
with respect to swarm size, and a low number of particles seems to be generally
preferable.
Finally, a measure of meme convergence is shown in Figure 4f, which plots
the meme standard deviation (Memes STD) as the NFE increases during the
optimization of function f_4. Meme convergence is fast in the early stage and, as
expected, remains fairly constant, with little adaptation, during the rest of the
computation. The meme convergence curve, together with the quality measure and
the convergence probability, shows the effectiveness of AMPSO and its ability to
adapt to the function landscape.
5 Conclusion
AMPSO, a local search PSO algorithm characterized by two co-evolving
populations of particles and memes, in the framework of memetic meta-heuristics, has
been presented. The memes are evolved by using a generalization of a Random
Walk local search operator.
To the best of our knowledge this is the first work in which a memetic PSO
algorithm with meme co-evolution has been proposed.
A novel technique of probabilistic PSO evolution has been designed to manage
the discrete components in the meme representation; the technique preserves the
typical PSO behavior of cognitive, social, and momentum dynamics.
Experimental results on benchmark problems show that AMPSO outperforms
the convergence probability of both classical PSO and non-evolutionary static
memetic PSO, while its convergence speed is affected by some overhead due to
the local search. The effectiveness of the method also relies on its ability to
dynamically adapt the local search operators, i.e. the memes, to the features of the
landscape of the function being optimized. From this perspective a static local search
method as in [14], although tailored to the function to be optimized, is not comparable
to the dynamic AMPSO approach, since the function landscape can differ in
different regions of the search space, thus requiring memes with local characteristics.
Future work will address the investigation of different models for meme operators
and the design of self-regulating mechanisms for the swarm size and other
AMPSO parameters.

References
[1] Dawkins, R. (1976). The selfish gene. New York: Oxford University Press.
[2] Angeline, P. J. (1998). Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In V. W. Porto, N. Saravanan, D. Waagen & A. E. Eiben (Eds.), Evolutionary programming (Vol. VII, pp. 601-610). Berlin: Springer.
[3] Moscato, P. (1989). On evolution, search, optimization, genetic algorithms and martial arts. Towards memetic algorithms. Technical report C3P, Report 826, Caltech Concurrent Computation Program, California, USA.
[4] Moscato, P. (1999). Memetic algorithms. A short introduction. In D. Corne, M. Dorigo & F. Glover (Eds.), New ideas in optimization (pp. 219-235). London: McGraw-Hill.
[5] Ourique, C. O., Biscaia, E. C., & Carlos Pinto, J. (2002). The use of particle swarm optimization for dynamical analysis in chemical processes. Computers and Chemical Engineering, 26, 1783-1793.
[6] Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proc. of the IEEE Conf. on Neural Networks (pp. 1942-1948). IEEE Press.
[7] Ong, Y. S., & Keane, A. J. (2004). Meta-Lamarckian learning in memetic algorithms. IEEE Transactions on Evolutionary Computation, 8(2), 99-110.
[8] Krasnogor, N. (1999). Coevolution of genes and memes in memetic algorithms. Graduate Student Workshop: 371.
[9] Smith, J. E. (2007). Coevolving memetic algorithms: A review and progress report. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 37(1), 6-17.
[10] Krasnogor, N., & Gustafson, S. (2002). Toward truly "memetic" memetic algorithms: discussion and proof of concepts. In Advances in Nature-Inspired Computation: the PPSN VII Workshops. PEDAL (Parallel Emergent and Distributed Architectures Lab), University of Reading.
[11] Hart, W., Krasnogor, N., & Smith, J. (Eds.) (2004). Recent Advances in Memetic Algorithms. Berlin, Germany: Springer-Verlag.
[12] Krasnogor, N., & Smith, J. (2000). A memetic algorithm with self-adaptive local search: TSP as a case study. In Proceedings of GECCO 2000.
[13] Eusuff, M., Lansey, K., & Pasha, F. (2006). Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization. Engineering Optimization, 38(2), 129-154. Taylor and Francis.
[14] Petalas, Y. G., Parsopoulos, K. E., & Vrahatis, M. N. (2007). Memetic particle swarm optimization. Annals of Operations Research, 156, 99-127.
[15] Schutze, O., Talbi, E., Coello, C. C., Santana-Quintero, L. V., & Pulido, G. T. (2007). A memetic PSO algorithm for scalar optimization problems. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007).
[16] Liu, B., Wang, L., & Jin, Y. (2007). An effective PSO-based memetic algorithm for flow shop scheduling. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 37, No. 1, February 2007, pp. 18-28.
[17] Price, K. V., Storn, R. M., & Lampinen, J. A. (2005). Differential evolution: a practical approach to global optimization. Springer Verlag.
[18] Poli, R., Kennedy, J., & Blackwell, T. (2007). Particle swarm optimization. An overview. Swarm Intelligence, 1(1), 33-57.
[19] Clerc, M., & Kennedy, J. (2002). The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6, 58-73.
[20] Weiss, G. H. (1994). Aspects and Applications of Random Walk. Amsterdam: North-Holland.
[21] Feoktistov, V. (2006). Differential evolution: in search of solutions. Springer-Verlag New York Inc.
[22] Suganthan, P. N., Hansen, N., Liang, J. J., Deb, K., Chen, Y.-P., Auger, A., & Tiwari, S. (2005). Problem definitions and evaluation criteria for the CEC 2005 Special Session on Real-Parameter Optimization. Technical Report, Nanyang Technological University, Singapore, and KanGAL Report #2005005, IIT Kanpur, India.
