Contents
Introduction
Overview of the basic PSO
Global Best PSO
Local Best PSO
Aspects of Basic PSO
Basic Variations of PSO
PSO Parameters
Particle Trajectories
Single-Solution Particle Swarm Optimizers
References
Introduction
Particle swarm optimization (PSO):
developed by Kennedy & Eberhart [3, 7],
first published in 1995, and
with an exponential increase in the number of publications
since then.
What is PSO?
a simple, computationally efficient optimization method
population-based, stochastic search
based on a social-psychological model of social influence and
social learning [8]
individuals follow a very simple behavior: emulate the success
of neighboring individuals
emergent behavior: discovery of optimal regions in
high-dimensional search spaces
Computational Intelligence: Second Edition Chapter 16: Particle Swarm Optimization
Neighborhoods:
Neighborhoods are based on particle indices, not spatial
information
Neighborhoods overlap to facilitate information exchange
The lbest PSO algorithm?
gbest PSO vs lbest PSO:
Speed of convergence
Susceptibility to local minima?
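A minimal sketch of the index-based ring neighborhoods used by lbest PSO (the function name and radius parameter are illustrative, not from the text):

```python
def ring_neighborhood(i, n_particles, radius=1):
    """Indices of particle i's neighbors on a ring topology.

    Neighborhoods are defined on particle *indices*, not on spatial
    information, so adjacent neighborhoods overlap and information
    about good regions propagates gradually around the ring.
    """
    return [(i + offset) % n_particles
            for offset in range(-radius, radius + 1)]
```

Because each neighborhood shares members with its neighbors, lbest converges more slowly than gbest but is less susceptible to a single local minimum dominating the whole swarm.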
Velocity Components
[Figure: vector diagrams in the (x1, x2) plane showing how a particle moves from its position at time t to its position at t + 1 under the pull of the personal best y and the neighborhood best]
Stopping Conditions
e.g. terminate when the swarm has converged: |f(xi) − f(ŷ)| ≤ ε for all particles i
Exploration-Exploitation Tradeoff
Velocity Clamping:
[Figure: velocity and position updates in the (x1, x2) plane, showing how clamping the velocity component v2(t + 1) changes the new position xi(t + 1) reached from xi(t)]
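Velocity clamping can be sketched as follows (a minimal illustration; the function name is ours):

```python
def clamp_velocity(v, v_max):
    # Clamp each velocity component to [-v_max, v_max].  Positions are
    # then updated with the clamped velocity, which limits step sizes
    # without restricting the particle's position itself.
    return [max(-v_max, min(v_max, vj)) for vj in v]
```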
Inertia Weight [15]
Developed to control exploration and exploitation
Controls the momentum
Velocity update changes to
vij(t + 1) = w vij(t) + c1 r1j(t)[yij(t) − xij(t)] + c2 r2j(t)[ŷj(t) − xij(t)]
For w ≥ 1
velocities increase over time and swarm diverges
particles fail to change direction towards more promising
regions
For 0 < w < 1
particles decelerate
convergence also dependent on values of c1 and c2
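The inertia-weight update can be sketched per dimension as below (the default values w = 0.7298 and c1 = c2 = 1.4962 are commonly used settings, not prescribed by this slide):

```python
import random

def velocity_update(v, x, y, y_hat, w=0.7298, c1=1.4962, c2=1.4962,
                    rng=random):
    """Inertia-weight velocity update, per dimension j:
    v_j(t+1) = w*v_j(t) + c1*r1_j*(y_j - x_j) + c2*r2_j*(yhat_j - x_j)
    where y is the personal best and y_hat the neighborhood best.
    """
    return [w * vj
            + c1 * rng.random() * (yj - xj)
            + c2 * rng.random() * (yhj - xj)
            for vj, xj, yj, yhj in zip(v, x, y, y_hat)]
```

With 0 < w < 1 the first term shrinks the previous velocity every step, which is the decelerating effect described above.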
Synchronous:
Personal best and neighborhood bests updated separately from
position and velocity vectors
slower feedback
better for gbest
Asynchronous:
new best positions updated after each particle position update
immediate feedback about best regions of the search space
better for lbest
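The difference between the two schemes is only where the best-position refresh sits in the iteration loop; a schematic sketch (function and callback names are ours):

```python
def synchronous_iteration(swarm, move, refresh_bests):
    # Synchronous: move every particle first, then refresh the personal
    # and neighborhood bests in a separate pass (slower feedback).
    for p in swarm:
        move(p)
    for p in swarm:
        refresh_bests(p)

def asynchronous_iteration(swarm, move, refresh_bests):
    # Asynchronous: refresh bests immediately after each move, so later
    # particles in the same iteration already see the new information.
    for p in swarm:
        move(p)
        refresh_bests(p)
```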
Velocity Models
Cognition-only model:
vij(t + 1) = vij(t) + c1 r1j(t)(yij(t) − xij(t))
Basic Parameters
Swarm size, ns
Particle dimension, nx
Neighborhood size, nNi
Number of iterations, nt
Inertia weight, w
Acceleration coefficients, c1 and c2
Acceleration Coefficients
c1 = c2 = 0: particles keep moving at their current velocity, ignoring all best positions
c1 > 0, c2 = 0: cognition-only model
c1 = 0, c2 > 0: social-only model
c1 = c2 > 0:
particles are attracted towards the average of yi and ŷi
c2 > c1 :
more beneficial for unimodal problems
No stochastic component
Single, one-dimensional particle
Using w
Personal best and global best are fixed:
y = 1.0, ŷ = 0.0
φ1 = c1 r1 and φ2 = c2 r2
Example trajectories:
[Figures: three pairs of plots, each showing particle position x(t) against time t (left) and the corresponding trajectory in the complex plane, re versus im (right); the amplitudes grow from roughly ±30 in the first setting, to ±500 in the second, to divergence beyond 1e+99 in the third]
Convergence Conditions:
What do we mean by the term convergence?
[Figures: the region of the (φ, w) parameter plane, with φ from 0 to 4.0 on the horizontal axis and w on the vertical axis, for which the characteristic roots z1 and z2 yield convergent trajectories; and an example convergent trajectory x(t) over t]
A particle has converged when xi = yi = ŷ, i.e. when its position, personal best, and global best coincide
GCPSO Equations
Spatial neighborhoods
Fitness-Based spatial neighborhoods
Growing neighborhoods
Hypercube structure
Fully informed PSO
Barebones PSO
Fitness-based spatial neighborhoods: select neighbors of particle i that minimize E(xi, xi′) f(xi′), where E(xi, xi′) is the Euclidean distance between particles i and i′
Growing Neighborhoods
Start with a ring topology having smallest connectivity, and
grow towards a star topology
Particle position xi2(t) is added to the neighborhood of
particle position xi1(t) if
||xi1(t) − xi2(t)||2 / dmax < ε(t)
where dmax is the largest distance between any two particles, and
ε(t) = (3t + 0.6 nt) / nt
with nt the maximum number of iterations.
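The growing-neighborhood test can be sketched directly (function names are ours):

```python
def neighborhood_threshold(t, n_t):
    # (3t + 0.6*n_t)/n_t grows from 0.6 at t = 0 towards 3.6 at t = n_t,
    # so ever more distant particles join each neighborhood, moving the
    # topology from a ring towards a star.
    return (3 * t + 0.6 * n_t) / n_t

def in_neighborhood(x1, x2, d_max, t, n_t):
    # x_i2 joins x_i1's neighborhood once their Euclidean distance,
    # normalized by the swarm diameter d_max, falls below the threshold.
    dist = sum((a - b) ** 2 for a, b in zip(x1, x2)) ** 0.5
    return dist / d_max < neighborhood_threshold(t, n_t)
```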
Hypercube Structure
pm(t) = (φ1 ym(t) + φ2 ŷm(t)) / (φ1 + φ2)
Advantages:
More information is used to decide on the best direction to search
Influence of a particle on the step size is proportional to the particle's fitness
Disadvantages:
Influences of multiple particles may cancel each other
What happens when all particles are positioned symmetrically
around the particle being updated?
Barebones PSO
Formal proofs [18, 19, 22] have shown that each particle
converges to a point that is a weighted average between the
personal best and neighborhood best positions:
(yij(t) + ŷij(t)) / 2
This behavior supports Kennedy's proposal to replace the
entire velocity by
vij(t + 1) ∼ N((yij(t) + ŷij(t)) / 2, σ)
where
σ = |yij(t) − ŷij(t)|
The position update then simply becomes xij(t + 1) = vij(t + 1)
Alternative formulation:
vij(t + 1) = yij(t) if U(0, 1) < 0.5
vij(t + 1) = N((yij(t) + ŷij(t)) / 2, σ) otherwise
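Both bare-bones variants can be sketched as one sampling routine (the exploit_prob parameter is our name for the 0.5 coin flip of the alternative formulation; setting it to 0 gives the original version):

```python
import random

def barebones_velocity(y, y_hat, exploit_prob=0.0, rng=random):
    """Sample a bare-bones 'velocity' (used directly as the new
    position) per dimension from N((y_j + yhat_j)/2, |y_j - yhat_j|).
    With probability exploit_prob a dimension is instead copied
    straight from the personal best."""
    out = []
    for yj, yhj in zip(y, y_hat):
        if rng.random() < exploit_prob:
            out.append(yj)                       # exploit personal best
        else:
            out.append(rng.gauss((yj + yhj) / 2.0, abs(yj - yhj)))
    return out
```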
Hybrid Algorithms
Selection-based PSO
Reproduction in PSO
Mutation in PSO
Differential evolution based PSO
The offspring replaces the personal best: yi1(t + 1) = xi1(t + 1)
Disadvantage:
Parent particles are replaced even if the offspring has worse fitness
If f (xi1 (t + 1)) > f (xi1 (t)) (assuming a minimization problem),
replacement of the personal best with xi1 (t + 1) loses
important information about previous personal best positions
Solutions:
Replace parent with its offspring only if fitness of offspring is
better than that of the parent.
Mutation PSO
Mutate the global best position:
ŷ(t + 1) = ŷ′(t + 1) + η N(0, 1)
where ŷ′(t + 1) represents the unmutated global best position,
and η is referred to as a learning parameter
Mutate the components of position vectors: For each j, if
U(0, 1) < Pm , then component x′ij(t + 1) is mutated using [6]
x′ij(t + 1) = xij(t + 1) + N(0, σ) xij(t + 1)
where
σ = 0.1(xmax,j − xmin,j)
with
τ′ = 1/√(2 nx)
τ = 1/(2 nx)
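The component-wise Gaussian mutation can be sketched as follows (function and parameter names are ours; p_m plays the role of Pm):

```python
import random

def mutate_position(x, x_min, x_max, p_m=0.1, rng=random):
    # Component-wise Gaussian mutation: with probability p_m, perturb
    # x_j by N(0, sigma)*x_j, where sigma = 0.1*(x_max_j - x_min_j)
    # scales the noise to the width of the search domain.
    out = []
    for j, xj in enumerate(x):
        if rng.random() < p_m:
            sigma = 0.1 * (x_max[j] - x_min[j])
            xj = xj + rng.gauss(0.0, sigma) * xj
        out.append(xj)
    return out
```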
where
vi(t + 1) = |vi(t + 1)| r
r is a random vector with magnitude of one and angle
uniformly distributed from 0 to 2π, and
|vi(t + 1)| = N(0, (1 − c2)||yi(t) − ŷ(t)||2) if U(0, 1) > c1
|vi(t + 1)| = N(0, c2 ||yi(t) − ŷ(t)||2) otherwise
Advantages:
Instead of solving one high-dimensional problem, several
smaller problems are now solved
Fitness function is evaluated after each subpart of the context
vector is updated, allowing a finer-grained search
Improved accuracies have been obtained for many optimization
problems
Predator-Prey PSO
Competition introduced to balance exploration and exploitation
Uses a second swarm of predator particles:
Prey particles scatter (explore) by being repelled by the
presence of predator particles
Silva et al. [16] use only one predator to pursue the global best
prey particle
The velocity update for the predator particle is defined as
vp(t + 1) = r(ŷ(t) − xp(t))
where vp and xp are respectively the velocity and position
vectors of the predator particle, p
r ∼ U(0, Vmax,p)^nx
Vmax,p controls the speed at which the predator catches the
best prey
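The predator velocity update can be sketched as (function name is ours):

```python
import random

def predator_velocity(y_hat, x_p, v_max_p, rng=random):
    # v_p(t+1) = r * (yhat(t) - x_p(t)), with r ~ U(0, V_max,p) drawn
    # per dimension; V_max,p sets how fast the predator closes in on
    # the global best prey particle it pursues.
    return [rng.uniform(0.0, v_max_p) * (yh - xp)
            for yh, xp in zip(y_hat, x_p)]
```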
Life-Cycle PSO
vij(t + 1) = w vij(t) + c1 r1j(t)(yij(t) − xij(t)) + c2 r2j(t)(ŷij(t) − xij(t))
Multi-Start PSO
Craziness PSO
Repelling Methods
Binary PSO
H-Y. Fan.
A Modification to Particle Swarm Optimization Algorithm.
Engineering Computations, 19(7-8):970–989, 2002.
F. Heppner and U. Grenander.
A Stochastic Nonlinear Model for Coordinated Bird Flocks.
In S. Krasner, editor, The Ubiquity of Chaos. AAAS
Publications, 1990.
N. Higashi and H. Iba.
Particle Swarm Optimization with Gaussian Mutation.
In Proceedings of the IEEE Swarm Intelligence Symposium,
pages 72–79, 2003.
J. Kennedy and R.C. Eberhart.
Particle Swarm Optimization.
In Proceedings of the IEEE International Conference on Neural
Networks, pages 1942–1948, 1995.
C.W. Reynolds.
Flocks, Herds, and Schools: A Distributed Behavioral Model.
Computer Graphics, 21(4):25–34, 1987.
J.F. Schutte and A.A. Groenwold.
Sizing Design of Truss Structures using Particle Swarms.
Structural and Multidisciplinary Optimization, 25(4):261–269,
2003.
B.R. Secrest and G.B. Lamont.
Visualizing Particle Swarm Optimization - Gaussian Particle
Swarm Optimization.
In Proceedings of the IEEE Swarm Intelligence Symposium,
pages 198–204, 2003.