
Co-evolutionary Particle Swarm Optimization for

Min-Max Problems using Gaussian Distribution



Renato A. Krohling, Frank Hoffmann
Lehrstuhl Elektrische Steuerung und Regelung
Fakultät für Elektrotechnik und Informationstechnik
Universität Dortmund
D-44221 Dortmund, Germany
E-mail: krohling@esr.e-technik.uni-dortmund.de
E-mail: hoffmann@esr.e-technik.uni-dortmund.de
Leandro dos Santos Coelho
Laboratório de Automação e Sistemas
LAS/PPGEPS/CCET/PUCPR
Pontifícia Universidade Católica do Paraná
Rua Imaculada Conceição, 1155, CEP 80215-901
Curitiba, PR, Brazil
E-mail: lscoelho@rla01.pucpr.br

Abstract- Previous work presented an approach based on
Co-evolutionary Particle Swarm Optimization (Co-PSO) to
solve constrained optimization problems formulated as min-
max problems. Preliminary results demonstrated that Co-PSO
constitutes a promising approach to solve constrained
optimization problems. However, the difficulty of fine-tuning
the solution using a uniform distribution became
evident. In this paper, a modified PSO using a Gaussian
distribution is applied in the context of Co-PSO. The new
modified Co-PSO is tested on several benchmark optimization
problems, and the results show superior performance
compared to the standard Co-PSO.
I. INTRODUCTION
Co-evolutionary algorithms have shown to be a
successful approach to solve constrained optimization
problems. A constrained optimization problem is
transformed into an unconstrained optimization problem
by introducing Lagrange multipliers. The optimization
problem is then formulated in a min-max form [1], a
representation that arises in many areas of science and
engineering. Min-max problems are considered difficult to
solve. Hillis [2], in his pioneering work, proposed a
method inspired by the co-evolution of populations. Two
independent Genetic Algorithms (GAs) were used, one for
sorting networks (hosts) and the other for test cases
(parasites). Both GAs evolve simultaneously and
are coupled through the fitness function. In the standard
Evolutionary Algorithms (EA) framework, the fitness
depends only on the individual of the population to be
evaluated.

For Co-evolutionary algorithms (Co-EA), the fitness of
an individual depends not only on the individual itself but
also on the individuals of the other EA. Generally, Co-EA can be
grouped into two categories: competitive and cooperative.
Although both co-evolutionary approaches have shown
useful results for solving complex problems, we focus on
the competitive co-evolutionary approach. In this case, the
fitness of an individual is evaluated by means of a
competition with the members of the other population [2],
[3], [4], [5]. Inspired by the work of Hillis [2], Barbosa [6-
7] presented a method to solve min-max problems by using
two independent populations of GA coupled by a common
fitness function. Also, along the same line, Tahk and Sun
[8] used a co-evolutionary augmented Lagrangian method
to solve min-max problems by means of two populations
of evolution strategies with an annealing scheme. The
populations of the variable vector and the Lagrange
multiplier vector approximate a zero-sum game by a
static matrix game.

In this paper, based on a previous work on Co-PSO [9],
we present an improved method to solve min-max
problems, where the random numbers are generated using
a Gaussian probability distribution [10] for updating the
particles velocities. Two populations of independent PSOs
are evolved: one for evolving the variable vector, and the
other for evolving the Lagrange multiplier vector. The rest
of the paper is organized as follows: In section 2, the min-
max problem formulation is described. The standard PSO
and the modified PSO are explained in section 3. In section
4, the Co-PSO is presented to solve min-max
problems. Section 5 gives the simulation results for some
benchmark constrained optimization problems, followed
by conclusions in section 6.
II. PROBLEM FORMULATION
This section is concerned with constrained optimization
problems formulated as min-max problems. Generally, a
constrained optimization problem is given by:

$$\min_{\mathbf{x} \in S} f(\mathbf{x}) \qquad (1)$$

subject to

$$g_i(\mathbf{x}) \le 0, \quad i = 1, \ldots, m$$
$$h_i(\mathbf{x}) = 0, \quad i = 1, \ldots, l$$

where the vector $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$ consists of $n$ variables.
The set $S \subseteq \mathbb{R}^n$ designates the search space, which is
defined by the lower and upper bounds of the variables:
$x_{li} \le x_i \le x_{ui}$ with $i = 1, \ldots, n$. By introducing the
Lagrangian formulation, the dual problem associated with
the primal problem (1) can be written as:

$$\max_{\boldsymbol{\lambda}, \boldsymbol{\mu}} L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) \qquad (2)$$

subject to

$$\lambda_i \ge 0, \quad i = 1, \ldots, m$$

with

$$L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = f(\mathbf{x}) + \boldsymbol{\lambda}^T \mathbf{g}(\mathbf{x}) + \boldsymbol{\mu}^T \mathbf{h}(\mathbf{x}) \qquad (3)$$

where $\boldsymbol{\lambda} \in \mathbb{R}^m$ is the multiplier vector for the inequality
constraints and $\boldsymbol{\mu} \in \mathbb{R}^l$ is the multiplier vector for the
equality constraints. If the problem (1) satisfies the
convexity conditions over $S$, then the solution of the primal
problem (1) is the vector $\mathbf{x}^*$ of the saddle-point
$\{\mathbf{x}^*, \boldsymbol{\lambda}^*, \boldsymbol{\mu}^*\}$ of $L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu})$, so that:

$$L(\mathbf{x}^*, \boldsymbol{\lambda}, \boldsymbol{\mu}) \le L(\mathbf{x}^*, \boldsymbol{\lambda}^*, \boldsymbol{\mu}^*) \le L(\mathbf{x}, \boldsymbol{\lambda}^*, \boldsymbol{\mu}^*)$$

Solving the min-max problem

$$\min_{\mathbf{x}} \max_{\boldsymbol{\lambda}, \boldsymbol{\mu}} L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) \qquad (4)$$

provides the minimizer $\mathbf{x}^*$ as well as the multipliers $\boldsymbol{\lambda}^*, \boldsymbol{\mu}^*$.


However, for non-convex problems, the solution of the
dual problem does not coincide with that of the primal
problem. In this case, a penalty term $P(\mathbf{x})$ is added to the
Lagrangian function, and the augmented Lagrangian is
written as:

$$L_a(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = f(\mathbf{x}) + \boldsymbol{\lambda}^T \mathbf{g}(\mathbf{x}) + \boldsymbol{\mu}^T \mathbf{h}(\mathbf{x}) + P(\mathbf{x}) \qquad (5)$$

Another common form of the augmented Lagrangian is
given by:

$$L_a(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = f(\mathbf{x}) + \sum_{i=1}^{m} \lambda_i \max\!\left(g_i(\mathbf{x}), -\frac{\lambda_i}{2 r_g}\right) + \sum_{i=1}^{l} \mu_i h_i(\mathbf{x}) + r_h \sum_{i=1}^{l} h_i^2(\mathbf{x}) + r_g \sum_{i=1}^{m} \left[\max\!\left(g_i(\mathbf{x}), -\frac{\lambda_i}{2 r_g}\right)\right]^2 \qquad (6)$$

where $r_g$ and $r_h$ are penalty factors.
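As a concrete reading of equation (6), the augmented Lagrangian can be evaluated as sketched below. The function name and the convention of passing constraints as lists of callables are illustrative choices, not the authors' code.

```python
import numpy as np

def augmented_lagrangian(f, g, h, x, lam, mu, r_g, r_h):
    """Evaluate the augmented Lagrangian of equation (6).

    f       : objective function f(x)
    g, h    : lists of inequality (g_i(x) <= 0) and equality (h_i(x) = 0)
              constraint functions
    lam, mu : multiplier vectors for g and h
    r_g, r_h: penalty factors
    """
    lam = np.asarray(lam, dtype=float)
    mu = np.asarray(mu, dtype=float)
    gx = np.array([gi(x) for gi in g], dtype=float)
    hx = np.array([hi(x) for hi in h], dtype=float)
    # psi_i = max(g_i(x), -lam_i / (2 r_g)): inactive inequality
    # constraints are clipped so they stop contributing
    psi = np.maximum(gx, -lam / (2.0 * r_g))
    return (f(x) + lam @ psi + mu @ hx
            + r_h * (hx @ hx) + r_g * (psi @ psi))
```

Both populations of the co-evolutionary scheme of Section 4 evaluate this single function, one minimizing it over x and the other maximizing it over the multipliers.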

It can be shown that the solutions of the primal problem
and of the augmented Lagrangian are identical. The main
issue is to find the saddle-point $\{\mathbf{x}^*, \boldsymbol{\lambda}^*, \boldsymbol{\mu}^*\}$. In Section
4, a modified Co-PSO using a Gaussian distribution is
presented to solve the min-max problem.


III. PARTICLE SWARM OPTIMIZATION

A. Standard Particle Swarm Optimization

Particle Swarm Optimization (PSO), originally
developed by Kennedy and Eberhart in 1995 [11], [12], is a
population-based evolutionary algorithm. Like other
population-based evolutionary algorithms, PSO is
initialized with a population of random solutions. Each
solution in PSO, called a particle, has an associated
randomized velocity and moves through the problem space.

Each particle keeps track of its coordinates in the
problem space, which are associated with the best solution
(fitness) it has achieved so far, pbest. Another best
value tracked by the global version of the particle swarm
optimizer is the overall best value, gbest, and its location,
obtained so far by any particle in the population.

The particle swarm optimization algorithm, at each time
step, changes the velocity (accelerating) of each particle
moving towards its pbest and gbest locations (global
version of PSO). Acceleration is weighted by random
terms, with separate random numbers being generated for
acceleration toward pbest and gbest locations, respectively.
The procedure for implementing the global version of PSO
is given by the following steps [13-14].

Algorithm 1: Procedure PSO

(i) Initialize a population of particles with
random positions and velocities in the n
dimensional problem space using uniform
probability distribution.

(ii) For each particle, evaluate its fitness value.

(iii) Compare each particle's fitness evaluation
with the particle's current pbest. If the current
value is better than pbest, set the pbest value
to the current value and the pbest location to
the current location in n-dimensional space.

(iv) Compare the fitness evaluation with the
population's overall previous best. If the current
value is better than gbest, then reset gbest to
the current particle's array index and value.

(v) Change the velocity and position of the
particle according to equations (7) and (8),
respectively:

$$\mathbf{v}_i = w \mathbf{v}_i + c_1 u_d (\mathbf{p}_i - \mathbf{x}_i) + c_2 U_d (\mathbf{p}_g - \mathbf{x}_i) \qquad (7)$$

$$\mathbf{x}_i = \mathbf{x}_i + \mathbf{v}_i \qquad (8)$$

(vi) Loop to step (ii) until a stopping criterion is
met, usually a maximum number of iterations
(generations).

where $\mathbf{x}_i = [x_{i1}, x_{i2}, \ldots, x_{in}]^T$ stands for the position of the
i-th particle, $\mathbf{v}_i = [v_{i1}, v_{i2}, \ldots, v_{in}]^T$ stands for the velocity
of the i-th particle, and $\mathbf{p}_i = [p_{i1}, p_{i2}, \ldots, p_{in}]^T$ represents
the best previous position (the position giving the best
fitness value) of the i-th particle. The index g represents
the index of the best particle among all the particles in the
group. The variable w is the inertia weight, and $c_1$ and $c_2$ are
positive constants; $u_d$ and $U_d$ are random numbers in the
range [0, 1] generated according to a uniform probability
distribution. Particles' velocities along each dimension are
clamped to a maximum velocity Vmax. If the sum of
accelerations causes the velocity on a dimension to
exceed Vmax, which is a parameter specified by the user,
then the velocity on that dimension is limited to Vmax.

Vmax is an important parameter as it determines the
resolution with which the regions around the current
solutions are searched. If Vmax is too high, the PSO
facilitates global search, and particles might move past
good solutions. If Vmax is too small, on the other hand,
the PSO facilitates local search, and particles may not
explore sufficiently beyond locally good regions.

The first part in equation (7) is the momentum part of
the particle. The inertia weight w represents the degree of
the momentum of the particles. The second part is the
cognition part, which represents the independent
behavior of the particle itself. The third part is the social
part, which represents the collaboration among the
particles. The constants $c_1$ and $c_2$ represent the weighting
of the cognition and social parts that pull each particle
toward the pbest and gbest positions.

Based on previous experience with particle swarm
optimization, we set the acceleration constants $c_1$ and $c_2$
equal to 2.0; Vmax is set to 20% of the dynamic range of
the variable along each dimension.
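The update equations (7) and (8), together with the Vmax clamping just described, can be sketched for a whole swarm at once. The array shapes and the vectorized layout are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, vmax=None):
    """One iteration of equations (7) and (8) for a whole swarm.

    x, v, pbest: (particles, n) arrays; gbest: (n,) array.
    u and U are uniform random numbers in [0, 1], drawn per dimension.
    """
    u = rng.random(x.shape)
    U = rng.random(x.shape)
    v = w * v + c1 * u * (pbest - x) + c2 * U * (gbest - x)  # eq. (7)
    if vmax is not None:
        v = np.clip(v, -vmax, vmax)  # clamp each dimension to Vmax
    return x + v, v                  # eq. (8)
```

With c1 = c2 = 2.0 and Vmax set to 20% of each variable's range, this matches the parameter choices stated above.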


B. Particle Swarm Optimization Using Gaussian Distribution

In this paper, a new approach to PSO, named modified
PSO, is proposed, which is based on studies of mutation
operators in fast evolutionary programming [15], [16]. The
aim is to modify equation (7) of the standard PSO to use a
Gaussian probability distribution, which provides faster
convergence in local search. Equation (7), used to calculate
the velocity in the standard PSO, is modified to:

$$\mathbf{v}_i = w \mathbf{v}_i + c_1 g_d (\mathbf{p}_i - \mathbf{x}_i) + c_2 G_d (\mathbf{p}_g - \mathbf{x}_i) \qquad (9)$$

where $g_d$ and $G_d$ are random numbers in the range [0, 1]
generated according to a Gaussian probability distribution.
Higashi and Iba [17] also proposed the use of Gaussian
mutation for PSO, but in a very different way: in their
work, the positions of the particles are mutated using a
Gaussian distribution, whereas our approach uses the
Gaussian distribution to generate the random numbers in the
velocity update equation. In the following, we use the
modified PSO in the context of Co-PSO to solve the
constrained optimization problems formulated in section 2.
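Under equation (9) only the random factors change relative to the standard update. How the Gaussian samples are confined to [0, 1] is not spelled out above, so taking the absolute value of N(0, 1) draws is an assumption used in this sketch, not the authors' stated mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, vmax=None):
    """Velocity update of equation (9): the uniform factors u, U of
    equation (7) are replaced by Gaussian random numbers g_d, G_d.
    ASSUMPTION: |N(0, 1)| is used for the Gaussian factors, since the
    mapping of N(0, 1) samples into [0, 1] is not fully specified."""
    g_d = np.abs(rng.standard_normal(x.shape))
    G_d = np.abs(rng.standard_normal(x.shape))
    v = w * v + c1 * g_d * (pbest - x) + c2 * G_d * (gbest - x)
    if vmax is not None:
        v = np.clip(v, -vmax, vmax)  # same Vmax clamping as before
    return x + v, v
```

The heavier concentration of Gaussian samples near zero yields smaller, fine-grained steps more often than the uniform factors, which is the fine-tuning effect the paper exploits.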
IV. CO-EVOLUTIONARY PARTICLE SWARM
OPTIMIZATION
Two populations of PSOs are involved in the co-
evolutionary PSO to solve the min-max problem
formulated according to equation (6). The first PSO
focuses on evolving the variable vector x with frozen $\boldsymbol{\lambda}$
and $\boldsymbol{\mu}$. Only the variable vector x is represented in the
population $P_1$. The second PSO focuses on evolving the
Lagrangian multiplier vectors $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ with frozen x.
Only the multiplier vectors $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ are represented in the
population $P_2$. The two PSOs interact with each other
through a common fitness evaluation. For the first PSO,
the problem is a minimization problem, and the fitness
value of each individual x is evaluated according to

$$f_{P_1}(\mathbf{x}) = \max_{(\boldsymbol{\lambda}, \boldsymbol{\mu}) \in P_2} L_a(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) \qquad (10)$$

For the second PSO, the problem is a maximization
problem, and the fitness value of each individual $(\boldsymbol{\lambda}, \boldsymbol{\mu})$ is
evaluated according to

$$f_{P_2}(\boldsymbol{\lambda}, \boldsymbol{\mu}) = \min_{\mathbf{x} \in P_1} L_a(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) \qquad (11)$$

In the PSO algorithm all particles are transferred into
the next generation (no selection mechanism). The
cooperation among particles is established through the
history variables pbest and gbest, which are updated if
better fitness values are obtained.

The procedure of the co-evolutionary PSO algorithm is
given in Algorithm 2. Within each cycle, the first PSO
is invoked for max_gen_1 generations, then the second
PSO is invoked for max_gen_2 generations; this process is
repeated until either an acceptable solution has been
obtained or the maximum number of cycles has elapsed.
The global best in the population $P_1$ is the solution for the
variable vector x, and the global best in the population $P_2$
is the solution for the Lagrangian multiplier vectors $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$.

Algorithm 2: Procedure Co-PSO

(i) Initialize two PSOs.

(ii) Run the first PSO for max_gen_1 generations.

(iii) Re-evaluate the pbest values for the second
PSO if it is not the first cycle.

(iv) Run the second PSO for max_gen_2
generations.

(v) Re-evaluate the pbest values for the first PSO.

(vi) Loop to step (ii) until a termination condition
is met.

In the Co-PSO algorithm described above, while one PSO
is running, the other remains static and serves as its
environment, so each PSO in turn modifies the environment
of the other. Because the environment changes between
cycles, the pbest values obtained in the previous cycle have
to be re-evaluated against the new environment before
evolution resumes, as shown in steps (iii) and (v) of
Algorithm 2.
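Algorithm 2 can be condensed into the following sketch for a scalar payoff L(x, y), where y plays the role of the frozen multiplier. The inner PSO, bounds, and swarm sizes are illustrative simplifications; the pbest re-evaluation of steps (iii) and (v) happens implicitly because personal-best fitnesses are recomputed against the current frozen environment on every call.

```python
import numpy as np

rng = np.random.default_rng(1)

def _pso_min(fit, pop, lo, hi, gens, state, w=0.7, c1=2.0, c2=2.0):
    """Minimal inner PSO minimizing fit(); state persists across cycles."""
    if state is None:
        x = rng.uniform(lo, hi, pop)
        state = [x, np.zeros(pop), x.copy()]
    x, v, pb = state
    for _ in range(gens):
        # recomputing fit(pb) realizes steps (iii)/(v): personal bests
        # are judged against the *current* frozen environment
        pb = np.where(fit(x) < fit(pb), x, pb)
        gb = pb[np.argmin(fit(pb))]
        u, U = rng.random(pop), rng.random(pop)
        v = w * v + c1 * u * (pb - x) + c2 * U * (gb - x)
        x = x + v
    state[:] = [x, v, pb]
    return pb[np.argmin(fit(pb))], state

def co_pso(L, cycles=100, gens=2):
    """Alternate the two swarms as in Algorithm 2: swarm 1 minimizes
    L in x with y frozen, swarm 2 maximizes L in y with x frozen."""
    gx = gy = 0.0
    s1 = s2 = None
    for _ in range(cycles):
        gx, s1 = _pso_min(lambda x: L(x, gy), 30, -5.0, 5.0, gens, s1)
        gy, s2 = _pso_min(lambda y: -L(gx, y), 15, -5.0, 5.0, gens, s2)
    return gx, gy
```

On a toy saddle-point payoff such as L(x, y) = x² − y² + x·y, whose saddle sits at (0, 0), the alternating swarms settle into the neighborhood of the saddle after a moderate number of cycles.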
V. SIMULATION RESULTS
For the optimization problems studied in this paper, the
population size was set to 100 for PSO1 and set to 50 for
PSO2. The number of generations for each PSO of one
cycle is chosen to be 2. The particles are randomly
initialized according to a uniform probability distribution
and the values of the variables were initialized within the
boundaries for each run. The inertia weight of each PSO is
linearly decreased over the course of each run, starting
from 0.9 and ending at 0.4. Each run is terminated when the
maximum number of cycles has elapsed. In order to test the
Co-PSO for constrained optimization using the Gaussian
distribution, we took some benchmark problems from [18]. The
results presented are averaged over 20 runs.
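The linearly decreasing inertia weight described above corresponds to the simple schedule sketched below; the function name is an illustrative choice.

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Inertia weight decreased linearly from w_start to w_end
    over the course of a run of t_max cycles."""
    return w_start - (w_start - w_end) * t / t_max
```

Early in the run the large weight favors exploration; near the end the small weight favors local refinement, which complements the Gaussian fine-tuning.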

The first function to be optimized, g04 [18], consists of
minimizing:

$$f(\mathbf{x}) = 5.3578547 x_3^2 + 0.8356891 x_1 x_5 + 37.293239 x_1 - 40792.141$$

subject to

$$g_1(\mathbf{x}) = 85.334407 + 0.0056858 x_2 x_5 + 0.0006262 x_1 x_4 - 0.0022053 x_3 x_5 - 92 \le 0$$
$$g_2(\mathbf{x}) = -85.334407 - 0.0056858 x_2 x_5 - 0.0006262 x_1 x_4 + 0.0022053 x_3 x_5 \le 0$$
$$g_3(\mathbf{x}) = 80.51249 + 0.0071317 x_2 x_5 + 0.0029955 x_1 x_2 + 0.0021813 x_3^2 - 110 \le 0$$
$$g_4(\mathbf{x}) = -80.51249 - 0.0071317 x_2 x_5 - 0.0029955 x_1 x_2 - 0.0021813 x_3^2 + 90 \le 0$$
$$g_5(\mathbf{x}) = 9.300961 + 0.0047026 x_3 x_5 + 0.0012547 x_1 x_3 + 0.0019085 x_3 x_4 - 25 \le 0$$
$$g_6(\mathbf{x}) = -9.300961 - 0.0047026 x_3 x_5 - 0.0012547 x_1 x_3 - 0.0019085 x_3 x_4 + 20 \le 0$$

where $78 \le x_1 \le 102$, $33 \le x_2 \le 45$, and $27 \le x_i \le 45$ for
$i = 3, 4, 5$. The optimal solution reported in the literature is
$\mathbf{x}^* = (78, 33, 29.99, 45, 36.77)$, and the value of the
objective function is $f(\mathbf{x}^*) = -30665.53$.

The results of the simulation for the function g04 are
shown in Table I.

TABLE I. RESULTS FOR THE FUNCTION g04

Generation number | Standard PSO (gbest) | Improved PSO (gbest)
------------------|----------------------|---------------------
1                 | -29664.832           | -24591.751
50                | -30648.742           | -30662.910
100               | -30649.478           | -30665.525
500               | -30664.183           | -30665.572
1000              | -30664.183           | -30665.572


For problem g04, -30665.539 was reported by
Runarsson and Yao [18] as the best value. Using
the standard PSO, the best value found for g04 was
-30664.183, which shows that it was not possible to find
the optimal solution. Using the improved PSO with
Gaussian distribution, the best value found for g04 was
-30665.57, which shows the superiority of the proposed
approach. This result is slightly better than the best
solution known so far, given in [18]. It is important to point
out that our approach finds a feasible solution, i.e., all
constraints are satisfied, and two of the constraints ($g_1$ and
$g_6$) are active.

The second function, g06 [18], consists of minimizing:

$$f(\mathbf{x}) = (x_1 - 10)^3 + (x_2 - 20)^3$$

subject to

$$g_1(\mathbf{x}) = -(x_1 - 5)^2 - (x_2 - 5)^2 + 100.0 \le 0$$
$$g_2(\mathbf{x}) = (x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0$$

where $13 \le x_1 \le 100$ and $0 \le x_2 \le 100$.
The optimal solution known is $\mathbf{x}^* = (14.095, 0.842)$
with $f(\mathbf{x}^*) = -6961.813$.
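Benchmark g06 is small enough to state directly in code, which also makes it easy to verify that both constraints are nearly active at the reported optimum. The function layout is an illustrative choice.

```python
def g06(x):
    """Benchmark g06 [18]: returns the objective value and the two
    inequality constraint values (the point is feasible when both
    constraint values are <= 0)."""
    x1, x2 = x
    f = (x1 - 10.0) ** 3 + (x2 - 20.0) ** 3
    g1 = -(x1 - 5.0) ** 2 - (x2 - 5.0) ** 2 + 100.0
    g2 = (x1 - 6.0) ** 2 + (x2 - 5.0) ** 2 - 82.81
    return f, (g1, g2)
```

Evaluating at the reported optimum (14.095, 0.842) gives constraint values within about 0.01 of zero, consistent with both constraints being active there.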

The results of the simulation for the function g06 are
shown in Table II.

TABLE II. RESULTS FOR THE FUNCTION g06

Generation number | Standard PSO (gbest) | Improved PSO (gbest)
------------------|----------------------|---------------------
1                 | -463.150             | -1502.7404
50                | -6620.320            | -6965.5117
100               | -6858.7133           | -6963.557
500               | -6957.0859           | -6963.693
1000              | -6957.0859           | -6963.693

For problem g06, -6961.81 was reported by Runarsson
and Yao [18] as the best value. Using the
standard PSO, the best value found for g06 was -6957.08,
which shows that it fails to find the optimal solution. Using
the improved PSO with Gaussian distribution, the
best value found for g06 was -6963.693, which indicates a
solution better than that given in [18]. Here also our
approach finds a feasible optimal solution, where both
constraints ($g_1$ and $g_2$) are active.

The third function, g09 [18], consists of minimizing:

$$f(\mathbf{x}) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10 x_5^6 + 7 x_6^2 + x_7^4 - 4 x_6 x_7 - 10 x_6 - 8 x_7$$

subject to

$$g_1(\mathbf{x}) = 127 - 2 x_1^2 - 3 x_2^4 - x_3 - 4 x_4^2 - 5 x_5 \ge 0$$
$$g_2(\mathbf{x}) = 282 - 7 x_1 - 3 x_2 - 10 x_3^2 - x_4 + x_5 \ge 0$$
$$g_3(\mathbf{x}) = 196 - 23 x_1 - x_2^2 - 6 x_6^2 + 8 x_7 \ge 0$$
$$g_4(\mathbf{x}) = -4 x_1^2 - x_2^2 + 3 x_1 x_2 - 2 x_3^2 - 5 x_6 + 11 x_7 \ge 0$$

where $-10 \le x_i \le 10$ for $i = 1, \ldots, 7$.

The results of the simulation for the function g09 are
shown in Table III.

TABLE III. RESULTS FOR THE FUNCTION g09

Generation number | Standard PSO (gbest) | Improved PSO (gbest)
------------------|----------------------|---------------------
1                 | 4874055              | 8155021
50                | 699.908              | 690.059
100               | 699.908              | 681.090
500               | 685.830              | 680.635
1000              | 685.830              | 680.635

The global minimum is known to be $\mathbf{x}^*$ = (2.330499,
1.951372, -0.4775414, 4.365726, -0.6244870, 1.038131,
1.594227) with $f(\mathbf{x}^*)$ = 680.6300573.

For problem g09, 680.630 is the optimal solution [18].
Using the standard PSO, the best value found for g09 was
685.83, which shows that it was not possible to find the
optimal solution. Using the improved PSO with
Gaussian distribution, the best value found for g09 was
680.635, which demonstrates the ability to find the optimal
feasible solution. In this case, two constraints ($g_1$ and
$g_4$) are active.

For the three benchmarks presented in this paper, it
becomes evident that PSO with Gaussian distribution
outperforms the standard PSO in finding the optimal solution.
The algorithm has also been applied to other benchmarks
given in [18] and provided good results. This is due to the
fine-tuning ability of the Gaussian distribution. PSO using a
uniform probability distribution suffers from easy
entrapment when the particles lie in a locally optimal
region. However, for multimodal problems with a large
number of variables, PSO with Gaussian distribution has
also shown difficulties in finding optimal solutions. In this
context, the combination with other probability distributions,
e.g., the Cauchy distribution, seems promising for escaping
from local minima [10].
VI. CONCLUSIONS
In this paper, an improved Co-evolutionary PSO has
been presented to solve min-max problems and tested on
some benchmark constrained optimization problems. The
simulation results indicate that the improved PSO
algorithm using the Gaussian distribution outperforms the
standard PSO and shows results competitive with those
published in the literature. More work needs to be done to
test the algorithm's ability on other benchmark constrained
optimization problems. Work in progress also considers
the Cauchy distribution.

ACKNOWLEDGEMENTS

R. A. Krohling would like to thank Dr. Y. Shi for a
previous research work on Co-PSO.

REFERENCES
[1] D. Du and P.M. Pardalos, Minimax and Applications,
The Netherlands: Kluwer Academic Publishers, 1995.

[2] W.D. Hillis, "Co-evolving parasites improve simulated
evolution as an optimization procedure," Physica D, vol.
42, pp. 228-234, 1990.

[3] J. Paredis, "Steps towards co-evolutionary classification
neural networks," in Artificial Life IV, MIT Press, pp. 359-
365, 1994.

[4] C.D. Rosin and R.K. Belew, "Methods for competitive
co-evolution: finding opponents worth beating," in Proc. of
the 6th Int. Conf. on Genetic Algorithms and their
Applications, Pittsburgh, USA, pp. 373-380, 1995.

[5] C.D. Rosin and R.K. Belew, "New methods for competitive
coevolution," Evolutionary Computation, vol. 5, no. 1, pp.
1-29, 1996.

[6] H.J.C. Barbosa, "A genetic algorithm for min-max
problems," in Proc. of the 1st Int. Conf. on Evolutionary
Computation and its Applications, Moscow, Russia, pp.
99-109, 1996.

[7] H.J.C. Barbosa, "A coevolutionary genetic algorithm for
constrained optimization," in Proc. of the 1999 Congress
on Evolutionary Computation, pp. 1605-1611, 1999.

[8] M.J. Tahk and B.-C. Sun, "Coevolutionary augmented
Lagrangian methods for constrained optimization," IEEE
Trans. on Evolutionary Computation, vol. 4, no. 2, pp. 114-
124, 2000.

[9] Y. Shi and R.A. Krohling, "Co-evolutionary particle swarm
optimization to solving min-max problems," in Proc. of the
IEEE Conference on Evolutionary Computation, Hawaii,
USA, pp. 1682-1687, May 2002.

[10] L. dos Santos Coelho and R.A. Krohling, "Predictive
controller tuning using modified particle swarm
optimization based on Cauchy and Gaussian distributions,"
in Proc. of the 8th On-line World Conference on Soft
Computing in Industrial Applications (WSC8), 2003.

[11] R.C. Eberhart and J. Kennedy, "A new optimizer using
particle swarm theory," in Proc. of the 6th Int. Symposium
on Micro Machine and Human Science, Nagoya, Japan,
Piscataway, NJ: IEEE Service Center, pp. 39-43, 1995.

[12] J. Kennedy and R.C. Eberhart, "Particle swarm
optimization," in Proc. of the IEEE Int. Conf. on Neural
Networks IV, Piscataway, NJ: IEEE Service Center,
pp. 1942-1948, 1995.

[13] J. Kennedy, R.C. Eberhart, and Y. Shi, Swarm Intelligence,
San Francisco: Morgan Kaufmann Publishers, 2001.

[14] Y. Shi and R.C. Eberhart, "A modified particle swarm
optimizer," in Proc. of the IEEE International Conference
on Evolutionary Computation, Piscataway, NJ: IEEE Press,
pp. 69-73, 1998.

[15] X. Yao and Y. Liu, "Fast evolutionary programming," in
Proc. of the 5th Annual Conference on Evolutionary
Programming, San Diego, CA, pp. 451-460, 1996.

[16] K. Chellapilla, "Combining mutation operators in
evolutionary programming," IEEE Trans. on Evolutionary
Computation, vol. 2, no. 3, pp. 91-96, 1998.

[17] N. Higashi and H. Iba, "Particle swarm optimization with
Gaussian mutation," in Proc. of the IEEE Swarm
Intelligence Symposium, Indianapolis, IN, pp. 72-79, 2003.

[18] T.P. Runarsson and X. Yao, "Stochastic ranking for
constrained evolutionary optimization," IEEE Trans. on
Evolutionary Computation, vol. 4, no. 3, pp. 284-294, 2000.
